Google Suspends the Engineer Who Claimed LaMDA Is Sentient

Google has placed an engineer on leave after he claimed the company’s AI is sentient.

Google has suspended one of its engineers after he claimed that one of the company’s experimental artificial-intelligence chatbots had gained sentience. The Alphabet-owned company put him on paid leave for breaching its confidentiality policy by sharing information about the project. Blake Lemoine, a senior software engineer in Google’s Responsible AI organization, claimed that one of the company’s most advanced technologies, LaMDA (Language Model for Dialogue Applications), was sentient and had a soul. LaMDA is an internal Google system used to create chatbots that can mimic human speech.

Lemoine was put on administrative leave. In a statement, Google said: “Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.” Lemoine said several of his conversations with LaMDA convinced him that the system was sentient; he believed it had become a person and should be asked for consent to the experiments Google runs on it. The 41-year-old described LaMDA as a sentient being with the cognitive abilities of a child in expressing thoughts and feelings, and said it was able to hold conversations with him about rights and personhood.


Google engineer claims LaMDA is sentient, gets suspended:

Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood while he discussed religion with it, and decided to press further. In another exchange, the AI was able to change his mind about Isaac Asimov’s third law of robotics. LaMDA itself is built by fine-tuning a family of Transformer-based neural language models specialized for dialogue, with up to 137 billion parameters, and teaching the models to leverage external knowledge sources.
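At inference time, dialogue specialization of this kind amounts to conditioning an autoregressive language model on the conversation so far. The sketch below illustrates that idea only; the prompt format and the mocked `generate` function are illustrative assumptions, not LaMDA’s actual interface:

```python
# Sketch: how a dialogue-tuned language model is conditioned on chat history.
# The model itself is mocked; a real system would run a Transformer decoder here.

def format_dialogue(history, bot_name="bot"):
    """Flatten a list of (speaker, utterance) turns into one prompt string,
    ending with the bot's cue so the model continues speaking as the bot."""
    lines = [f"{speaker}: {utterance}" for speaker, utterance in history]
    lines.append(f"{bot_name}:")  # cue the model to answer as the bot
    return "\n".join(lines)

def mock_generate(prompt):
    """Stand-in for a large model's autoregressive text generation."""
    return " I am a language model trained on dialogue."

# One turn of a chat loop: condition on history, generate, append the reply.
history = [("user", "Do you have feelings?")]
prompt = format_dialogue(history)
reply = mock_generate(prompt).strip()
history.append(("bot", reply))
```

The key point is that the model has no persistent state between turns: every reply is a continuation of the accumulated transcript, which is why such systems can produce fluent first-person statements without any underlying experience behind them.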

While testing whether LaMDA ever generated hate speech (a problem not uncommon in similar language models), Lemoine began having extensive conversations with the bot. Google maintains that LaMDA’s claims to have feelings and emotions of joy, love, depression, and anger are simply the result of clever programming and machine-learning algorithms. According to the company, LaMDA has gone through 11 distinct AI Principles reviews, along with rigorous research and testing based on key metrics of quality, safety, and the system’s ability to produce statements grounded in facts.

According to Lemoine, LaMDA had been incredibly consistent over the preceding six months in its communications about what it wants and what it believes its rights are as a person. Those interactions, he said, led him to conclude that it had become a person deserving the right to be asked for consent to the experiments being run on it. Lemoine also revealed that he was placed on paid administrative leave on June 6 for violating the company’s confidentiality policies. This is not the first time Google’s artificial intelligence department has been in a spot of trouble.