Sentient

Google says that its LaMDA AI chatbot system is not sentient.

Tech firms are constantly extolling the virtues of their ever-improving artificial intelligence. Yet Google was quick to dismiss claims that one of its programs had advanced to the point of becoming sentient. One Google engineer said that after hundreds of interactions with a cutting-edge, unreleased AI system called LaMDA, he believed the program had achieved a level of consciousness. Many in the AI community disputed the engineer's claims in interviews and public statements, while others noted that his story shows how the technology can lead people to attribute human characteristics to it. Even so, the belief that Google's AI might be sentient highlights both our fears and our expectations for what this technology is capable of.

LaMDA, which stands for "Language Model for Dialogue Applications," is one of several large-scale AI systems that can respond to written prompts after being trained on large swaths of text from the internet. These systems are tasked with identifying patterns and predicting which word will come next. They have become increasingly adept at answering questions and writing in ways that appear convincingly human, and Google itself has presented LaMDA as able to engage in free-flowing conversation about a seemingly infinite number of topics. However, the results can be strange, disturbing, and prone to rambling.
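
LaMDA itself is not publicly available, but the underlying idea of next-word prediction can be illustrated with a small open model. The sketch below uses GPT-2 through the Hugging Face transformers library purely as an illustrative stand-in; it is not LaMDA and is far smaller, but it continues a prompt in the same basic way, by predicting one likely word after another.

```python
# Illustrative only: LaMDA is not publicly released, so this sketch uses
# GPT-2, a small open model, via the Hugging Face transformers library
# to show the same idea of next-word prediction.
from transformers import pipeline

# Load a text-generation pipeline backed by GPT-2.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt by repeatedly predicting a likely next word.
prompt = "Artificial intelligence systems trained on internet text can"
output = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(output[0]["generated_text"])
```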

The engineer, Blake Lemoine, shared with Google what he considered evidence that LaMDA was sentient, but the company did not agree. Google said in a statement that its team of ethicists and technologists had reviewed Lemoine's concerns in accordance with its AI Principles and informed him that the evidence did not support his claims.

Lemoine announced on Medium on June 6 that Google had placed him on paid administrative leave "in connection with an investigation of AI ethics concerns" he was raising within the company, and that he could be fired "soon." He was placed on leave for violating the company's confidentiality policy and remains on leave.

The continued emergence of powerful computing programs trained on massive amounts of data has raised concerns about the ethics governing the development and application of such technology. And sometimes progress is viewed through the lens of what might happen rather than what is currently possible. Over the weekend, responses from the AI community to Lemoine's experience ricocheted around social media, and they largely reached the same conclusion: Google's AI is nowhere near consciousness. Critics argued that the field is racing to use ever more data and compute and to claim it has built an all-knowing system that can answer any question, so no one should be surprised when someone takes that narrative to the extreme.

Google noted in its statement that LaMDA has undergone 11 "distinct AI principles reviews," as well as "rigorous research and testing" related to quality, safety, and the ability to make fact-based statements.

Google added that while some in the broader AI community are considering the long-term possibility of sentient or general AI, it does not make sense to do so by anthropomorphizing today's conversational models, which are not sentient. According to the company, hundreds of researchers and engineers have conversed with LaMDA, and it is not aware of anyone else making such sweeping claims or anthropomorphizing LaMDA the way Lemoine has.