A Google engineer claims an artificial intelligence (AI) chatbot he spent months testing is sentient, despite the company’s insistence that it is not. In an internal report shared with The Washington Post last week, Blake Lemoine, a senior software engineer with Google’s Responsible AI team, called the chatbot known as LaMDA (for Language Model for Dialogue Applications) “possibly the most intelligent man-made artifact ever created.”
“But is it sentient? We can’t answer that question definitively at this point, but it’s a question to take seriously,” Lemoine wrote in the report before sharing roughly 20 pages of questions and answers with LaMDA about its self-reported sentience online. In that chat transcript, which he also published on Medium, he probed the chatbot’s understanding of its own existence and consciousness.
Lemoine says he decided to go public with these conversations after they were reviewed and dismissed by Google executives. (The New York Times reports that “hundreds” of other Google researchers and engineers interacted with LaMDA and “reached a different conclusion than Mr. Lemoine did.”) He has since been placed on paid administrative leave for violating the company’s confidentiality policy, according to The Washington Post.
LaMDA is built on Transformer, a neural network architecture that can take in large amounts of data, identify patterns, and then learn from what it has received. In the case of LaMDA, being fed an extensive amount of text has given the chatbot the ability to participate in “free-flowing” conversations, Google said in a press release last year about the model, calling it a “breakthrough conversation technology.” The Washington Post reports CEO Sundar Pichai said that same year there were plans to embed LaMDA in many of the company’s offerings, including Search and Google Assistant.
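LaMDA itself is not publicly available, but the basic interaction pattern behind this kind of chatbot can be sketched with an openly released dialogue model. The example below is an illustration only, using microsoft/DialoGPT-medium through the Hugging Face transformers library as a stand-in: conversation text goes in, and the model samples a statistically plausible continuation.

```python
# Minimal sketch of chatting with a Transformer-based dialogue model.
# LaMDA is not publicly available, so this uses an open stand-in
# (microsoft/DialoGPT-medium) purely to illustrate the general pattern:
# the model receives conversation text and predicts a likely continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's message, appending the end-of-sequence token that
# this particular model uses to separate conversation turns.
prompt = "Do you ever think about what it means to exist?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# Sample a reply. The model is only predicting plausible next tokens,
# not reporting any inner experience.
reply_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```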
Experts outside of Google have largely aligned with the company’s conclusions about LaMDA, saying that current systems lack the capacity for sentience and instead produce a convincing mimicry of human conversation, exactly as they were designed to do.
“What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign language Scrabble players who use English words as point-scoring tools, without any clue about what they mean,” AI researcher and author Gary Marcus wrote in a Substack post citing a number of other experts.
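As a rough illustration of Marcus’s point, the sketch below uses GPT-2, an openly available language model (not LaMDA), to inspect what such a system actually computes: a ranking of statistically likely next words, with no model of the world attached to any of them.

```python
# Illustration of what a language model computes under the hood:
# a probability distribution over possible next tokens, nothing more.
# GPT-2 is used here only because it is openly available; LaMDA is not.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "I think, therefore I"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # scores for the next token

# Print the five most likely continuations and their probabilities.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r:>10}  {p.item():.3f}")
```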
There are immediate risks that come with training models like LaMDA, though, including “internalizing biases, mirroring hateful speech, or replicating misleading information,” as Google noted in that 2021 press release. According to The Washington Post, Lemoine’s work at the company has focused on tackling some of these issues; he developed a “fairness algorithm for removing bias from machine learning systems.”
However, he is not the first Google employee working in this space to voice concerns about the company’s AI work: in late 2020 and early 2021, the two co-leads of Google’s Ethical AI team said they were forced out after raising concerns about bias in the company’s language models.