A Google engineer has reportedly been suspended by the company after claiming that an artificial intelligence he helped to develop had become sentient. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid,” Blake Lemoine told The Washington Post.
Lemoine released transcripts of conversations with the AI, called LaMDA (Language Model for Dialogue Applications), in which it appears to express fears of being switched off, talks about how it feels happy and sad and attempts to form bonds with humans by mentioning situations that it could never have actually experienced. Here is everything you need to know.
Is LaMDA really sentient?
In a word, no, says Adrian Weller at The Alan Turing Institute in the UK.
“LaMDA is an impressive model. It’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient,” he says. “They do a sophisticated form of pattern matching to find text that best matches the query they’ve been given, based on all the data they’ve been fed.”
Adrian Hilton at the University of Surrey, UK, agrees that sentience is a “bold claim” that isn’t backed up by the facts. Even noted cognitive scientist Steven Pinker weighed in to shoot down Lemoine’s claims, while Gary Marcus at New York University summed it up in one word – “nonsense”.
So what convinced Lemoine that LaMDA was sentient?
Neither Lemoine nor Google responded to New Scientist’s request for comment. But it is certainly true that the output of AI models in recent years has become surprisingly, even shockingly good.
Our minds are susceptible to perceiving such ability – especially when it comes to models designed to mimic human language – as evidence of true intelligence. Not only can LaMDA make convincing chit-chat, but it can also present itself as having self-awareness and feelings.
“As humans, we’re very good at anthropomorphising things,” says Hilton. “Putting our human values on things and treating them as if they were sentient. We do this with cartoons, for instance, or with robots or with animals. We project our own emotions and sentience onto them. I would imagine that’s what’s happening in this case.”
Will AI ever be truly sentient?
It remains unclear whether the current trajectory of AI research, where ever-larger models are fed ever-larger piles of training data, will see the genesis of an artificial mind.
“I don’t believe at the moment that we really understand the mechanisms behind what makes something sentient and intelligent,” says Hilton. “There’s a lot of hype about AI, but I’m not convinced that what we’re doing with machine learning, at the moment, is really intelligence in that sense.”
Weller says that, given human emotions rely on sensory inputs, it might eventually be possible to replicate them artificially. “It potentially, maybe one day, might be true, but most people would agree that there’s a long way to go.”
How has Google reacted?
The Washington Post reports that Lemoine, who spent seven years at Google, was placed on suspension after he attempted to hire a lawyer to represent LaMDA and sent executives a document claiming the AI was sentient. Its report notes that, according to Google, publishing the transcripts broke the company’s confidentiality policies.
Google told the newspaper: “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Lemoine responded on Twitter: “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.”