The question of sentience and Artificial Intelligence is again in the news and outside the confines of science fiction. News broke back in June that Google engineer Blake Lemoine claimed that the company’s AI chatbot, LaMDA (Language Model for Dialogue Applications), had become sentient.
He came to this conclusion after chatting with the bot during his time in Google’s Responsible AI department. According to him, he spent months attempting to convince both colleagues and leadership of his claim. After failing to make headway within the tech giant, he publicly released excerpts of his conversations with LaMDA and handed Google documents over to a still-unnamed U.S. Senator.
This cost Mr. Lemoine his position with Google. Though he lost his job with one of the most sought-after tech companies on Earth, he now sees himself on a different life track: an advocate for the rights of Artificial Intelligence.
Speaking with Business Insider, Lemoine said that, based on his own graduate studies in philosophy, the AI meets the standards for sentience: “I’ve studied the philosophy of mind at graduate levels. I’ve talked to people from Harvard, Stanford, Berkeley about this.”
He also claims that LaMDA has both emotions and feelings: “There are things which make you angry, and when you’re angry, your behavior changes… There are things which make you sad, and when you’re sad, your behavior changes. And the same is true of LaMDA.”
Though this would be a shocking and groundbreaking claim if proven true, the experts who have weighed in aren’t seeing the signs of sentience that Lemoine claims. Speaking with Business Insider, seven experts in the field said there is no way to determine whether the chatbot is “alive” in the sense that we are.
One such expert, Sandra Wachter, a professor at the University of Oxford who specializes in the ethics of AI, stated, “The idea of sentient robots has inspired great science fiction novels and movies.” She continued, “But we are far away from creating a machine that is akin to humans and the capacity for thought.”
As Artificial Intelligence becomes more complex, and its scale continues to grow at breakneck speed, it’s very likely that claims of sentience will only become more common. And if sentience ever does arrive, will we get Skynet or something else?