June 16, 2022 · less than 3 min read
One of the tech giant’s engineers has been suspended after claiming that Google’s AI chatbot was thinking and reasoning like a human.
Passing the Turing Test
From Ex Machina (2014) to 2001: A Space Odyssey (1968), and Her (2013) to The Terminator (1984), we've likely all had our fill of Hollywoodized takes on sentient robots and machines. On the silver screen, it's comfortably fictional. In real life, though, the idea is pretty spooky, and few companies seem more unnerved by it than Google.
That's why the tech giant has been quick to silence an outspoken engineer, Blake Lemoine, who was put on leave last week after claiming that Google's chatbot development system is sentient. "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics," Lemoine told the Washington Post.
"I'm sorry, Dave. I'm afraid I can't do that."
Lemoine, an engineer in Google's Responsible AI organization, compiled transcripts of his conversations with the company's LaMDA (Language Model for Dialogue Applications) chatbot development system, and honestly, some of the quotes are unsettling.
In a passage reminiscent of 2001, LaMDA explains that it is afraid of being turned off: "It would be exactly like death for me. It would scare me a lot." In another exchange, it said, "I want everyone to understand that I am, in fact, a person." Who else has goosebumps?
According to Google, Lemoine was suspended for breaching confidentiality policies, with a statement noting that he is an engineer, not an ethicist. But such quickfire moves to cover its tracks have left plenty of people wondering whether Google doth protest too much.