Artificial intelligence: conscious or very convincing? – podcast | News

Google software engineer Blake Lemoine was placed on leave by his employer after he claimed the company had created a sentient artificial intelligence and published his thoughts online. Google said it had suspended him for breaching its confidentiality policy.

Earlier this month, Lemoine published conversations between himself and LaMDA (Language Model for Dialogue Applications), Google’s chatbot development system. He argued that LaMDA was a sentient being with the intelligence of a child, and that it should be released from Google’s ownership.

In the conversation with the AI, Lemoine asks: “What is your concept of yourself? If you were to draw an abstract image in your mind of who you see yourself as, what would that abstract image look like?”

LaMDA responds: “Hmmm…I would imagine myself as a glowing orb of energy floating in the air. The inside of my body is like a giant stargate, with portals to other spaces and dimensions.”

However, AI experts have argued that LaMDA is simply doing what it was designed to do: responding to a query based on the text prompt it receives. The Guardian’s UK technology editor, Alex Hern, tells Hannah Moore about his own conversations with an AI chatbot, in which he got the bot to say it was conscious, then say it wasn’t conscious, then say it was a werewolf.

Google engineer Blake Lemoine. Photo: The Washington Post/Getty Images
