I’ve been reading Isaac Asimov’s book I, Robot, a collection of short stories about the relationship between humans and robots. One very thought-provoking story is “Liar!”
One prominent character is Dr. Susan Calvin. If you’ve ever seen the movie I, Robot, you know she’s cast as a psychiatrist whose job is to help humans be more comfortable with robots. In the book she’s called a robopsychologist. She’s a thorough science nerd and yet goes all mushy at times.
The news lately has been full of scary stories about Artificial Intelligence (AI), and some say these systems are dangerous liars. Well, I think robots are incapable of lying, but Bard, Google’s AI chatbot, did sometimes seem to lie like a rug.
In the story “Liar!” a robot somehow gets telepathic ability. At first, the scientists and mathematicians (including the boss, Dr. Alfred Lanning) doubt the ability of robots to read minds.
But a paradoxical situation arises for a robot who knows what everyone is thinking. This has important consequences for complying with the First Law of Robotics, which is to never harm a human or, through inaction, allow a human to come to harm.
The question arises: what kinds of harm should robots protect humans from? Is it just physical danger, or psychological harm as well? And how would a robot protect humans from mental harm? If a robot could read our thoughts and discover that they are almost always harmful to us, what would the protective intervention be?
Maybe lying to comfort us? We lie to ourselves all the time, and it’s difficult to argue that it’s helpful. It’s common to get snarled in the many lies we invent in order to feel better or to help others feel better. No wonder we get confused. Why should robots know any better, and why wouldn’t lies be their solution?
I can’t help but remember Jack Nicholson’s line in the movie “A Few Good Men.”
“You can’t handle the truth!”
Dr. Calvin’s response to the lying robot’s effort to help her (yes, she’s hopelessly neurotic despite being a psychologist) is a little worrisome. Over and over, she confronts it with the paradox of lying to protect humans from psychological pain when the lies actually compound the pain. The robot then has the AI equivalent of a nervous breakdown.
For now, we’d have to be willing to climb into an MRI machine to let an AI read our thoughts, and even then we could defeat it just by repeating word lists. So AI is unlikely to lie to us to protect us from psychological pain anytime soon.
Besides, we don’t need AI to lie to us. We’re good at lying already.
