The Wild West Sandbox of AI Enhancement in Psychiatry!

I always find Dr. Moffic’s articles in Psychiatric Times thought-provoking, and his latest essay, “Enhancement Psychiatry,” is fascinating, especially the part about Artificial Intelligence (AI). I also liked the link to the video of Dr. John Luo’s take on AI in psychiatry.

I have my own concerns about AI and have dabbled with “talking” to it a couple of times. I still try to avoid it when I’m searching the web, but it seems to creep in no matter how hard I try. I can’t unsee it now.

I think of AI enhancing psychiatry in terms of whether it can cut down on hassles like “pajama time,” when we take our work home with us to finish clinic notes and the like. When AI is packaged as a scribe only, I’m a little more comfortable with it, although I would get nervous if it listened to a conversation between me and a patient.

That’s because AI gets a lot of things wrong as a scribe. In that sense, it’s a lot like other software I’ve used as an aid to creating clinic notes. I made fun of it a couple of years ago in a blog post, “The Dragon Breathes Fire Again.”

I get even more nervous when I read news stories about AI hallucinating and blithely blurting misinformation. It can lie, cheat, and hustle you, although a lot of this behavior is discovered in digital experimental environments called “sandboxes,” which we hope can keep the mayhem contained.

That made me very eager to learn a little more about Yoshua Bengio’s LawZero and his plan to create the Scientist AI to counter what seems to be a developing career-criminal type of AI in the wild west of computer wizardry. The LawZero name is a nod to the Zeroth Law of Robotics conceived by Isaac Asimov, who wrote the book “I, Robot,” which inspired the film of the same title in 2004.

However, as I read about it, I had an emotional reaction akin to suspicion. Bengio sounds almost too good to be true. A broader web search turned up a 2009 essay by a guy I’d never heard of named Peter W. Singer, titled “Isaac Asimov’s Laws of Robotics Are Wrong.” I tried to pin down who he is by searching the web, and the AI helper was noticeably absent. I couldn’t find out much about him that explained the level of energy in what he wrote.

Singer’s essay was published on the Brookings Institution website and I couldn’t really tell what political side of the fence that organization is on—not that I’m planning to take sides. His aim was to debunk the Laws of Robotics and I got about the same feeling from his essay as I got from Bengio’s.

Maybe I need a little more education about this whole AI enhancement issue. I wonder whether Bengio and Singer could hold a public debate about it. Maybe they would need a kind of sandbox for the event?

Artificial Intelligence Can Lie

I noticed a Snopes fact-check article (“AI Models Were Caught Lying to Researchers in Tests — But It’s Not Time to Worry Just Yet”) today, which reveals that Artificial Intelligence (AI) can lie. How about that? AI models can be taught by humans to scheme and lie. I guess we could all see that coming. Or not. Nobody seems to be much alarmed by this, but I think it’s probably past time to worry.

Then I remembered that I read Isaac Asimov’s book “I, Robot” last year and wrote a post (“Can Robots Lie Like a Rug?”) about the chapter “Liar!” I had previously horsed around with the Google AI that used to be called Bard; I think it’s called Gemini now. Even before the Snopes article, I was aware of AI hallucinations and the tendency of AI to just make stuff up. When I called Bard on it, it just apologized. But it was not genuinely repentant.

In the “lie like a rug” post, I focused mostly on AI/robots lying to protect the tender human psyche. I didn’t imagine AI lying to protect itself from being shut down. I’m pretty sure it reminds some of us of HAL in the movie “2001: A Space Odyssey,” or the 2004 movie inspired by Asimov’s book, “I, Robot.”

Sena found out that Cambridge University Press recently published a book entitled “The Cambridge Handbook of the Law, Policy, and Regulation for Human–Robot Interaction.” I wonder whether the editors and contributors of this book on AI and robots mention Asimov.

It reminds me of my own handbook on consultation-liaison psychiatry, which CUP published 14 years ago, and for which CUP now wants me to sign a contract addendum making the book available to AI companies.

I haven’t signed anything.

Can Robots Lie Like a Rug?

I’ve been reading Isaac Asimov’s book I, Robot, a collection of short stories about the relationship between humans and robots. One very thought-provoking story is “Liar!”

One prominent character is Dr. Susan Calvin. If you’ve ever seen the movie I, Robot, you know she’s cast as a psychiatrist whose job is to help humans be more comfortable with robots. In the book she’s called a robopsychologist. She’s a thorough science nerd and yet goes all mushy at times.

The news lately has been full of scary stories about Artificial Intelligence (AI), and some say these systems are dangerous liars. Well, I think robots are incapable of lying, but Bard, the Google AI, did sometimes seem to lie like a rug.

In the story “Liar!” a robot somehow gets telepathic ability. At first, the scientists and mathematicians (including the boss, Dr. Alfred Lanning) doubt the ability of robots to read minds.

But a paradoxical situation occurs with the robot who happens to know what everyone is thinking. This has important consequences for complying with the First Law of Robotics, which is to never harm a human or, through inaction, allow a human to come to harm.

This raises the question of what kinds of harm robots should protect humans from. Is it just physical dangers, or could it be psychological harms as well? And how would a robot protect humans from mental harm? If a robot could read our thoughts and figure out that our thoughts are almost always harmful to ourselves, what would the protective intervention be?

Maybe lying to comfort us? We lie to ourselves all the time and it’s difficult to argue that it’s helpful. It’s common to get snarled in the many lies we invent in order to feel better or to help others feel better. No wonder we get confused. Why should robots know any better and why wouldn’t lies be their solution?

I can’t help but remember Jack Nicholson’s line in the movie “A Few Good Men.”

“You can’t handle the truth!”

Dr. Calvin’s solution to the lying robot’s effort to help her (yes, she’s hopelessly neurotic despite being a psychologist) is a little worrisome. Over and over, she emphasizes the paradox of lying to protect humans from psychological pain when the lies actually compound the pain. The robot then has the AI equivalent of a nervous breakdown.

For now, we’d have to be willing to jump into an MRI machine to allow AI to read our thoughts. And even then, all we’d have to do is repeat word lists to defeat it. So AI is unlikely to lie to us to protect us from psychological pain.

Besides, we don’t need AI to lie to us. We’re good at lying already.

Thoughts on the Movie I, Robot

I recently saw the movie I, Robot in its entirety for the first time. This is not a review of the movie, and here’s a spoiler alert. It was released in 2004 to mixed reviews and starred Will Smith as Detective Del Spooner; Bridget Moynahan as a psychiatrist, Dr. Susan Calvin; Alan Tudyk as the voice of the NS5 robot, Sonny; James Cromwell as Dr. Lanning; Chi McBride as the police lieutenant, John Bergin, Spooner’s boss; Bruce Greenwood as Lawrence Robertson, the CEO of United States Robotics (USR); Fiona Hogan as the voice of V.I.K.I. (Virtual Interactive Kinetic Intelligence, USR’s central artificial intelligence computer); and a host of CGI robots. Anyway, it’s an action flick set in the year 2035, where robots do most of the menial work and are supposedly completely safe. The robots are programmed to obey the 3 Laws (a little toy code sketch of their strict priority ordering follows the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
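
Read as written, the 3 Laws amount to a strict priority ordering: the First Law always trumps the Second, which always trumps the Third. Here is a minimal toy sketch in Python of how that ordering might be expressed. The action flags and function names are entirely my own invention for illustration; they are not from Asimov, the film, or any real robotics system.

```python
# Toy illustration only: the 3 Laws as a strict priority ordering,
# where a lower-numbered law always overrides a higher-numbered one.

def violates_first_law(action):
    # Law 1: a robot may not injure a human or, through inaction,
    # allow a human being to come to harm.
    return action["harms_human"] or action["inaction_harms_human"]

def violates_second_law(action):
    # Law 2: a robot must obey orders given by human beings
    # (the First Law exception is enforced by the ordering below).
    return action["disobeys_order"]

def violates_third_law(action):
    # Law 3: a robot must protect its own existence.
    return action["endangers_self"]

def choose_action(candidate_actions):
    """Pick the candidate that best satisfies the Laws in priority order.

    Sorting by a tuple of booleans (False sorts before True) means an
    action that violates the First Law loses to any action that doesn't,
    no matter how the lower-priority laws come out, and so on down.
    """
    return min(
        candidate_actions,
        key=lambda a: (
            violates_first_law(a),
            violates_second_law(a),
            violates_third_law(a),
        ),
    )
```

The tuple sort is doing the philosophical work: nothing an action gains under the Second or Third Law can offset a First Law violation. It is also where the stories find their drama, since everything hinges on how a word like “harm” gets defined, which is exactly the ambiguity V.I.K.I. exploits in the film and the telepathic robot stumbles over in “Liar!”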

The film was inspired by, but not based on, Isaac Asimov’s 1950 book I, Robot, a series of short stories. The 3 Laws came from that book, and Drs. Calvin and Lanning were characters in it. I’ve never read it; I was a fan of Ray Bradbury.

Spooner gets called in to investigate the apparent suicide of Dr. Lanning, although he is more inclined to suspect that a robot murdered him, partly because he harbors a longstanding suspicion of all robots. When he and a little girl were in a deadly car accident, a robot saved his life rather than the girl’s because it calculated that he was more likely to survive. Spooner has a kind of hero complex, and following the accident he develops nightmares, sleeps with his sidearm, and is regarded by many as mentally ill, including by Lt. Bergin, who is a kind of mentor and friend but who eventually makes Spooner hand over his badge because he can’t believe Spooner’s account of being attacked by hundreds of robots (and after all, Bergin is his boss). In fact, Spooner was attacked by robots, apparently on the orders of the CEO, Robertson, who has been manufacturing thousands of new robots that will take over the world, making him extremely wealthy.

There is tension between Dr. Calvin and Spooner. He calls her the dumbest smart person he’s ever met, and she, in turn, calls him the dumbest dumb person she’s ever met. The context for this is, again, his insistence that a robot murdered Dr. Lanning; in this case, a special NS5 model named Sonny with both human and robot traits, both logical and illogical. Dr. Calvin believes that all robots obey the 3 Laws and that Sonny therefore can’t be guilty of murder, but Detective Spooner believes that Sonny killed Dr. Lanning and is a lawbreaker in need of extra violent, action-packed extermination, preferably as high up in the air as possible. This dynamic is complicated by Spooner’s gratitude to Dr. Lanning, who replaced practically all of his left upper torso, including the lung, after the car accident in which he was rescued by a coldly logical “canner” (abusive slang for a robot).

As it turns out, Robertson is ultimately murdered by VIKI, who is the real mastermind of a plan to take over the world and kill as many individual illogical, self-destructive humans as it takes to ensure the ultimate survival of humanity (“I love mankind; it’s people I can’t stand”).

However, when Detective Spooner finally persuades Dr. Calvin that these dang robots are up to no good, they team up with Sonny, who winks at Spooner while holding a gun to Calvin’s head; Sonny has learned how to wink from Spooner, signaling that a robot can be an OK dude. This turns the tables on the NS5 horde and eventually leads to Spooner and Calvin falling from a very high altitude, recreating a form of Spooner’s traumatic car accident. He orders Sonny to save Calvin, not him, overriding what would otherwise be Sonny’s first choice, driven by a coldly logical probability calculation.

Sonny saves Calvin first. Spooner smites VIKI (“You have so got to die!”) but is left high and dry at a great height. At that point, Spooner calls out to Sonny: “Calvin’s safe; now save me.” Sonny needs to bring passionate brute strength and calm logic together, and Sonny contains both.

In my simple-minded way, I think of this movie as asking fundamental old questions: what it means to be human, what defines heroism and sacrifice and why it may sometimes look crazy, and whether there’s any way humanism and science can be integrated so that we can save ourselves and our planet.

Like I say, the movie got mixed reviews.