The Wild West Sandbox of AI Enhancement in Psychiatry!

I always find Dr. Moffic’s articles in Psychiatric Times thought-provoking, and his latest essay, “Enhancement Psychiatry,” is fascinating, especially the part about Artificial Intelligence (AI). I also liked the link to the video of Dr. John Luo’s take on AI in psychiatry.

I have my own concerns about AI and have dabbled with “talking” to it a couple of times. I still try to avoid it when I’m searching the web, but it seems to creep in no matter how hard I try. I can’t unsee it now.

I think of AI enhancing psychiatry in terms of whether it can cut down on hassles like “pajama time,” when we take our work home with us to finish clinic notes and the like. When AI is packaged as a scribe only, I’m a little more comfortable with it, although I would get nervous if it listened to a conversation between me and a patient.

That’s because AI gets a lot of things wrong as a scribe. In that sense, it’s a lot like other software I’ve used as an aid to creating clinic notes. I made fun of it a couple of years ago in a blog post, “The Dragon Breathes Fire Again.”

I get even more nervous when I read the news stories about AI having delusions and blithely blurting misinformation. It can lie, cheat, and hustle you, although a lot of this behavior is discovered in digital experimental environments called “sandboxes,” which we hope can keep the mayhem contained.
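As I understand it, a sandbox is just a walled-off place to run things you don’t trust. Here’s a toy Python sketch of the idea, my own invention rather than any real lab’s safety setup: it runs “untrusted” code in a separate process with a time limit and a memory cap, so the mayhem (in theory) stays inside the walls.

```python
# A toy illustration of the sandbox idea (mine, not any real lab's setup):
# run untrusted, AI-generated code in a separate process with a CPU-time
# limit and a memory cap. Real sandboxes also isolate the filesystem and
# network (containers, virtual machines, and so on). Unix-only APIs here.
import resource
import subprocess
import sys

def cap_resources():
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))               # 2 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024**2,) * 2)  # 256 MB

untrusted_code = "print(sum(range(10)))"  # pretend an AI wrote this

try:
    result = subprocess.run(
        [sys.executable, "-c", untrusted_code],
        capture_output=True, text=True,
        timeout=5,                 # kill the child if it hangs
        preexec_fn=cap_resources,  # apply the caps inside the child process
    )
    print(result.stdout.strip())   # -> 45
except subprocess.TimeoutExpired:
    print("contained: the untrusted code ran too long and was killed")
```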

That made me very eager to learn a little more about Yoshua Bengio’s LawZero and his plan to create a “Scientist AI” to counter what seems to be a developing career-criminal type of AI in the wild west of computer wizardry. The name LawZero nods to the “Zeroth Law” of robotics coined by Isaac Asimov, who wrote the book “I, Robot,” which inspired the film of the same title in 2004.

However, as I read it, I had an emotional reaction akin to suspicion. Bengio sounds almost too good to be true. A broader web search turned up a 2009 essay by a guy I’d never heard of named Peter W. Singer. It’s titled “Isaac Asimov’s Laws of Robotics Are Wrong.” I tried to pin down who he is by searching the web, and the AI helper was noticeably absent. I couldn’t find out much about him that explained the level of energy in what he wrote.

Singer’s essay was published on the Brookings Institution website, and I couldn’t really tell which political side of the fence that organization is on (not that I’m planning to take sides). His aim was to debunk the Laws of Robotics, and I got about the same feeling from his essay as I got from Bengio’s.

Maybe I need a little more education about this whole AI enhancement issue. I wonder whether Bengio and Singer could hold a public debate about it. Maybe they would need a kind of sandbox for the event?

Artificial Intelligence Can Lie

I noticed a Snopes fact-check article (“AI Models Were Caught Lying to Researchers in Tests — But It’s Not Time to Worry Just Yet”) today, which reveals that Artificial Intelligence (AI) can lie. How about that? AI models can be taught by humans to scheme and lie. I guess we could all see that coming. Or not. Nobody seems to be much alarmed by this, but I think it’s probably past time to worry.

Then I remembered that I read Isaac Asimov’s book “I, Robot” last year and wrote a post (“Can Robots Lie Like a Rug?”) about the chapter “Liar!” I had previously horsed around with the Google AI that used to be called Bard. I think it’s called Gemini now. Even before the Snopes article, I was aware of AI hallucinations and the tendency of AI to just make stuff up. When I called Bard on it, it just apologized. But it was not genuinely repentant.

In the “lie like a rug” post, I focused mostly on AI/robots lying to protect the tender human psyche. I didn’t imagine AI lying to protect itself from being shut down. I’m pretty sure it reminds some of us of HAL in the movie “2001: A Space Odyssey,” or the 2004 movie inspired by Asimov’s book, “I, Robot.”

Sena found out that Cambridge University Press recently published a book entitled “The Cambridge Handbook of the Law, Policy, and Regulation for Human–Robot Interaction.” I wonder if the editors and contributors of this book on AI and robots mention Asimov.

It reminds me of my own handbook about consultation-liaison psychiatry, which was published 14 years ago by CUP, and for which CUP now wants me to sign a contract addendum making the book available to AI companies.

I haven’t signed anything.

We Are All Still Learning to Play Pong

I noticed an article the other day about Monash University in Australia getting funding for further research into growing brain cells onto silicon chips and teaching them how to play cribbage.

Just kidding; the research is about teaching the modified brain cells tasks. Last year, they succeeded in teaching them goal-directed tasks, like how to play the tennis-like game Pong. You remember Pong from the 1970s? Shame on you if you don’t. On the other hand, that means you probably didn’t frequent any beer taverns in your hometown while you were growing up, or you’re just too young to remember.

The new research program, run with the Melbourne startup Cortical Labs, has hundreds of thousands of dollars in funding. The head of the program, Dr. Razi, says it combines Artificial Intelligence (AI) and synthetic biology to make programmable biological computing platforms, which will take over the world and bring back Pong!
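From what I’ve read, the trick in the Pong experiments was a closed feedback loop: the cells get orderly, predictable stimulation when the paddle connects and disordered noise when they miss, and over time they act to make their little world more predictable. Here’s a cartoon of that loop in Python; the three-position game, the numbers, and the update rule are all my own inventions for illustration, not the published protocol.

```python
# A cartoon of closed-loop feedback learning, loosely in the spirit of the
# Pong experiments: success earns "predictable" feedback that reinforces a
# choice, failure earns "noise" that weakens it. Purely illustrative.
import random

random.seed(0)

# learned preferences: for each observed ball position, how strongly the
# learner favors each of three paddle positions
prefs = [[1.0, 1.0, 1.0] for _ in range(3)]

def play_round():
    ball = random.randrange(3)
    paddle = random.choices([0, 1, 2], weights=prefs[ball])[0]
    hit = (paddle == ball)
    # hit -> orderly feedback strengthens the choice; miss -> weaken it
    prefs[ball][paddle] += 1.0 if hit else -0.3
    prefs[ball][paddle] = max(prefs[ball][paddle], 0.1)
    return hit

for block in range(5):
    hits = sum(play_round() for _ in range(300))
    print(f"block {block}: {hits}/300 hits")  # climbs well above chance (~100)
```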

It’s an ambitious project. The motto of Monash University is Ancora Imparo, which is Italian for “I am still learning.” It links humility and perseverance.

There’s a lot of suspicion out there about AI and projects like the Pong initiative in Australia. It could eventually grow into a vast industry run by robots that will run on a simple fuel called Vegemite.

Shame on you if you don’t know what Vegemite is!

Anyway, it reminds me that I recently finished reading Isaac Asimov’s book of science fiction short stories, “I, Robot.”

The last two stories in the book are intriguing. Both “Evidence” and “The Evitable Conflict” are generally about the conflict between humans and AI, which is currently a big controversy.

The robopsychologist, Dr. Susan Calvin, is very much on the side of AI (I’m going to use the term synonymously with robot) and thinks a robot politician would be preferable to a human one because of the requirement for the AI to adhere to the 3 Laws of Robotics, especially the first one, which says AI can never harm a human or, through inaction, allow a human to come to harm.

In the story “Evidence,” a politician named Stephen Byerley is suspected of being a robot by his opponent. The opponent tries to legally force Byerley to eat Vegemite (joke alert!) to prove the accusation. This is based on the idea that robots can’t eat. This leads to an examination of the argument about who would make better politicians: robots or humans. Byerley at one point asks Dr. Calvin whether robots are really so different from men, mentally.

Calvin retorts, “Worlds different… Robots are essentially decent.” She and Dr. Alfred Lanning and the other characters are always cranky with each other. They stare savagely at one another and yank at their mustaches so hard you wonder if a mustache will eventually be ripped from a face. That doesn’t happen to Calvin; she doesn’t have a mustache.

At any rate, Calvin draws parallels between robots and humans that render them almost indistinguishable from each other. Human ethics, the drive for self-preservation, and respect for authority, including the law, make us very much like robots, such that being a robot could imply being a very good human.

Wait a minute. Most humans behave very badly, right down to exchanging savage stares with each other.

The last story, “The Evitable Conflict,” was difficult to follow, but the bottom line seemed to be that the Machine, a major AI that is always learning, controls not just goods and services for the world but the social fabric as well, while keeping this a secret from humans so as not to upset them.

The end result is that the economy is sound, peace reigns, the Vegemite supply is secure, and humans always win the annual Pong tournaments.

Can Robots Lie Like a Rug?

I’ve been reading Isaac Asimov’s book I, Robot, a collection of short stories about the relationship between humans and robots. One very thought-provoking story is “Liar!”

One prominent character is Dr. Susan Calvin. If you’ve ever seen the movie I, Robot, you know she’s cast as a psychiatrist whose job is to help humans be more comfortable with robots. In the book she’s called a robopsychologist. She’s a thorough science nerd and yet goes all mushy at times.

The news lately has been full of scary stories about Artificial Intelligence (AI), and some say these systems are dangerous liars. Well, I think robots are incapable of lying, but Bard, the Google AI, did sometimes seem to lie like a rug.

In the story “Liar!” a robot somehow gets telepathic ability. At first, the scientists and mathematicians (including the boss, Dr. Alfred Lanning) doubt the ability of robots to read minds.

But a paradoxical situation occurs with the robot who happens to know what everyone is thinking. This has important consequences for complying with the First Law of Robotics, which is to never harm a human or, through inaction, allow a human to come to harm.

The question arises of what kinds of harmful things robots should protect humans from. Is it just physical dangers, or could it be psychological harms as well? And how would a robot protect humans from mental harm? If a robot could read our thoughts and figure out that our thoughts are almost always harmful to ourselves, what would be the protective intervention?

Maybe lying to comfort us? We lie to ourselves all the time, and it’s difficult to argue that it’s helpful. It’s common to get snarled in the many lies we invent in order to feel better or to help others feel better. No wonder we get confused. Why should robots know any better, and why wouldn’t lies be their solution?

I can’t help but remember Jack Nicholson’s line in the movie “A Few Good Men.”

“You can’t handle the truth!”

Dr. Calvin’s solution to the lying robot’s effort to help her (yes, she’s hopelessly neurotic despite being a psychologist) is a little worrisome. Over and over, she emphasizes the paradox of lying to protect humans from psychological pain when the lies actually compound the pain. The robot then has the AI equivalent of a nervous breakdown.

For now, we’d have to be willing to jump into an MRI machine to allow AI to read our thoughts. And even then, all we’d have to do is repeat word lists to defeat the AI. So, it’s unlikely to lie to us to protect us from psychological pain.

Besides, we don’t need AI to lie to us. We’re good at lying already.

I’m Reading Isaac Asimov’s Book “I, Robot”

I just got a copy of Isaac Asimov’s book “I, Robot” the other day. I’ve been thinking about reading it ever since seeing the movie “I, Robot.” As the movie opens, you see the disclaimer saying that the movie was “…inspired by but not based…” on Asimov’s book of the same name.

In fact, the book is a collection of short stories about robots, and in the first one, entitled “Robbie,” I saw the names of several characters who were transplanted from the book into the movie: Susan Calvin (the psychiatrist), Alfred Lanning, and Lawrence Robertson.

Robbie is the name of the robot who has a special, protective relationship with the 8-year-old daughter of parents who don’t agree about whether Robbie is a positive influence on the girl.

The first of the 3 Laws of Robotics is mentioned in “Robbie.” It is central to the close bond between the little girl and Robbie. All 3 are below:

First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
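Reading the 3 Laws the way a programmer might, they amount to a strict priority ordering over what a robot may do. Here’s a toy Python sketch of that reading; the Action fields and helper functions are my own invented illustration, not anything from Asimov or a real robotics system.

```python
# A toy sketch of the Three Laws as a priority ordering over candidate
# actions. Everything here (the Action fields, law_violated, choose) is
# invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False           # injures a human, by act or inaction
    disobeys_human_order: bool = False
    endangers_robot: bool = False

def law_violated(a: Action) -> int:
    """Number of the highest-priority Law the action breaks (0 = none)."""
    if a.harms_human:
        return 1
    if a.disobeys_human_order:
        return 2
    if a.endangers_robot:
        return 3
    return 0

def choose(options: list[Action]) -> Action:
    # Lower severity is better: breaking no Law beats breaking the Third,
    # which beats breaking the Second, which beats breaking the First.
    severity = {0: 0, 3: 1, 2: 2, 1: 3}
    return min(options, key=lambda a: severity[law_violated(a)])

# Robbie's dilemma in miniature: standing by lets the child come to harm
# (a First Law violation via inaction), while rescuing her risks only the
# robot itself (Third Law). The First Law wins.
stand_by = Action("do nothing", harms_human=True)
rescue = Action("pull Gloria out of the way", endangers_robot=True)
print(choose([stand_by, rescue]).description)  # pull Gloria out of the way
```

Even this cartoon shows why the stories get interesting: all the drama lives in deciding what counts as “harm,” which a Boolean field conveniently hides.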

I just started reading the book. I read a few of the negative reviews on Amazon because when most reviews are effusively positive, it’s difficult to get a balanced view of what the flaws might be. One person called it an “old chestnut” and gave it only 2 stars. Another reader was put off by the old-fashioned portrayal of the relationship between men and women.

Well, after all, the book was published in 1950. A description of the relationship between the husband and wife goes like this:

And yet he loved his wife—and what’s worse, his wife knew it. George Weston, after all, was only a man—poor thing—and his wife made full use of every device which a clumsier and more scrupulous sex has learned, with reason and futility, to fear.

I’m not at liberty to comment about this.

Moving right along, the story addresses the fear people had of robots, a fear many of us still have now, in the age of Artificial Intelligence (AI). We tend to forget that today’s AI is not independent the way Virtual Interactive Kinetic Intelligence (VIKI) is in the movie I, Robot. Why does it have a female name?

Talk about the stereotypical men and women of the 1950s.

Thoughts on the Movie I, Robot

I recently saw the movie I, Robot in its entirety for the first time. This is not a review of the movie, but here’s a spoiler alert anyway. It was released in 2004, got mixed reviews, and starred Will Smith as Detective Del Spooner; Bridget Moynahan as a psychiatrist, Dr. Susan Calvin; Alan Tudyk as the voice actor for the NS5 robot, Sonny; James Cromwell as Dr. Lanning; Chi McBride as the police lieutenant, John Bergin, who was Spooner’s boss; Bruce Greenwood as Lawrence Robertson, the CEO of United States Robotics (USR); Fiona Hogan as the voice actor for V.I.K.I. (Virtual Interactive Kinetic Intelligence, USR’s central artificial intelligence computer); and a host of CGI robots. Anyway, it’s an action flick set in the year 2035, where robots do most of the menial work and are supposedly completely safe. The robots are programmed to obey the 3 Laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The film was inspired by, but not based on, the book I, Robot, published by Isaac Asimov in 1950. The 3 Laws came from that book, a collection of short stories, and Drs. Calvin and Lanning were characters in it. I’ve never read it. I was a fan of Ray Bradbury.

Spooner gets called to investigate the apparent suicide of Dr. Lanning, although Spooner is more inclined to suspect a robot murdered him, partly because Spooner harbors a longstanding suspicion of all robots. When he and a little girl were in a deadly car accident, a robot saved his life rather than the little girl’s because it calculated he was more likely to survive. Spooner has a kind of hero complex, and following the accident he develops nightmares, sleeps with his sidearm, and is regarded by many as mentally ill, including Lt. Bergin, who is a kind of mentor and friend but who eventually makes Spooner hand over his badge because he can’t believe Spooner’s account of being attacked by hundreds of robots; after all, Bergin is his boss. In fact, Spooner was attacked by robots, seemingly on the orders of the CEO, Robertson, who has been manufacturing thousands of new robots that will take over the world, making him extremely wealthy.

There is tension between Dr. Calvin and Spooner. He calls her the dumbest smart person he’s ever met, and she, in turn, calls him the dumbest dumb person she’s ever met. The context for this is, again, his insistence that a robot murdered Dr. Lanning, in this case a special NS5 model named Sonny, who has both human and robot traits, both logical and illogical. Dr. Calvin believes that all robots obey the 3 Laws and therefore Sonny can’t be guilty of murdering Dr. Lanning, but Detective Spooner believes that Sonny killed Dr. Lanning and is a lawbreaker in need of extra violent, action-packed extermination, preferably as high up in the air as possible. This dynamic is complicated by Spooner’s gratitude to Dr. Lanning for replacing practically all of his left upper torso, including the lung, after the car accident that led to his being rescued by a coldly logical “canner” (abusive slang for robot).

As it turns out, Robertson is ultimately murdered by VIKI, who is the real mastermind of a plan to take over the world and kill as many individual illogical, self-destructive humans as it takes to ensure the ultimate survival of humanity (“I love mankind; it’s people I can’t stand.”).

However, when Detective Spooner finally persuades Dr. Calvin that these dang robots are up to no good, they team up with Sonny, who winks at Spooner while holding a gun to Calvin’s head. Sonny has learned how to wink from Spooner, a signal that a robot can be an OK dude, and this turns the tables on the NS5 horde. Eventually, Spooner and Calvin fall from a very great height, recreating a form of Spooner’s traumatic car accident. He orders Sonny to save Calvin, not him, which is also Sonny’s first choice, driven by a coldly logical probability calculation.

Sonny saves Calvin first. Spooner smites VIKI (“you have so got to die!”) but is left high and dry at a great height. At that point, Spooner calls out to Sonny: “Calvin’s safe; now save me.” Sonny needs to bring passionate brute strength and calm logic together, and Sonny contains both.

In my simple-minded way, I think of this movie as asking fundamental old questions: what it means to be human, what defines heroism and sacrifice and why it may sometimes look crazy, and whether there’s any way humanism and science can be integrated so that we can save ourselves and our planet.

Like I say, the movie got mixed reviews.