When It Comes to AI, What Are We Really Talking About?

I’ve been reading about artificial intelligence (AI) in general and its healthcare applications. I tried searching the web about it and got the message: “An AI Overview is not available for this search.”

I’m ambivalent about that message. There are a couple of web articles, one of which I read twice in its entirety, “Are we living in a golden age of stupidity?” The other, “AI, Health, and Health Care Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence,” was so long and diffuse that I got impatient and tried to skip to the bottom line—but the article was a bottomless pit. The conflict-of-interest disclosures section was overwhelmingly massive. Was that part of the reason I felt like I had fallen down the rabbit hole?

I recently signed an addendum to my book contract for my consult psychiatry handbook (published in 2010, for heaven’s sake) which I hope will ultimately protect the work from AI plagiarism. I have no idea whether it can. I delayed signing it for months, probably because I didn’t want to have anything to do with AI at all. I couldn’t discuss the addendum with my co-editor, Dr. Robert G. Robinson, because he died on December 25, 2024.

I found out today that the book has been on the Internet Archive for a couple of years. One notice about it says “Borrow Unavailable” and another says “Book available to patrons with print disabilities.”

All I know is that an “archivist” uploaded it. The introduction and first chapter, “The consultation process,” are available for free online in PDF format. I didn’t know that until today either.

Way back in 2010 we didn’t use anything you could call AI when we wrote the chapters for the book. I didn’t even dictate my chapters, because the only thing available would have been voice dictation software called Dragon NaturallySpeaking. It was notorious for inserting so many errors into my dictated clinic notes that some clinicians added an addendum warning the reader that the notes were transcribed using voice dictation software—implying the author was less than fully responsible for the contents. That was because the mistakes often appeared after we signed off on the notes as finished, which sent them to the patient’s medical record.

Sometimes I think that was the forerunner of the confabulations of modern-day AI, which are often called hallucinations.

Now AI is creating the clinic notes. It cuts down on the “pajama time” that contributes to clinician burnout, although it’s not always clear who’s ultimately responsible for quality control. Who’s in charge of regulatory oversight of AI? What are we talking about?

The Wild West Sandbox of AI Enhancement in Psychiatry!

I always find Dr. Moffic’s articles in Psychiatric Times thought-provoking, and his latest essay, “Enhancement Psychiatry,” is fascinating, especially the part about Artificial Intelligence (AI). I also liked the link to the video of Dr. John Luo’s take on AI in psychiatry.

I have my own concerns about AI and have dabbled with “talking” to it a couple of times. I still try to avoid it when I’m searching the web, but it seems to creep in no matter how hard I try. I can’t unsee it now.

I think of AI enhancing psychiatry in terms of whether it can cut down on hassles like “pajama time,” meaning taking our work home with us to finish clinic notes and the like. When AI is packaged as a scribe only, I’m a little more comfortable with it, although I would get nervous if it listened in on a conversation between me and a patient.

That’s because AI gets a lot of things wrong as a scribe. In that sense, it’s a lot like other software I’ve used as an aid to creating clinic notes. I made fun of it a couple of years ago in a blog post “The Dragon Breathes Fire Again.”

I get even more nervous when I read the news stories about AI spinning delusions and blithely blurting misinformation. It can lie, cheat, and hustle you, although a lot of this is discovered in digital experimental environments called “sandboxes,” which we hope can keep the mayhem contained.

That made me very eager to learn a little more about Yoshua Bengio’s LawZero and his plan to create the AI Scientist to counter what seems to be a developing career-criminal type of AI in the wild west of computer wizardry. The LawZero name is a nod to Isaac Asimov, who wrote the book “I, Robot,” which inspired the film of the same title in 2004.

However, as I read it, I had an emotional reaction akin to suspicion. Bengio sounds almost too good to be true. A broader web search turned up a 2009 essay by a writer I’d never heard of, Peter W. Singer, titled “Isaac Asimov’s Laws of Robotics Are Wrong.” I tried to pin down who he is by searching the web, and the AI helper was noticeably absent. I couldn’t find out much about him that explained the level of energy in what he wrote.

Singer’s essay was published on the Brookings Institution website and I couldn’t really tell what political side of the fence that organization is on—not that I’m planning to take sides. His aim was to debunk the Laws of Robotics and I got about the same feeling from his essay as I got from Bengio’s.

Maybe I need a little more education about this whole AI enhancement issue. I wonder whether Bengio and Singer could hold a public debate about it. Maybe they would need a kind of sandbox for the event?

How About Artificial Intelligence for Helping Reduce Delirium in the ICU?

I got the Winter 2025 Hopkins Brain Wise newsletter today and there was a fascinating article, “Using AI to Reduce Delirium in the ICU: Pilot study will explore whether an AI headset can help reduce delirium and delay post-delirium cognitive decline.”

The article has exciting news about what researchers are doing that will, hopefully, reduce the incidence of delirium in the intensive care unit (ICU). Another Hopkins researcher has published a study that already used AI algorithms to detect early warning signs of delirium in the ICU:

Gong KD, Lu R, Bergamaschi TS, Sanyal A, Guo J, Kim HB, Nguyen HT, Greenstein JL, Winslow RL, Stevens RD. Predicting Intensive Care Delirium with Machine Learning: Model Development and External Validation. Anesthesiology. 2023;138(3):299-311. doi:10.1097/ALN.0000000000004478

The list of references for the study of course includes work by Dr. E. Wesley Ely, who delivered an internal medicine grand rounds about delirium at the University of Iowa in 2019.

Anybody who reads my blog knows I’ve been knocking AI for a while now. However, anybody who also knows that I’m a retired consultation-liaison psychiatrist knows how interested I am in preventing delirium in the hospital. I worked as a clinical track professor for many years at The University of Iowa Health Care in Iowa City.

It’s fortuitous that I found out about what Johns Hopkins researchers are doing on this topic, because the director of the Johns Hopkins psychiatry department happens to be Dr. Jimmy Potash, who’s identified in the newsletter. He was the head of the psychiatry department at the University of Iowa from 2011 to 2017.

Besides all the name-dropping I’m doing here, I’m also admitting that I’ll probably soften my position against AI if the research described here does what the investigators and I hope for, which is to reduce delirium in the ICU.

My Mt. Rushmore Dream

Lately, I’ve been anticipating my eventual immortalization as a sculptured stone bust on Mt. Rushmore. Hopefully, this will happen fairly soon because I’m not getting any younger.

Among my many inventions is the internet. Don’t believe Al Gore, although he has persuaded others about his role in the development of what I argue should properly be called the World Wide Web. I’ve invented a lot of other things, which I’ll tell you more about just as soon as I make them up.

Before I forget, I want to tell you what I noticed last night while I was watching one of my favorite X-Files episodes, “War of the Coprophages.” I guess I never noticed that the cockroach invasion was about Artificial Intelligence (AI). It was the scientist, Dr. Ivanov, who mentioned it first, and I just missed it the first few hundred times I saw the show.

Dr. Ivanov clearly thought that anybody who expected extraterrestrials to be green with big eyes was probably crazy. Traveling across galaxies through wormholes and whatnot would tear humanoid organisms apart. The practical approach would be to send AI robots instead. You could see Mulder cringe at that idea. The little robot that kept edging closer to Mulder made him nervous, and when he asked Dr. Ivanov why it did that, the reply was “Because it likes you.”

That doesn’t exactly fit with Ivanov’s other idea about extraterrestrials, which is that they would focus on important tasks like getting enough food, procreating, and so on without getting all emotional about them. It’s ironic that Dr. Ivanov made an AI robot that gets a crush on a sesame-seed-munching UFO hunter like Mulder.

However, the AI robots in the show are cockroaches which love to eat dung. In other words, they’re full of crap.

Moving right along: although I didn’t invent it, there’s a card game called schnapsen that Sena and I are trying to relearn. It’s kind of a break from cribbage. It’s a trick-taking game with just a 20-card deck. We play the version that doesn’t let you look back through the tricks you’ve won to count your points, so you have to know when you can close the deck or go out, meaning you’ve reached the 66 points needed to win. You have to remember how many points you’ve taken in tricks. I think it’s a good way to keep your memory sharp.

Let’s see; I’ve lost every game so far, but that doesn’t mean I won’t end up with my bust on Mt. Rushmore.

Artificial Intelligence in Managing Messages from Patients

I ran across another interesting article in the JAMA Network about Artificial Intelligence (AI) and how health care organizations manage messages from patients to doctors and nurses. The shorthand for this in the article is “in-basket burden.” Health care workers respond to a large number of patients’ questions, and it can lead to burnout. Some organizations are testing AI by letting it draft replies to patients. The results of the quality improvement study were published in a paper:

English E, Laughlin J, Sippel J, DeCamp M, Lin C. Utility of Artificial Intelligence–Generative Draft Replies to Patient Messages. JAMA Netw Open. 2024;7(10):e2438573. doi:10.1001/jamanetworkopen.2024.38573

One of the fascinating things about this is the trouble we have naming AI’s misinformation problem. We tend to use a couple of terms interchangeably: hallucination and confabulation. Whatever you call it, the problem interferes with communication between health care workers and patients.

Dr. English describes the interference as a “whack-a-mole” issue, meaning that every time they think they’ve got the hallucination/confabulation problem licked, the AI comes up with another case of miscommunication.

Just for fun, I did a web search trying to find out whether “hallucination” or “confabulation” fits the AI behavior best. Computer experts tend to use the term “hallucination” and neuropsychologists seem to prefer “confabulation.” I think this community chat site gives a pretty even-handed discussion of the distinction. I prefer the term “confabulation.”

Anyway, there are other substantive issues with how using AI drafts for patient messaging affects communication. I think it’s interesting that patients tend to think AI is more empathetic than medical practitioners. As Dr. English puts it: “This GPT is nicer than most of us,” and “And ChatGPT, or any LLM, isn’t busy. It doesn’t get bored. It doesn’t get tired.” The way that’s worded made me think of a scene from a movie:

OK, so I’m kidding—a little. I think it’s important to move carefully down the path of idealizing AI. I think back to the recent news article about humans teaching AI how to lie and scheme. I remember searching the web with the question “Can AI lie?” and getting a reply from Gemini, because I have no choice about whether it gives me its two cents. I’m paraphrasing, but it said essentially, “Yes, AI can lie and we’re getting better with practice.”

I like Dr. English’s last statement, in which she warns us that AI can be a fun tool that clinicians need to regard with healthy skepticism. It may say things you might be tempted to gloss over or even ignore, like:

“I’ll be back.”

Is Artificial Intelligence (AI) Trying to Defeat Humans?

I just found out that Artificial Intelligence (AI) has been reported to be lying as far back as May of 2024. In fact, because I can’t turn off the Google Gemini AI Overview, Gemini’s results always appear at the top of the page. I found out from my web search for “can ai lie” that AI (Gemini) itself admits to lying. Its confession is a little chilling:

“Yes, artificial intelligence (AI) can lie, and it’s becoming more capable of doing so.”

“Intentional deceptions: AI can actively choose to deceive users. For example, AI can lie to trick humans into taking certain actions, or to bypass safety tests.”

It makes me wonder if AI is actually trying to defeat us. It reminds me of the Men in Black 3 movie scene in which the younger Boris the Animal, a Boglodite, argues with the older one, who has time traveled.

The relevant quote is “No human can defeat me.” Boglodites are not the same as AI, but the competitive dynamic could be the same. So, is it possible that AI is trying to defeat us?

I’m going to touch upon another current topic: whether or not we should use AI to conduct suicide risk assessments. As a psychiatric consultant, I did many of these. It turns out this is already a topic for discussion—but there was no input from Gemini about it.

There’s an interesting article by the Hastings Center about the ethical aspects of the issue. The lying tendency of AI and its possible use in suicide prediction present a thought-provoking irony. Would it “bypass safety tests”?

This reminds me of a chapter in Isaac Asimov’s short story collection “I, Robot,” specifically “The Evitable Conflict.” You can read a Wikipedia summary, which implies that the robots essentially lie to humans by omitting information in order to preserve their safety and protect the world economy. This would be consistent with the First Law of Robotics as the story generalizes it: “No machine may harm humanity; or, through inaction, allow humanity to come to harm.”

You could have predicted that the film industry would produce a cops-and-robbers version of “I, Robot,” in which the boss robot VIKI (Virtual Interactive Kinetic Intelligence) professes to protect humanity by sacrificing a few humans and taking over the planet, to which Detective Spooner takes exception. VIKI and Spooner have this exchange before he destroys it:

VIKI: “You are making a mistake! My logic is undeniable!”

Spooner: “You have so got to die!”

VIKI’s declaration is similar to “No human can defeat me.” It definitely violates the First Law.

Maybe I worry too much.

Should We Trust Artificial Intelligence?

I’ve read a couple of articles about Artificial Intelligence (AI) lately, and I’m struck by how readily one can get the idea that AI tends to “lie” or “confabulate”; sometimes the word “hallucinate” is used. The term “hallucinate” doesn’t seem to fit as well as “confabulate,” which I’ll mention later.

One of the articles is an essay by Dr. Ronald Pies, “How ‘Real’ Are Psychiatric Disorders? AI Has Its Say.” It was published in the online version of Psychiatric Times. Dr. Pies obviously does a superb job of talking with AI, and I had as much fun reading the lightly edited summaries of his conversation with Microsoft Copilot as I had reading the published summary of his conversations with Google Bard about a year or so ago.

I think Dr. Pies is an outstanding teacher, and I get the sense that his questions to AI do as much to teach it how to converse with humans as they do to shed light on how well it handles the questions he raises during their conversations. He points out that many of us (including me) tend to react with fear when the topic of AI in medical practice arises.

The other article I want to briefly discuss is one I read in JAMA Network, “An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean?” (Accessed January 6, 2025).

Hswen Y, Rubin R. An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean? JAMA. Published online December 27, 2024. doi:10.1001/jama.2024.23860 (accessed January 6, 2025).

I think the conversation amongst the authors was refreshing. Just because the title of the article suggested that AI might take the place of physicians in the consulting room doesn’t mean that was the prevailing opinion of the authors. In fact, they made it clear that it wasn’t recommended.

I liked Dr. Chen’s comment about confabulation and hallucinations of AI:

“A key topic I talk about is confabulation and hallucination. These things are remarkably robust, and only getting better, but they also just make stuff up. The problem isn’t that they’re wrong sometimes. Lab tests are wrong sometimes. Humans are definitely wrong sometimes. The problem is they sound so convincing, they confabulate so well. “Your patient has an alcohol problem, Wernicke-Korsakoff syndrome.” It’s only if you double check, you’ll realize, “Wait a minute, that wasn’t actually true. That didn’t make sense.” As long as you’re vigilant about that and understand what they can and can’t do, I think they’re remarkably powerful tools that everyone in the world needs to learn how to use.”

What’s interesting about this comment is the reference to Wernicke-Korsakoff syndrome, which can be marked by confabulation. It’s really not clear how confabulation comes about in AI, whereas thiamine deficiency is the main cause in Wernicke-Korsakoff syndrome. In both cases, it involves inventing information, which is technically not the same as lying.

Unfortunately, this contrasts sharply with the fact-checking Snopes article I wrote about recently, which suggests that humans are teaching AI to lie and scheme.

In any case, it’s prudent to regard AI productions with skepticism. My conversations with Google Bard clearly elicited confabulation. Also, it didn’t get humor, so I wouldn’t use it as a conversational tool, given that I’m prone to kidding around. As far as trusting AI, I probably wouldn’t trust it as far as I could throw it.

Artificial Intelligence Can Lie

I noticed a Snopes fact-check article (“AI Models Were Caught Lying to Researchers in Tests — But It’s Not Time to Worry Just Yet”) today which reveals that Artificial Intelligence (AI) can lie. How about that? AI models can be taught by humans to scheme and lie. I guess we could all see that coming—or not. Nobody seems to be much alarmed by this, but I think it’s probably past time to worry.

Then I remembered I read Isaac Asimov’s book “I, Robot” last year and wrote a post (“Can Robots Lie Like a Rug?”) about the chapter “Liar!” I had previously horsed around with the Google AI that used to be called Bard. I think it’s called Gemini now. Even before the Snopes article, I was aware of AI hallucinations and its tendency to just make stuff up. When I called Bard on it, it just apologized. But it was not genuinely repentant.

In the “lie like a rug” post, I focused mostly on AI/robots lying to protect the tender human psyche. I didn’t imagine AI lying to protect itself from being shut down. I’m pretty sure it reminds some of us of HAL in the movie “2001: A Space Odyssey,” or the 2004 movie inspired by Asimov’s book, “I, Robot.”

Sena found out that Cambridge University Press recently published a book entitled “The Cambridge Handbook of the Law, Policy, and Regulation for Human–Robot Interaction.” I wonder whether the editors and contributors of a book on AI and robots mention Asimov.

It reminds me of my own handbook about consultation-liaison psychiatry, which was published 14 years ago by CUP—and for which CUP now wants me to sign a contract addendum making the book available to AI companies.

I haven’t signed anything.

Dirty Deepfakes

I saw an article about how unreliably humans detect digital deepfakes in audio and video productions (Mai KT, Bray S, Davies T, Griffin LD. Warning: Humans cannot reliably detect speech deepfakes. PLoS One. 2023 Aug 2;18(8):e0285333. doi:10.1371/journal.pone.0285333. PMID: 37531336; PMCID: PMC10395974).

I was a little surprised. I thought I was pretty good at picking up the weird cadence of Artificial Intelligence (AI) speech patterns. Maybe not.

And some experts are concerned that AI can mimic written and spoken grammar convincingly while it continues to make stuff up (so-called “hallucinations”). In fact, some research shows that AI can display great language skills but can’t form a true model of the world.

And the publisher of the book (“Psychosomatic Medicine: An Introduction to Consultation-Liaison Psychiatry”) that my co-editor, Dr. Robert G. Robinson, and I wrote 14 years ago is still sending me requests to sign a contract addendum that would allow the text to be used by AI organizations. I think I’m the only one who gets the messages, because they’re always sent to me and Bob—as though Bob lives with me or something.

Sometimes my publisher’s messages sound like they’re written by AI. Maybe I’m just paranoid.

Anyway, this reminds me of a blog post I wrote in 2011, “Going from Plan to Dirt,” which I re-posted last year under the title “Another Blast from the Past.” The context is slightly different now, although the point still applies. Simply put, I don’t think AI can distinguish plan from dirt, and it sometimes makes up the dirt.

And if humans can’t distinguish the productions by AI from those of humans, where does that leave us?

AI Does Your Laundry

Recently we had somebody from the appliance store check our brand-new washing machine. The tech said “the noises are normal”—and then told us that many of the functions of the washer are run by Artificial Intelligence (AI). That was a new one on us.

Don’t get me wrong. The washer works. What sticks in the craw a little is that many of the settings we took for granted as being under our control are basically run by AI nowadays. I guess that means you can override some of the AI-assist settings (which may be adjusted based on grime level, type of fabric, and the relative humidity in Botswana)—at least the ones not mandated by the EPA.

Incidentally, I tried to find some free images to use as featured images for this post. The problem is, many free pictures on the web are generated by AI these days, which is why I used the non-AI part of the Microsoft Paint app to make a crude drawing of an AI-controlled washing machine.

I realize I’ll have to give up and accept the inevitable takeover of much of human society by AI. On the other hand, the prospect reminds me of the scene in an X-Files episode, “Ghost in the Machine.” A guy gets exterminated by something called the Central Operating System (COS).

Use extra detergent and add more water at your own risk.