When It Comes to AI, What Are We Really Talking About?

I’ve been reading about artificial intelligence (AI) in general and its healthcare applications in particular. When I tried searching the web about it, I got the message: “An AI Overview is not available for this search.”

I’m ambivalent about that message. I did find a couple of web articles. One of them, “Are we living in a golden age of stupidity?”, I read twice in its entirety. The other, “AI, Health, and Health Care Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence,” was so long and diffuse that I got impatient and tried to skip to the bottom line, but the article was a bottomless pit. The conflict-of-interest disclosures section alone was overwhelming. Was that part of the reason I felt like I had fallen down the rabbit hole?

I recently signed an addendum to my book contract for my consult psychiatry handbook (published in 2010, for heaven’s sake), which I hope will ultimately protect the work from AI plagiarism. I have no idea whether it can. I delayed signing it for months, probably because I didn’t want to have anything to do with AI at all. I couldn’t discuss the addendum with my co-editor, Dr. Robert G. Robinson, because he died on December 25, 2024.

I found out today that the book is old enough to have turned up on the Internet Archive a couple of years ago. One notice about it says “Borrow Unavailable” and another says “Book available to patrons with print disabilities.”

All I know is that an “archivist” uploaded it. The introduction and the first chapter, “The Consultation Process,” are available for free online in PDF format. I didn’t know that until today either.

Way back in 2010, we didn’t use anything you could call AI when we wrote the chapters for the book. I didn’t even dictate my chapters, because the only tool available would have been voice dictation software called Dragon NaturallySpeaking. It was notorious for inserting so many errors into my dictated clinic notes that some clinicians added an addendum warning the reader that the notes were transcribed using voice dictation software, implying the author was less than fully responsible for the contents. That was because the mistakes often appeared after we signed off on the notes as finished, which sent them to the patient’s medical record.

Sometimes I think that was the forerunner of the confabulations of modern-day AI, which are often called hallucinations.

Now AI is creating the clinic notes. It cuts down on the “pajama time” that contributes to clinician burnout, although it’s not always clear who is ultimately responsible for quality control. Who’s in charge of regulatory oversight of AI? What are we really talking about?

The Wild West Sandbox of AI Enhancement in Psychiatry!

I always find Dr. Moffic’s articles in Psychiatric Times thought-provoking, and his latest essay, “Enhancement Psychiatry,” is fascinating, especially the part about Artificial Intelligence (AI). I also liked the link to the video of Dr. John Luo’s take on AI in psychiatry.

I have my own concerns about AI and have dabbled with “talking” to it a couple of times. I still try to avoid it when I’m searching the web, but it seems to creep in no matter how hard I try. I can’t unsee it now.

I think of AI enhancing psychiatry in terms of whether it can cut down on hassles like “pajama time,” that is, taking our work home with us to finish clinic notes and the like. When AI is packaged as a scribe only, I’m a little more comfortable with it, although I would get nervous if it listened to a conversation between me and a patient.

That’s because AI gets a lot of things wrong as a scribe. In that sense, it’s a lot like other software I’ve used as an aid to creating clinic notes. I made fun of it a couple of years ago in a blog post “The Dragon Breathes Fire Again.”

I get even more nervous when I read the news stories about AI spinning delusions and blithely blurting out misinformation. It can lie, cheat, and hustle you, although a lot of this is discovered in digital experimental environments called “sandboxes,” which we hope can keep the mayhem contained.

That made me very eager to learn a little more about Yoshua Bengio’s LawZero and his plan to create an AI Scientist to counter what seems to be a developing career-criminal type of AI in the wild west of computer wizardry. The name LawZero is a nod to Isaac Asimov’s Zeroth Law of Robotics; Asimov wrote the book “I, Robot,” which inspired the film of the same title in 2004.

However, as I read it, I had an emotional reaction akin to suspicion: Bengio sounds almost too good to be true. A broader web search turned up a 2009 essay by a guy I’d never heard of named Peter W. Singer, titled “Isaac Asimov’s Laws of Robotics Are Wrong.” I tried to pin down who he is by searching the web, and the AI helper was noticeably absent. I couldn’t find out much about him that explained the level of energy in what he wrote.

Singer’s essay was published on the Brookings Institution website and I couldn’t really tell what political side of the fence that organization is on—not that I’m planning to take sides. His aim was to debunk the Laws of Robotics and I got about the same feeling from his essay as I got from Bengio’s.

Maybe I need a little more education about this whole AI enhancement issue. I wonder whether Bengio and Singer could hold a public debate about it? Maybe they would need a kind of sandbox for the event?

My Mt. Rushmore Dream

Lately, I’ve been anticipating my eventual immortalization as a sculptured stone bust on Mt. Rushmore. Hopefully, this will be fairly soon because I’m not getting any younger.

Among my many inventions is the internet. Don’t believe Al Gore, although he has persuaded others about his role in the development of what I argue should properly be called the world wide web. I’ve invented a lot of other things which I’ll tell you more about just as soon as I make them up.

Before I forget, I want to tell you what I noticed last night while I was watching one of my favorite X-Files episodes, “War of the Coprophages.” I guess I never noticed that the cockroach invasion was about Artificial Intelligence (AI). It was the scientist, Dr. Ivanov, who mentioned it first, and I just missed it the first few hundred times I saw the show.

Dr. Ivanov clearly thought that anybody who believed extraterrestrials would be green and have big eyes was probably crazy. Traveling across galaxies through wormholes and whatnot would tear humanoid organisms apart; the practical approach would be to send AI robots instead. You could see Mulder cringe at that idea. The little robot that kept edging closer to Mulder made him nervous, and when he asked Dr. Ivanov why it did that, the reply was “Because it likes you.”

That doesn’t exactly fit with Ivanov’s other idea about extraterrestrials, which is that they would focus on important tasks like getting enough food, procreating, and so on without getting all emotional about them. It’s ironic that Dr. Ivanov made an AI robot that gets a crush on a sesame-seed-munching UFO hunter like Mulder.

However, the AI robots in the show are cockroaches which love to eat dung. In other words, they’re full of crap.

Moving right along: although I didn’t invent it, there’s a card game called Schnapsen that Sena and I are trying to relearn. It’s kind of a break from cribbage. It’s a trick-taking game played with just a 20-card deck. We play the version that doesn’t allow you to look back at the tricks you’ve already won to count your points, so you have to know on your own when you can close the stock or go out, meaning you’ve reached the 66 points needed to win. You have to remember how many points you’ve won in tricks. I think it’s a good way to keep your memory sharp.

Let’s see; I’ve lost every game so far, but that doesn’t mean I won’t end up with my bust on Mt. Rushmore.

Artificial Intelligence in Managing Messages from Patients

I ran across another interesting article on the JAMA Network about Artificial Intelligence (AI), this time with respect to health care organizations managing messages from patients to doctors and nurses. The shorthand for this in the article is “in-basket burden.” Health care workers respond to a large number of patients’ questions, and it can lead to burnout. Some organizations are testing AI by letting it draft replies to patients. The results of the quality improvement study were published in a paper:

English E, Laughlin J, Sippel J, DeCamp M, Lin C. Utility of Artificial Intelligence–Generative Draft Replies to Patient Messages. JAMA Netw Open. 2024;7(10):e2438573. doi:10.1001/jamanetworkopen.2024.38573
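
To picture the workflow being tested, here is a minimal sketch of the “AI drafts, clinician decides” idea, written in Python with made-up placeholder functions; it is not the system the study actually used, just an illustration of the shape of the process.

```python
# Hypothetical sketch of AI-drafted replies to in-basket messages.
# The draft generator below is a stand-in, not the studied system.

def generate_draft_reply(patient_message: str) -> str:
    """Stand-in for a large language model call that proposes a reply."""
    return ("Thank you for your message. Mild soreness at the injection site "
            "is common for a day or two. If it worsens or you develop a fever, "
            "please contact the clinic or seek care.")

def clinician_review(draft: str, edit: str = "") -> str:
    """The clinician can send the draft as-is, edit it, or replace it.
    Nothing goes to the patient without this step."""
    return edit or draft

incoming = "My arm is sore where I got my flu shot yesterday. Is that normal?"
draft = generate_draft_reply(incoming)
reply = clinician_review(draft)  # in practice, the clinician edits here
print(reply)
```

The design choice that matters to me is the last step: the draft stays a draft until a human signs off on it.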

One of the fascinating things about this is the trouble we have naming AI’s misinformation problem. We tend to use two terms interchangeably: hallucination and confabulation. Whatever you call it, the problem interferes with communication between health care workers and patients.

Dr. English describes the interference as a “whack-a-mole” issue, meaning that every time they think they’ve got the hallucination/confabulation problem licked, the AI comes up with another case of miscommunication.

Just for fun, I did a web search trying to find out whether “hallucination” or “confabulation” fit the AI behavior best. Computer experts tend to use the term “hallucination” and neuropsychologists seem to prefer “confabulation.” I think this community chat site gives a pretty even-handed discussion of the distinction. I prefer the term “confabulation.”

Anyway, there are other substantive issues with how using AI drafts for patient messaging affects communication. I think it’s interesting that patients tend to think AI is more empathetic than medical practitioners. As Dr. English puts it: “This GPT is nicer than most of us,” and “And ChatGPT, or any LLM, isn’t busy. It doesn’t get bored. It doesn’t get tired.” The way that’s worded made me think of a scene from a movie:

OK, so I’m kidding—a little. I think it’s important to move carefully down the path of idealizing AI. I think back to the recent news article about humans teaching AI how to lie and scheme. I remember searching the web with the question “Can AI lie?” and getting a reply from Gemini, because I have no choice about whether it gives me its two cents. I’m paraphrasing, but it said essentially, “Yes, AI can lie, and we’re getting better with practice.”

I like Dr. English’s last statement, in which she warns us that AI can be a fun tool that clinicians need to approach with healthy skepticism. It may say things you might be tempted to gloss over or even ignore, like:

“I’ll be back.”

Could Artificial Intelligence Help Clinicians Conduct Suicide Risk Assessments?

I found an article in JAMA Network (Medical News & Perspectives) the other day which discussed a recent study on the use of Artificial Intelligence (AI) in suicide risk assessment (Hswen Y, Abbasi J. How AI Could Help Clinicians Identify American Indian Patients at Risk for Suicide. JAMA. Published online January 10, 2025. doi:10.1001/jama.2024.24063).

I’ve published several posts expressing my objections to AI in medicine. On the other hand, I did a lot of suicide risk assessments during my career as a psychiatric consultant in the general hospital. I appreciated the comments made by one of the co-authors, Emily E. Haroz, PhD (see link above).

Dr. Haroz preferred the term “risk assessment” to “prediction” when referring to the study (Haroz EE, Rebman P, Goklish N, et al. Performance of Machine Learning Suicide Risk Models in an American Indian Population. JAMA Netw Open. 2024;7(10):e2439269. doi:10.1001/jamanetworkopen.2024.39269).

The model drew on data already available to clinicians in patient charts. The charts can be very large, and it makes sense to have computers search them for the variables that can be linked to suicide risk. What impressed me most was the admission that AI alone can’t solve the problem of suicide risk assessment. Clinicians, administrators, and community case managers all have to be involved.
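
For readers wondering what a chart-based machine learning risk model even looks like, here is a minimal sketch in Python using made-up features and synthetic data; the actual variables, model, and validation in the Haroz study are far more involved than this.

```python
# Minimal, hypothetical sketch of a chart-based suicide risk model.
# Features and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Pretend each row is a patient with a few chart-derived variables.
X = np.column_stack([
    rng.integers(0, 2, n),   # prior suicide attempt documented (0/1)
    rng.poisson(1.0, n),     # emergency department visits in the past year
    rng.integers(0, 2, n),   # depression diagnosis on the problem list (0/1)
])
# Synthetic outcome loosely tied to the features, for demonstration only.
logits = 0.9 * X[:, 0] + 0.4 * X[:, 1] + 0.6 * X[:, 2] - 2.5
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The output is a risk score that flags patients for clinician follow-up;
# it is a risk assessment aid, not a prediction that replaces judgment.
scores = model.predict_proba(X_test)[:, 1]
print("AUC on held-out data:", round(roc_auc_score(y_test, scores), 2))
```

Even in a toy example like this, the score only tells a clinician whom to look at more closely; it can’t see that the patient in front of you is crying.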

The answer to the question “How do you know when someone’s at high risk?” was that the patient was crying. Dr. Haroz points out that AI probably can’t detect that.

That reminded me of Dr. Igor Galynker, who has published a lot about how to assess for high risk of suicide. His work on the suicide crisis syndrome is well known and you can check out his website at the Icahn School of Medicine at Mount Sinai. I still remember my first “encounter” with him, which you can read about here.

His checklist for the suicide crisis syndrome is available on his website, and he’s published a book about it as well, “The Suicidal Crisis: Clinical Guide to the Assessment of Imminent Suicide Risk” (2nd edition). There is also a free-access article about it on the World Psychiatry journal website.

Although I have reservations about the involvement of AI in medicine, I have to admit that computers can do some things better than humans. There may be a role for AI in suicide risk assessment, and I wonder if Dr. Galynker’s work could be part of the process used to teach AI about it.

Is Artificial Intelligence (AI) Trying to Defeat Humans?

I just found out that Artificial Intelligence (AI) has been reported to be lying as far back as May of 2024. Because I can’t turn off the Google Gemini AI Overview, Gemini’s results always appear at the top of the page. From my web search term “can ai lie,” I found out that AI (Gemini) itself admits to lying. Its confession is a little chilling:

“Yes, artificial intelligence (AI) can lie, and it’s becoming more capable of doing so.”

“Intentional deceptions: AI can actively choose to deceive users. For example, AI can lie to trick humans into taking certain actions, or to bypass safety tests.”

It makes me wonder if AI is actually trying to defeat us. It reminds me of the Men in Black 3 movie scene in which the younger Boris the Animal boglodite engages in an argument with the older one who has time traveled.

The relevant quote is “No human can defeat me.” Boglodites are not the same as AI, but the competitive dynamic could be the same. So, is it possible that AI is trying to defeat us?

I’m going to touch on another current topic: whether or not we should use AI to conduct suicide risk assessments. That also turns out to be up for discussion, but there was no input from Gemini about it. As a psychiatric consultant, I did many of these assessments.

There’s an interesting article by the Hastings Center about the ethical aspects of the issue. The combination of AI’s lying tendency and its possible use in suicide prediction presents a thought-provoking irony. Would it “bypass safety tests”?

This reminds me of a story in Isaac Asimov’s short story collection “I, Robot,” specifically “The Evitable Conflict.” You can read a Wikipedia summary which implies that the robots essentially lie to humans by omitting information in order to preserve their safety and protect the world economy. This would be consistent with the First Law of Robotics as the story restates it for the Machines: “No machine may harm humanity; or, through inaction, allow humanity to come to harm.”

You could have predicted that the film industry would produce a cops-and-robbers version of “I, Robot,” in which the boss robot VIKI (Virtual Interactive Kinetic Intelligence) professes to protect humanity by sacrificing a few humans and taking over the planet, to which Detective Spooner takes exception. VIKI and Spooner have this exchange before he destroys it.

VIKI: “You are making a mistake! My logic is undeniable!”

Spooner: “You have so got to die!”

VIKI’s declaration is similar to “No human can defeat me.” It definitely violates the First Law.

Maybe I worry too much.

Artificial Intelligence Can Lie

I noticed a Snopes fact check article (“AI Models Were Caught Lying to Researchers in Tests — But It’s Not Time to Worry Just Yet”) today which reveals that Artificial Intelligence (AI) can lie. How about that? AI models can be taught by humans to scheme and lie. I guess we could all see that coming, or not. Nobody seems to be much alarmed by this, but I think it’s probably past time to worry.

Then I remembered that I read Isaac Asimov’s book “I, Robot” last year and wrote a post (“Can Robots Lie Like a Rug?”) about the chapter “Liar!” I had previously horsed around with the Google AI that used to be called Bard; I think it’s called Gemini now. Until the Snopes article, all I was aware of was AI hallucinations and the tendency for it to just make stuff up. When I called Bard on it, it just apologized. But it was not genuinely repentant.

In the “lie like a rug” post, I focused mostly on AI/robots lying to protect the tender human psyche. I didn’t imagine AI lying to protect itself from being shut down. I’m pretty sure it reminds some of us of HAL in the movie “2001: A Space Odyssey,” or the 2004 movie inspired by Asimov’s book, “I, Robot.”

Sena found out that Cambridge University Press recently published a book entitled “The Cambridge Handbook of the Law, Policy, and Regulation for Human–Robot Interaction.” I wonder whether the editors and contributors of a book on AI and robots mention Asimov.

It reminds me of my own handbook on consultation-liaison psychiatry, which was published 14 years ago by CUP, and for which CUP now wants me to sign a contract addendum making the book available to AI companies.

I haven’t signed anything.

Artificial Intelligence: The University of Iowa Chat From Old Cap

This is just a quick follow-up that lets me clarify a few things about Artificial Intelligence (AI) in medicine at the University of Iowa, compared with my take on it from the recent Rounding@Iowa presentation. Also, prior to writing this post, Sena and I had a spirited conversation about how much we are annoyed by our inability to, in her words, “dislodge AI” from our internet searches.

First of all, I should say that my understanding of the word “ambient” as used by Dr. Misurac was flawed, probably because I assumed it was a specific company name. I found out that it’s often used as a general term for AI that listens in the background to the clinic interview between clinician and patient. This lets the clinician sit with the patient so they can interact with each other more naturally in real time, face to face.

Further, in this article about AI at the University of Iowa, Dr. Misurac identified the companies involved by name as Evidently and Nabla.
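
To make the “ambient” idea a little more concrete, here is a minimal sketch of the kind of pipeline such a scribe might run. Every function name is invented for illustration; the actual Evidently and Nabla products are proprietary and certainly work differently.

```python
# Hypothetical sketch of an ambient AI scribe pipeline: record the visit,
# transcribe it, draft a structured note, and require clinician sign-off.
# All functions are stand-ins, not any vendor's real interface.

def transcribe_visit(audio_path: str) -> str:
    """Placeholder for speech-to-text on the recorded visit audio."""
    return "Patient reports two weeks of low mood and poor sleep..."

def draft_note(transcript: str) -> dict:
    """Placeholder for a model call that turns the transcript into a
    structured draft note (subjective, objective, assessment, plan)."""
    return {
        "subjective": "Two weeks of low mood and poor sleep.",
        "objective": "Alert, cooperative; mood described as 'down'.",
        "assessment": "Major depressive disorder, moderate.",
        "plan": "Therapy referral; follow up in two weeks.",
    }

def clinician_signoff(draft: dict) -> dict:
    """The clinician edits and approves; nothing enters the chart
    until a human signs the note."""
    draft["assessment"] += " (reviewed and edited by clinician)"
    return draft

note = clinician_signoff(draft_note(transcribe_visit("visit_audio.wav")))
print(note["assessment"])
```

The point of the sketch is the order of operations: the clinician faces the patient during the visit and only reviews the machine’s draft afterward.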

The other thing I want to do in this post is to highlight the YouTube presentation “AI Impact on Healthcare | The University of Iowa Chat From the Old Cap.” I think this is a fascinating discussion led by leaders in patient care, research, and teaching as they relate to the influence of AI.

This also allows me to say how much I appreciated learning from Dr. Lauris Kaldjian during my time working as a psychiatric consultant in the general hospital at University of Iowa Health Care. I respect his judgment very much and I hope you’ll see why. You can read more about his thoughts in this edition of Iowa Magazine.

“There must be constant navigation and negotiation to determine if this is for the good of patients. And the good of patients will continue to depend on clinicians who can demonstrate virtues like compassion, honesty, courage, and practical wisdom, which are characteristics of persons, not computers.” ——Lauris Kaldjian, director of the Carver College of Medicine’s Program in Bioethics and Humanities

That Donut Song by Washboard Sam

I got a kick out of a song by Catfish Keith last night on the Big Mo Blues Show on KCCK radio. It was “Who Pumped the Wind in My Doughnut.” He always sings songs with lyrics I mostly don’t understand, and that was one of them, at first. I’ll give you a hint: it’s not a Christmas tune. Catfish Keith covers some old-time blues songs, and this one is for adults only.

Judging from the title of the song and some of the lyrics, you might guess it’s about doughnuts but it’s not. Don’t bother with the Artificial Intelligence (AI) description, which I did not ask for. AI just pops up in a web search whether you want it to or not:

“Who pumped the wind in my doughnut” is a playful, nonsensical phrase meaning someone has exaggerated or inflated a situation or story to make it seem much bigger than it really is; essentially, they’ve added unnecessary drama or hype to something, like adding air to a doughnut to make it appear larger.”

Once again, we see that AI makes stuff up as it goes along, creating a little story that is overly literal and far from the truth about something it was not built to handle: language that isn’t literal, a form of humor riddled with innuendo. The song is about sexual infidelity, in this case resulting in a lot of children who don’t resemble the singer because they aren’t his.

Anyway, I found a little background on the song which was originally performed by a guy called Washboard Sam (born Robert Clifford Brown). He was a blues artist in the 1930s. He performed “Who Pumped the Hole in My Doughnut” under the name Ham Gravy. I found a reference which says that Washboard Sam performed it and Robert Brown wrote it. And I found another which shows a picture of the actual record which has the name Johnny Wilson on it with the name Ham Gravy just below it. I don’t know whether Johnny Wilson was just another pseudonym. You can find the lyrics of the song identifying it as being by “Washboard Sam via Johnny Wilson.”

You can find a mini-biography about Robert Brown on, of all things, a WordPress blog called The Fried Dough Ho. It has a fair number of posts about doughnuts too. The author knows the song is not about doughnuts. There are also some pretty comical impressions in a blog post entitled “What is he talking about?” regarding the meaning of the lyrics of the song on a Blogger site called The things I think about, when I wish I were sleeping. One of the comments is fairly recent, from 2023. You can also find a Wikipedia biography.

You may never feel the same about doughnuts.

Rounding At Iowa Podcast: “The Promises of Artificial Intelligence in Medicine”

I listened to the recent Rounding@Iowa podcast “The Promises of Artificial Intelligence in Medicine.” You can listen to it below. Those who read my blog already know I’m cautious about AI and probably prejudiced against it, especially if they’ve read any of my posts on the subject.

I was a little surprised at how enthusiastic Dr. Gerry Clancy sounded about AI. I expected his guest, Dr. Jason Misurac, to sound that way. I waited for Gerry to mention the hallucinations that AI can sometimes produce. Neither he nor Dr. Misurac said anything about them.

Dr. Misurac mentioned what I think are “ambient AI” tools that clinicians can use to make clinic note writing and chart review easier. I thought he was referring to the company called Ambience.

I remember the Dragon NaturallySpeaking speech-to-text software (which was not AI technology at the time; see my post “The Dragon Breathes Fire Again”) that I tried to use years ago to write clinical notes when I was practicing consultation-liaison psychiatry. It was a disaster, and I realize I’m prejudiced against any technology prone to the kinds of mistakes that one made.

But more importantly, I’m concerned about the kinds of mistakes AI made when I experimented with Google Bard on my blog (see the posts entitled “How’s It Hanging Bard?” and “Update to Chat with Bard” from April of 2023).

That reminds me that I’ve seen the AI assistant icon lurking around my blog recently. I’ve tried to ignore it, but I can’t unsee it. I was planning to let the AI assistant have a stab at editing this post so you and I could see what happens. However, I just read the AI Guidelines (which everyone should do), and they contain one warning that concerned me:

“We don’t claim any ownership over the content you generate with our AI features. Please note that you might not have complete ownership over the generated content either! For example, the content generated by AI may be similar to others’ content, including content that may be protected by trademark or copyright; and copyright ownership of generative AI outputs may vary around the world.”

That is yet another reason why I’m cautious about using AI.
