When It Comes to AI, What Are We Really Talking About?

I’ve been reading about artificial intelligence (AI) in general and its healthcare applications. When I tried a general web search on the topic, I got the message: “An AI Overview is not available for this search.”

I’m ambivalent about that message. I did find a couple of web articles. One of them, “Are we living in a golden age of stupidity?”, I read twice in its entirety. The other, “AI, Health, and Health Care Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence,” was so long and diffuse that I got impatient and tried to skip to the bottom line, but the article was a bottomless pit. The conflict-of-interest disclosures section alone was overwhelming. Was that part of the reason I felt like I had fallen down the rabbit hole?

I recently signed an addendum to my book contract for my consult psychiatry handbook (published in 2010, for heaven’s sake), which I hope will ultimately protect the work from AI plagiarism. I have no idea whether it can. I delayed signing it for months, probably because I didn’t want to have anything to do with AI at all. I couldn’t discuss the addendum with my co-editor, Dr. Robert G. Robinson, because he died on December 25, 2024.

I found out today that the book has been on the Internet Archive for a couple of years now. One notice about it says “Borrow Unavailable” and another says “Book available to patrons with print disabilities.”

All I know is that an “archivist” uploaded it. The introduction and the first chapter, “The consultation process,” are available for free online in PDF format. I didn’t know that until today either.

Way back in 2010 we didn’t use anything you could call AI when we wrote the chapters for the book. I didn’t even dictate my chapters, because the only tool available would have been voice dictation software called Dragon NaturallySpeaking. It was notorious for inserting so many errors into my dictated clinic notes that some clinicians added an addendum warning the reader that the notes were transcribed using voice dictation software, implying the author was less than fully responsible for the contents. That was because the mistakes often appeared after we signed off on the notes as finished, which sent them to the patient’s medical record.

Sometimes I think that was the forerunner of the confabulations of modern-day AI, which are often called hallucinations.

Now AI is creating the clinic notes. It cuts down on the “pajama time” that contributes to clinician burnout, although it’s not always clear who’s ultimately responsible for quality control. Who’s in charge of regulatory oversight of AI? What are we talking about?

Should We Trust Artificial Intelligence?

I’ve read a couple of articles about artificial intelligence (AI) recently, and I’m struck by how readily one can get the idea that AI tends to “lie” or “confabulate”; sometimes the word “hallucinate” is used. The term “hallucinate” doesn’t seem to fit as well as “confabulate,” which I’ll come back to later.

One of the articles is an essay by Dr. Ronald Pies, “How ‘Real’ Are Psychiatric Disorders? AI Has Its Say.” It was published in the online version of Psychiatric Times. Dr. Pies obviously does a superb job of talking with AI and I had as much fun reading the lightly edited summaries of his conversation with Microsoft CoPilot as I had reading the published summary of his conversations with Google Bard about a year or so ago.

I think Dr. Pies is an outstanding teacher, and I get the sense that his questions to AI do as much to teach it how to converse with humans as they do to shed light on how well it handles the questions he raises. He points out that many of us (including me) tend to react with fear when the topic of AI in medical practice arises.

The other article I want to briefly discuss is one I read in JAMA Network, “An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean?”

Hswen Y, Rubin R. An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean? JAMA. Published online December 27, 2024. doi:10.1001/jama.2024.23860. Accessed January 6, 2025.

I think the conversation among the authors was refreshing. Just because the title of the article suggests that AI might take the place of physicians in the consulting room doesn’t mean that was the prevailing opinion of the authors. In fact, they made it clear they weren’t recommending that.

I liked Dr. Chen’s comment about confabulation and hallucinations of AI:

“A key topic I talk about is confabulation and hallucination. These things are remarkably robust, and only getting better, but they also just make stuff up. The problem isn’t that they’re wrong sometimes. Lab tests are wrong sometimes. Humans are definitely wrong sometimes. The problem is they sound so convincing, they confabulate so well. “Your patient has an alcohol problem, Wernicke-Korsakoff syndrome.” It’s only if you double check, you’ll realize, “Wait a minute, that wasn’t actually true. That didn’t make sense.” As long as you’re vigilant about that and understand what they can and can’t do, I think they’re remarkably powerful tools that everyone in the world needs to learn how to use.”

What’s interesting about this comment is the reference to Wernicke-Korsakoff syndrome (WKS), which can be marked by confabulation. In WKS the main cause is thiamine deficiency; it’s really not clear how the analogous behavior arises in AI. In both cases, confabulation involves inventing information, which is technically not the same as lying.

Unfortunately, this contrasts sharply with the fact-checking Snopes article I wrote about recently, which suggests that humans are teaching AI to lie and scheme.

In any case, it’s prudent to regard AI output with skepticism. My conversations with Google Bard clearly elicited confabulation. Also, it didn’t get humor, so I wouldn’t use it as a conversational tool, given that I’m prone to kidding around. As for trusting AI, I probably wouldn’t trust it as far as I could throw it.