Confabulation in Artificial Intelligence

I saw an excellent discussion of Artificial Intelligence (AI) in Dr. George Dawson’s post today, “The Problems With AI Are More Readily Apparent.” He mentions AI producing confabulations, which in the AI field are often called hallucinations. I distinguish hallucinations from confabulations because the former are sensory or perceptual disturbances while confabulations are fabrications. In dementia, confabulations are often described as attempts to fill gaps in knowledge or memory without any intention to deceive.

I sometimes read that AI is being sycophantic in its interactions with humans, which sounds a little strange to me. Sycophancy means trying to please someone by being obsequiously flattering. It’s puzzling: I have gotten that impression from AI, yet it’s a machine, and sycophancy is a human trait I don’t know how anyone would program into it.

On the other hand, I got a comically sycophantic reply from AI to my question about the definition of Calvinball Cribbage. It’s just a made-up term and, to be fair, I’ve also gotten blunt denials from AI that there is any such thing as Calvinball Cribbage.

As a reminder, Calvinball is a term that arose from the comic strip Calvin and Hobbes. In the strip, the characters invent a game called Calvinball whose only permanent rule is that you make up the rules as you go along, although there is a goal of capturing your opponent’s flag. You can’t play it the same way twice.

The most recent hilarious AI confabulation, which I’ve posted about before, came up after Sena and I played our game of 11-card cribbage. There are cribbage players out there who make fun of such variations on standard 6-card cribbage, and one of them dubbed ours “Calvinball crib.” My simple question, as usual, was “What is Calvinball Cribbage?” The reply is copied below:

“Calvinball Cribbage is a humorous mashup of the chaotic, rule-free game from Calvin & Hobbes with traditional Cribbage, often involving themed boards or rule modifications, where the core idea is applying Calvinball’s fluid, make-it-up-as-you-go spirit to Cribbage’s scoring and play, adding wild, non-sensical rules like using mallets, changing hand values, or scoring points for silly actions, making each round unique and unpredictable, not a single defined game but a creative approach to Cribbage. 

What it is (The Calvinball Aspect):

  • No Fixed Rules: Just like original Calvinball, there’s no single set of rules; the players invent them as they go.
  • Rule Changes: A key rule is that rules can’t be repeated, and players constantly invent new scoring, actions, or penalties.
  • Props & Actions: You might use mallets, balls, trees, or make up actions like “scoring three points for pointing at a cloud” or “losing a turn for looking left”. 

How it Applies to Cribbage:

  • Themed Boards: You can find or make custom cribbage boards featuring Calvin & Hobbes, adding visual flair.
  • Modified Scoring: Instead of just points for pairs and runs, you might add a “Hobbes Bonus” for a specific card or penalize for “too much logic”.
  • Strategic Chaos: Players still play cribbage (cards, pegs, crib), but the gameplay is constantly disrupted by invented rules, turning strategic plays into potential jokes or absurd victories. 

Example Calvinball Cribbage “Rules”:

  • “If you play a 7, you must hop on one foot until your next turn”.
  • “The crib gets 5 points if it contains a card with a face on it” (referencing Calvin’s drawings).
  • “A ‘skunk’ is achieved by making your opponent laugh out loud”. 

In essence, Calvinball Cribbage is about injecting unpredictable fun and silliness into the structured world of cribbage, making it a game of spontaneous creativity rather than strict strategy. “

This could be an example of sycophantic confabulation because almost all of it was probably cooked up by AI from scanning my blog posts about the subject. What’s a little eerie about it is that I never published any specific rules or strategies about the fictional game of Calvinball crib. You can actually find and purchase a Calvin and Hobbes cribbage board from a shop in Canada.

While this is an example of the occasional comicality of AI, that shouldn’t lead us to deny that it can do scary things, some of which Dr. Dawson describes. My early experiences a couple of years ago with what used to be called Google Bard were somewhat alarming because it confabulated then. And it still does.

Artificial Intelligence in Managing Messages from Patients

I ran across another interesting article in the JAMA Network about Artificial Intelligence (AI) with respect to health care organizations managing messages from patients to doctors and nurses. The shorthand for this in the article is “in-basket burden.” Health care workers respond to a large number of patients’ questions, and it can lead to burnout. Some organizations are testing AI by letting it draft replies to patients. The results of the quality improvement study were published in a paper:

English E, Laughlin J, Sippel J, DeCamp M, Lin C. Utility of Artificial Intelligence–Generative Draft Replies to Patient Messages. JAMA Netw Open. 2024;7(10):e2438573. doi:10.1001/jamanetworkopen.2024.38573

One of the fascinating things about this is the trouble we have naming AI’s misinformation problem. We tend to use two terms interchangeably: hallucination and confabulation. Whatever you call it, the problem interferes with communication between health care workers and patients.

Dr. English describes the interference as a “whack-a-mole” issue, meaning that every time they think they have the hallucination/confabulation problem licked, the AI comes up with another case of miscommunication.

Just for fun, I did a web search trying to find out whether “hallucination” or “confabulation” fits the AI behavior better. Computer experts tend to use the term “hallucination,” and neuropsychologists seem to prefer “confabulation.” I think this community chat site gives a pretty even-handed discussion of the distinction. I prefer the term “confabulation.”

Anyway, there are other substantive issues with how using AI drafts for patient messaging affects communication. I think it’s interesting that patients tend to think AI is more empathetic than medical practitioners. As Dr. English puts it: “This GPT is nicer than most of us,” and “And ChatGPT, or any LLM, isn’t busy. It doesn’t get bored. It doesn’t get tired.” The way that’s worded made me think of a scene from a movie:

OK, so I’m kidding, a little. I think it’s important to move carefully down the path of idealizing AI. I think back to the recent news article about humans teaching AI how to lie and scheme. I remember searching the web with the question “Can AI lie?” and getting a reply from Gemini, because I have no choice about whether it gives me its two cents. I’m paraphrasing, but it said essentially, “Yes, AI can lie, and we’re getting better with practice.”

I like Dr. English’s last statement, in which she warns us that AI can be a fun tool, but one clinicians need to regard with healthy skepticism. It may say things you might be tempted to gloss over or even ignore, like:

“I’ll be back.”