Artificial Intelligence: The University of Iowa Chat From Old Cap

This is a quick follow-up to clarify a few things about Artificial Intelligence (AI) in medicine at the University of Iowa, compared with my impressions of the recent Rounding@Iowa presentation. Also, before writing this post, Sena and I had a spirited conversation about how annoyed we both are by our inability to, in her words, “dislodge AI” from our internet searches.

First of all, I should say that my understanding of the word “ambient” as used by Dr. Misurac was flawed, probably because I assumed it was a specific company name. I found out that it’s often used to describe how AI listens in the background to a clinic interview between clinician and patient. The idea is to let the clinician sit with the patient so they can interact more naturally in real time, face to face.

Further, in this article about AI at the University of Iowa, Dr. Misurac identified the companies involved by name as Evidently and Nabla.

The other thing I want to do in this post is to highlight the YouTube presentation “AI Impact on Healthcare | The University of Iowa Chat From the Old Cap.” I think this is a fascinating discussion led by leaders in patient care, research, and teaching as they relate to the influence of AI.

This also allows me to say how much I appreciated learning from Dr. Lauris Kaldjian during my time working as a psychiatric consultant in the general hospital at University of Iowa Health Care. I respect his judgment very much and I hope you’ll see why. You can read more about his thoughts in this edition of Iowa Magazine.

“There must be constant navigation and negotiation to determine if this is for the good of patients. And the good of patients will continue to depend on clinicians who can demonstrate virtues like compassion, honesty, courage, and practical wisdom, which are characteristics of persons, not computers.” —Lauris Kaldjian, director of the Carver College of Medicine’s Program in Bioethics and Humanities

That Donut Song by Washboard Sam

I got a kick out of a song sung by Catfish Keith last night on the Big Mo Blues Show on KCCK radio. It was “Who Pumped the Wind in My Doughnut.” He often sings songs with lyrics that I mostly don’t understand, and this was one of them, at first. I’ll give you a hint: it’s not a Christmas tune. Catfish Keith covers some old-time blues songs, and this one is for adults only.

Judging from the title of the song and some of the lyrics, you might guess it’s about doughnuts, but it’s not. Don’t bother with the Artificial Intelligence (AI) description, which I did not ask for; AI just pops up in a web search whether you want it to or not:

“Who pumped the wind in my doughnut” is a playful, nonsensical phrase meaning someone has exaggerated or inflated a situation or story to make it seem much bigger than it really is; essentially, they’ve added unnecessary drama or hype to something, like adding air to a doughnut to make it appear larger.

Once again, we see that AI makes stuff up as it goes along, creating a little story that is overly concrete and far from the truth about something it was not built to handle: language that is not literal but a form of humor riddled with innuendo. In this case, the song is about sexual infidelity resulting in a lot of children who don’t resemble the singer because they aren’t his.

Anyway, I found a little background on the song, which was originally performed by a guy called Washboard Sam (born Robert Clifford Brown), a blues artist in the 1930s. He performed “Who Pumped the Hole in My Doughnut” under the name Ham Gravy. I found one reference which says that Washboard Sam performed it and Robert Brown wrote it, and another which shows a picture of the actual record bearing the name Johnny Wilson with the name Ham Gravy just below it. I don’t know whether Johnny Wilson was just another pseudonym. You can find the lyrics of the song identifying it as being by “Washboard Sam via Johnny Wilson.”

You can find a mini-biography about Robert Brown on, of all things, a WordPress blog called The Fried Dough Ho. It has a fair number of posts about doughnuts too. The author knows the song is not about doughnuts. There are also some pretty comical impressions in a blog post entitled “What is he talking about?” regarding the meaning of the lyrics of the song on a Blogger site called The things I think about, when I wish I were sleeping. One of the comments is fairly recent, from 2023. You can also find a Wikipedia biography.

You may never feel the same about doughnuts.

Rounding At Iowa Podcast: “The Promises of Artificial Intelligence in Medicine”

I listened to the recent Rounding@Iowa podcast “The Promises of Artificial Intelligence in Medicine.” You can listen to it below. Those who read my blog already know I’m cautious about and probably prejudiced against AI, especially if you’ve read any of my posts on the subject.

I was a little surprised at how enthusiastic Dr. Gerry Clancy sounded about AI. I expected his guest, Dr. Jason Misurac, to sound that way. I waited for Gerry to mention the hallucinations that AI can sometimes produce. Neither he nor Dr. Misurac said anything about them.

Dr. Misurac mentioned what I think are the ambient AI tools that clinicians can use to make clinic note writing and chart reviews easier. I think he was referring to the company called Ambience.

I remember the Dragon NaturallySpeaking speech-to-text disaster I tried to use years ago to write clinical notes when I was practicing consultation-liaison psychiatry (it was not using AI technology at the time; see my post “The Dragon Breathes Fire Again”). I realize I’m prejudiced against any technology that would make the kinds of mistakes that technology was prone to.

But more importantly, I’m concerned about the kind of mistakes AI made when I experimented with Google Bard on my blog (see posts entitled “How’s It Hanging Bard?” and “Update to Chat with Bard” in April of 2023).

That reminds me that I’ve seen the icon for an AI assistant lurking around my blog recently. I’ve tried to ignore it, but I can’t unsee it. I was planning to let the AI assistant have a stab at editing this post so you and I could see what happens. However, I just read the AI Guidelines (which everyone should do), and they contain one warning that concerned me:

“We don’t claim any ownership over the content you generate with our AI features. Please note that you might not have complete ownership over the generated content either! For example, the content generated by AI may be similar to others’ content, including content that may be protected by trademark or copyright; and copyright ownership of generative AI outputs may vary around the world.”

That is yet another reason why I’m cautious about using AI.

Rounding@IOWA 87: New Treatment Options for Menopause

Join Dr. Clancy and his guests, Drs. Evelyn Ross-Shapiro, Sarah Shaffer, and Emily Walsh, as they discuss the complex set of symptoms and the treatment options for those with significant symptoms from menopause.

CME Credit Available: https://uiowa.cloud-cme.com/course/courseoverview?P=0&EID=81895

Host: Gerard Clancy, MD, Senior Associate Dean for External Affairs, Professor of Psychiatry and Emergency Medicine, University of Iowa Carver College of Medicine

Guests: Evelyn Ross-Shapiro, MD, MPH, Clinical Assistant Professor of Internal Medicine, Clinic Director, LGBTQ Clinic, University of Iowa Carver College of Medicine; Sarah Shaffer, DO, Clinical Associate Professor of Obstetrics and Gynecology, Vice Chair for Education, Department of Obstetrics and Gynecology, University of Iowa Carver College of Medicine; Emily Walsh, PharmD, BCACP, Clinical Pharmacy Specialist, Iowa Health Care

Financial Disclosures: Dr. Gerard Clancy, his guests, and Rounding@IOWA planning committee members have disclosed no relevant financial relationships.

Nurse: The University of Iowa Roy J. and Lucille A. Carver College of Medicine designates this activity for a maximum of 1.00 ANCC contact hour.

Physician: The University of Iowa Roy J. and Lucille A. Carver College of Medicine designates this enduring material for a maximum of 1.00 AMA PRA Category 1 Credit™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

Other Health Care Providers: A certificate of completion will be available after successful completion of the course. (It is the responsibility of licensees to determine if this continuing education activity meets the requirements of their professional licensure board.)

References/Resources:

Fluoride in Your Precious Bodily Fluids

Yesterday, Sena and I talked about a recent news article reporting that a federal judge ordered the Environmental Protection Agency (EPA) to review the allowed level of fluoride in community water supplies. According to the article, the advocacy groups who brought the issue before the judge argue that the currently acceptable level may not be low enough.

A few other news items highlighted the role of politicians in this issue, which seems to come up every few years. One thing led to another, and I noticed a few other web stories about the divided opinions on fluoride in “your precious bodily fluids.” One of them is a comprehensive review published in 2015 outlining the complicated path of scientific research on this topic; there are passionate advocates on both sides of the question of whether to allow fluoride in city water. The title of the paper is “Debating Water Fluoridation Before Dr. Strangelove” (Carstairs C. Debating Water Fluoridation Before Dr. Strangelove. Am J Public Health. 2015 Aug;105(8):1559-69. doi: 10.2105/AJPH.2015.302660. Epub 2015 Jun 11. PMID: 26066938; PMCID: PMC4504307).

This of course led to our realizing that we’ve never seen the film “Dr. Strangelove Or: How I Learned to Stop Worrying And Love the Bomb,” a satire on the Cold War. We watched the entire movie on the Internet Archive yesterday afternoon. The clip below shows one of the funniest scenes, a dialogue between General Jack Ripper and RAF officer Lionel Mandrake about water and fluoridation.

During my web search on the fluoridation topic, one thing I noticed about the Artificial Intelligence (AI) entry on the web was the first line of its summary of the film’s plot: “In the movie Dr. Strangelove, the character Dr. Cox suggests adding fluoride to drinking water to improve oral health.” Funny, I don’t remember a character named Dr. Cox in the film nor the recommendation about adding fluoride to drinking water to improve oral health. Peter Sellers played 3 characters, none of them named Cox.

I guess you can’t believe everything AI says, can you? That’s called “hallucinating” when it comes to debating the trustworthiness of AI. I’m not sure what you call it when politicians say things you can’t immediately check the veracity of.

Anyway, one Iowa expert who regularly gets tapped by reporters on the topic is Dr. Steven Levy, a professor of preventive and community dentistry at the University of Iowa. He leads the Iowa Fluoride Study, which has been going on for many years. In short, Dr. Levy says fluoride in water supplies is safe and effective for preventing tooth decay as long as the level is adjusted within safe margins.

On the other hand, others say fluoride can be hazardous and could cause neurodevelopmental disorders.

I learned that, even in Iowa, there’s disagreement about the health merits versus the risks of fluoridated water. Decisions about whether city water supplies are fluoridated are generally left to local communities. Hawaii is the only state in the union which mandates a statewide ban on fluoride. About 90 percent of Iowa’s cities fluoridate their water. Tama, Iowa stopped fluoridating its water in 2021; then, after a brief period of public education, it restarted fluoridation only six months later.

We use a fluoridated dentifrice and oral rinse every day. We drink fluoridated water, which we offer to the extraterrestrials who occasionally abduct us, but they politely decline because of concern about their precious bodily fluids.

Dirty Deepfakes

I saw an article about the unreliable ability of humans to detect digital deepfakes in audio and video productions (Mai KT, Bray S, Davies T, Griffin LD. Warning: Humans cannot reliably detect speech deepfakes. PLoS One. 2023 Aug 2;18(8):e0285333. doi: 10.1371/journal.pone.0285333. PMID: 37531336; PMCID: PMC10395974.).

I was a little surprised. I thought I was pretty good at detecting the weird cadence of Artificial Intelligence (AI) speech patterns. Maybe not.

And some experts are concerned that even as AI mimics written and spoken grammar well, it continues to make stuff up (called “hallucinations”). In fact, some research shows that AI can display great language skills but can’t form a true model of the world.

And the publisher of the book that my co-editor, Dr. Robert G. Robinson, and I edited 14 years ago (“Psychosomatic Medicine: An Introduction to Consultation-Liaison Psychiatry”) is still sending me requests to sign a contract addendum that would allow the text to be used by AI organizations. I think I’m the only one who gets the messages, because they’re always addressed to both me and Bob, as though Bob lives with me or something.

Sometimes my publisher’s messages sound like they’re written by AI. Maybe I’m just paranoid.

Anyway, this reminds me of a blog post I wrote in 2011, “Going from Plan to Dirt,” which I re-posted last year under the title “Another Blast from the Past.” The current post is slightly different, although the point still applies. Simply put, I don’t think AI can distinguish plan from dirt, and it sometimes makes up dirt.

And if humans can’t distinguish the productions by AI from those of humans, where does that leave us?

AI Does Your Laundry

Recently we had somebody from the appliance store check our brand-new washing machine. The tech said “the noises are normal” and then told us that many of the functions of the washer are run by Artificial Intelligence (AI). That was a new one on us.

Don’t get me wrong. The washer works. What sticks in the craw a little is that many of the settings we took for granted as being under our control are basically run by AI nowadays. I guess that means you can override some of the AI assist settings (which may be adjusted based on grime level, type of fabric, and the relative humidity in Botswana), at least the ones not mandated by the EPA.

Incidentally, I tried to find some free images to use as featured images for this post. The problem is, many free pictures on the web are generated by AI these days, which is why I used the non-AI part of the Microsoft Paint app to make a crude drawing of an AI controlled washing machine.

I realize I’ll have to give up and accept the inevitable takeover of much of human society by AI. On the other hand, the prospect reminds me of the scene in an X-Files episode, “Ghost in the Machine.” A guy gets exterminated by something called the Central Operating System (COS).

Use extra detergent and add more water at your own risk.

Can Artificial Intelligence Learn from My Book?

Recently the publisher of a book co-edited by me and my former psychiatry chair Dr. Robert G. Robinson asked me to sign off on a proposal to involve Artificial Intelligence (AI) in using the work.

The book, “Psychosomatic Medicine: An Introduction to Consultation-Liaison Psychiatry,” is 14 years old now, but is still a practical tool for learners, at least in my opinion.

Interestingly, it looks like the publisher is also trying to contact Dr. Robinson about the proposal through me. I wonder if that means they’re having as much trouble finding him as I have.

I’ve made it clear that I have misgivings about AI, as I’ve indicated in at least one blog post about Google’s AI, which used to be called Bard and has, I think, been renamed Gemini. I think AI is prone to spreading misinformation, which some writers have called “hallucinations.”

The publisher makes it clear that this proposal regarding AI involvement in our book is an “opt in” choice. I gather that means if I don’t opt in, they’ll continue to bug me about it until I do.

That’s unlikely to happen.

About That Artificial Intelligence…

I’ve got a couple of things to get off my chest about Artificial Intelligence (AI). By now, everyone knows about AI telling people to put hot glue on pizza and whatnot. Sena and I talked to a guy at an electronics store who had nothing but good things to say about AI. I mentioned the hot glue and pizza thing, and it didn’t faze him.

I noticed the Psychiatric Times article “AI in Psychiatry: Things Are Moving Fast.” The authors mention the tendency for AI to hallucinate and express appropriate reservations about its limitations.

And then I found something very interesting about AI and Cribbage. How much does AI know about the game? Turns out not much. Any questions? Don’t expect AI to answer them accurately.

Plant Based Cheese Made with Artificial Intelligence Is Only The Beginning!

We tasted plant-based cheese by Kraft yesterday. Sena bought it at Hy-Vee the other day. It’s actually not bad. The company is called Kraft NotCo. They make Not Cheese. It’s made with chickpeas, which are the same thing as garbanzo beans. You can also buy plant-based mayo, called Not Mayo. I don’t know if it’s made with chickpeas.

Sena could have got Not Mayo; instead, she got Miracle Whip, a miracle by itself because she likes “real” mayo.

What’s really interesting about these products is how they’re made. On the Kraft Heinz NotCo website, you’ll find a description of these products in the About section entitled “Not Your Average Joint Venture.” One line is thought-provoking:

“Our partnership reimagines the brands you love from Kraft Heinz using proprietary AI from NotCo to give you the plant-based version of your favorite foods that deliver on taste and performance.”

I’m assuming that AI stands for Artificial Intelligence (not “Absolutely Inedible”). So, how did Artificial Intelligence get involved? What does the AI actually do? Does it come up with the recipes for Not Foods? Are tiny bits of genetic code and nanobots involved?

Does this mean we’ll become enslaved by AI powered men in black who conspire with extraterrestrials to collect human embryos to create the giant Cheese Bots who take over the earth making it a gigantic assembly line to make smartphones that will make it easier to butt dial your congress persons to demand more laws making Home Owners Associations covenants mandatory and violators punishable by the giant garbage goblin in the well-known X-Files documentary “Arcadia”?

No; no, it does not mean that. You can safely eat AI manufactured chickpea products without fear of being transformed into an Extraterrestrial-Robot-Not Cheese hybrid super soldier marching on Washington, D.C. to force feed congress persons with Braunschweiger and Not Cheese Sandwiches with Not Mayo on Not Wheat Bread and Not Lemonade.

I kind of like Not Cheese and I don’t feel any different.

The Dragon Breathes Fire Again

The other day, Sena and I saw a news video about a technology called “DAX,” which uses Artificial Intelligence (AI) and promises to reduce or even eliminate “pajama time” for physicians trying to get their clinical note dictations done during the day instead of taking them home for several more hours of work.

The video was a demo of the technology, which looked like it recorded a clinical interview between the doctor and the news reporter; DAX not only records the interview but transcribes it. I didn’t see exactly how DAX was recording without obvious audio equipment. Was it doing it through the smartphone speaker? This was very different from how I and many other clinicians dictated our notes, using a headset at our desks in front of our desktop computers.

Later, I discovered that DAX stands for Dragon Ambient Experience, made by Nuance, which was acquired by Microsoft in 2022. I posted about Dragon products and their limitations last year. The product often produced hilarious mistakes during dictation, which required careful editing. Sometimes more errors turned up after you completed a note; these were visible in the patient’s medical record and would then need to be corrected.

Several years ago, I remember talking to somebody from Dragon on the phone about the problems I was having. She was a little defensive when I told her I’d been having difficulty with Dragon for quite a while because it made so many mistakes.

A recent article on the web revealed that the errors continue with DAX. According to the article, “…it will make mistakes. Sometimes it might omit clinical facts; sometimes it may even hallucinate something.” I remember trying to communicate with the Google Bard AI, which seemed to do this pretty often. It made stuff up.

DAX is not cheap. The article reveals that one hospital pays $8,000-$10,000 per year per physician to use it. And skeptics worry that the system still has too many bugs, which can lead to bias and inaccurate information that could negatively affect patient outcomes.

A recently published JAMA network article also urges caution in adoption of this sort of AI-assisted technology (Harris JE. An AI-Enhanced Electronic Health Record Could Boost Primary Care Productivity. JAMA. Published online August 07, 2023. doi:10.1001/jama.2023.14525).

In this case, I think it’s appropriate to say “I told you so.”