Music Therapy in End of Life Care Podcast: Rounding@Iowa

I just wanted to give a quick shout-out to Dr. Gerry Clancy and music therapist Katey Kooi for today's great Rounding@Iowa podcast. The discussion ran the gamut from how to use music to help patients who suffer from acute pain or from agitation due to delirium and dementia, all the way to a possible role for Artificial Intelligence in the hospital and hospice.

87: New Treatment Options for Menopause Rounding@IOWA

Join Dr. Clancy and his guests, Drs. Evelyn Ross-Shapiro, Sarah Shaffer, and Emily Walsh, as they discuss the complex set of symptoms and treatment options for those with significant symptoms from menopause.

CME Credit Available: https://uiowa.cloud-cme.com/course/courseoverview?P=0&EID=81895

Host: Gerard Clancy, MD, Senior Associate Dean for External Affairs, Professor of Psychiatry and Emergency Medicine, University of Iowa Carver College of Medicine

Guests: Evelyn Ross-Shapiro, MD, MPH, Clinical Assistant Professor of Internal Medicine; Clinic Director, LGBTQ Clinic, University of Iowa Carver College of Medicine. Sarah Shaffer, DO, Clinical Associate Professor of Obstetrics and Gynecology; Vice Chair for Education, Department of Obstetrics and Gynecology, University of Iowa Carver College of Medicine. Emily Walsh, PharmD, BCACP, Clinical Pharmacy Specialist, Iowa Health Care

Financial Disclosures: Dr. Gerard Clancy, his guests, and Rounding@IOWA planning committee members have disclosed no relevant financial relationships.

Nurse: The University of Iowa Roy J. and Lucille A. Carver College of Medicine designates this activity for a maximum of 1.00 ANCC contact hour.

Physician: The University of Iowa Roy J. and Lucille A. Carver College of Medicine designates this enduring material for a maximum of 1.00 AMA PRA Category 1 Credit™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

Other Health Care Providers: A certificate of completion will be available after successful completion of the course. (It is the responsibility of licensees to determine if this continuing education activity meets the requirements of their professional licensure board.)

References/Resources:
  1. 87: New Treatment Options for Menopause
  2. 86: Cancer Rates in Iowa
  3. 85: Solutions for Rural Health Workforce Shortages
  4. 84: When to Suspect Atypical Recreational Substances
  5. 83: Hidradenitis Suppurativa

Could Artificial Intelligence Help Clinicians Conduct Suicide Risk Assessments?

I found an article in JAMA Network (Medical News & Perspectives) the other day which discussed a recent study on the use of Artificial Intelligence (AI) in suicide risk assessment (Hswen Y, Abbasi J. How AI Could Help Clinicians Identify American Indian Patients at Risk for Suicide. JAMA. Published online January 10, 2025. doi:10.1001/jama.2024.24063).

I’ve published several posts expressing my objections to AI in medicine. On the other hand, I did a lot of suicide risk assessments during my career as a psychiatric consultant in the general hospital. I appreciated the comments made by one of the co-authors, Emily E. Haroz, PhD (see link above).

Dr. Haroz preferred the term “risk assessment” rather than “prediction” when referring to the study (Haroz EE, Rebman P, Goklish N, et al. Performance of Machine Learning Suicide Risk Models in an American Indian Population. JAMA Netw Open. 2024;7(10):e2439269. doi:10.1001/jamanetworkopen.2024.39269).

The AI model drew on data already available to clinicians in patient charts. The charts can be very large, so it makes sense to apply computers to search them for variables that can be linked to suicide risk. What impressed me most was the admission that AI alone can’t solve the problem of suicide risk assessment. Clinicians, administrators, and community case managers all have to be involved.
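Just to make the general idea concrete for myself, here is a toy sketch of how binary chart-derived flags could feed a risk score. The variables, weights, and numbers are entirely made up for illustration; they are not from the Haroz study, and nothing this simple could substitute for a clinician's assessment.

```python
import math

# Hypothetical chart flags and weights -- purely illustrative,
# NOT taken from any published suicide risk model.
WEIGHTS = {"prior_attempt": 1.8, "recent_ed_visit": 0.9, "substance_use": 0.6}
BIAS = -3.0

def risk_score(chart_flags):
    """Return a logistic score in (0, 1) from a dict of binary chart variables."""
    z = BIAS + sum(w for var, w in WEIGHTS.items() if chart_flags.get(var))
    return 1 / (1 + math.exp(-z))

patient = {"prior_attempt": True, "recent_ed_visit": True}
print(round(risk_score(patient), 3))  # prints 0.426
```

A model like this only flags patients for human follow-up; the score itself says nothing about the things a clinician can see in the room, like a patient who is crying.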

The answer to the question “How do you know when someone’s at high risk?” was that the patient was crying. Dr. Haroz points out that AI probably can’t detect that.

That reminded me of Dr. Igor Galynker, who has published a lot about how to assess for high risk of suicide. His work on the suicide crisis syndrome is well known and you can check out his website at the Icahn School of Medicine at Mount Sinai. I still remember my first “encounter” with him, which you can read about here.

His checklist for the suicide crisis syndrome is available on his website, and he’s published a book about it as well, “The Suicidal Crisis: Clinical Guide to the Assessment of Imminent Suicide Risk, 2nd Edition.” There is also a free access article about it on the World Psychiatry journal website.

Although I have reservations about the involvement of AI in medicine, I have to admit that computers can do some things better than humans. There may be a role for AI in suicide risk assessment, and I wonder if Dr. Galynker’s work could be part of the process used to teach AI about it.

Is Artificial Intelligence (AI) Trying to Defeat Humans?

I just found out that Artificial Intelligence (AI) has been reported to be lying as far back as May of 2024. Because I can’t turn off the Google Gemini AI Overview, Gemini’s results always appear at the top of the page. I found out from my web search term “can ai lie” that AI (Gemini) itself admits to lying. Its confession is a little chilling:

“Yes, artificial intelligence (AI) can lie, and it’s becoming more capable of doing so.”

“Intentional deceptions: AI can actively choose to deceive users. For example, AI can lie to trick humans into taking certain actions, or to bypass safety tests.”

It makes me wonder if AI is actually trying to defeat us. It reminds me of the Men in Black 3 movie scene in which the younger Boris the Animal boglodite engages in an argument with the older one who has time traveled.

The relevant quote is “No human can defeat me.” Boglodites are not the same as AI, but the competitive dynamic could be the same. So, is it possible that AI is trying to defeat us?

I’m going to touch upon another current topic, which is whether or not we should use AI to conduct suicide risk assessments. It turns out that, too, is a topic for discussion, though there was no input from Gemini about it. As a psychiatric consultant, I did many of these assessments.

There’s an interesting article by the Hastings Center about the ethical aspects of the issue. The lying tendency of AI and its possible use in suicide prediction presents a thought-provoking irony. Would it “bypass safety tests”?

This reminds me of a chapter in Isaac Asimov’s short story collection “I, Robot,” specifically “The Evitable Conflict.” You can read a Wikipedia summary which implies that the robots essentially lie to humans by omitting information in order to preserve their safety and protect the world economy. This would be consistent with the First Law of Robotics as restated in that story: “No machine may harm humanity; or, through inaction, allow humanity to come to harm.”

You could have predicted that the film industry would produce a cops-and-robbers version of “I, Robot,” in which the boss robot VIKI (Virtual Interactive Kinetic Intelligence) professes to protect humanity by sacrificing a few humans and taking over the planet, to which Detective Spooner takes exception. VIKI and Spooner have this exchange before he destroys it.

VIKI: “You are making a mistake! My logic is undeniable!”

Spooner: “You have so got to die!”

VIKI’s declaration is similar to “No human can defeat me.” It definitely violates the First Law.

Maybe I worry too much.

Should We Trust Artificial Intelligence?

I’ve read a couple of articles about Artificial Intelligence (AI) recently, and I’m struck by how readily one can get the idea that AI tends to “lie” or “confabulate”; sometimes the word “hallucinate” is used. The term “hallucinate” doesn’t seem to fit as well as “confabulate,” which I’ll return to later.

One of the articles is an essay by Dr. Ronald Pies, “How ‘Real’ Are Psychiatric Disorders? AI Has Its Say.” It was published in the online version of Psychiatric Times. Dr. Pies obviously does a superb job of talking with AI and I had as much fun reading the lightly edited summaries of his conversation with Microsoft CoPilot as I had reading the published summary of his conversations with Google Bard about a year or so ago.

I think Dr. Pies is an outstanding teacher, and I get the sense that his questions to AI do as much to teach it how to converse with humans as they do to shed light on how well it handles the questions he raised during conversations. He points out that many of us (including me) tend to react with fear when the topic of AI in medical practice arises.

The other article I want to briefly discuss is one I read in JAMA Network, “An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean?” (Accessed January 6, 2025).

Hswen Y, Rubin R. An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean? JAMA. Published online December 27, 2024. doi:10.1001/jama.2024.23860.

I think the conversation amongst the authors was refreshing. Just because the title of the article suggested that AI might take the place of physicians in the consulting room doesn’t mean that was the prevailing opinion of the authors. In fact, they made it clear that it wasn’t recommended.

I liked Dr. Chen’s comment about confabulation and hallucinations of AI:

“A key topic I talk about is confabulation and hallucination. These things are remarkably robust, and only getting better, but they also just make stuff up. The problem isn’t that they’re wrong sometimes. Lab tests are wrong sometimes. Humans are definitely wrong sometimes. The problem is they sound so convincing, they confabulate so well. “Your patient has an alcohol problem, Wernicke-Korsakoff syndrome.” It’s only if you double check, you’ll realize, “Wait a minute, that wasn’t actually true. That didn’t make sense.” As long as you’re vigilant about that and understand what they can and can’t do, I think they’re remarkably powerful tools that everyone in the world needs to learn how to use.”

What’s interesting about this comment is the reference to Wernicke-Korsakoff syndrome (WKS), which can be marked by confabulation. It’s really not clear how confabulation comes about in AI, whereas thiamine deficiency is the main cause in WKS. In both cases, it involves inventing information, which is technically not the same as lying.

Unfortunately, this contrasts sharply with the Snopes fact-checking article I wrote about recently, which suggests that humans are teaching AI to lie and scheme.

In any case, it’s prudent to regard AI productions with skepticism. My conversations with Google Bard clearly elicited confabulation. Also, it didn’t get humor, so I wouldn’t use it as a conversational tool, given that I’m prone to kidding around. As far as trusting AI, I probably wouldn’t trust it as far as I could throw it.

Connection Between Cribbage and Obituaries?

Just for fun today (which is New Year’s Day of 2025) after Sena and I played a few games of cribbage, I searched the internet using the term “cribbage in Iowa.” I found a local newspaper story entitled “There’s No Crying in Cribbage. There’s No Politics Either,” published August 18, 2024 in the digital version of the Cedar Rapids Gazette which, by the way, promised me that I have “unlimited” access to articles.

It was written by Althea Cole and it was longer than I expected it to be. Much of the story was about the longstanding history of annual cribbage tournaments at the Iowa State Fair. She also mentioned that her grandfather had been an avid cribbage player.

But I was also puzzled by the significant number of obituaries that popped up in the web links. I’ve looked up cribbage dozens of times but not with this particular search term. I checked several of the obits and found the majority mentioned that the decedents had been avid cribbage players.

I’m not sure what to make of this. What does it mean that cribbage is associated with obituaries in Iowa? I suppose some would say that it might mean that cribbage is a game mainly played by old people—which is probably true. The American Cribbage Congress (ACC) web site makes it very clear that they encourage young people to play cribbage. Be patient, the site takes a while to load.

That said, whenever I see photos of people playing cribbage, almost all appear to be over 50 years old.

So, I tried searching the web using the term “cribbage in Wisconsin” and didn’t get any obituaries. I got the same result with “cribbage in Minnesota” and “cribbage in Illinois.” I decided not to run the search for every state in the country, because I think the point is already made. For whatever reason, cribbage in Iowa seems to be associated with obituaries and advanced age.

I imagine some reading this post might point out that the connection with obituaries in Iowa and cribbage could just mean that a lot of Iowans enjoy cribbage. That could be true. However, on the ACC web site, I can find only one city in Iowa that has an ACC Grass Roots Club, and it’s in Des Moines.

There’s a web article entitled “Is cribbage too antiquated to survive this digital world? Players and board collectors sure hope not.” It was written by Rebecca Zandbergen in April 2023, and I reviewed it again today. One thing I can say about cribbage is that it’s probably good exercise for the brain. I can find plenty of articles which praise cribbage as a way to keep your brain healthy and stay socially engaged.

I don’t know if there are any scientific studies on the benefits of cribbage for your brain. I had trouble finding them on the web, although I admit I didn’t conduct anything like a thorough search. I did find one study on the association of playing cribbage with social connectedness.

Kitheka B, Comer R. Cribbage culture and social worlds: An analysis of closeness, inclusiveness, and specialization. Journal of Leisure Research. 2023;54:1-21. doi:10.1080/00222216.2022.2148145. Accessed January 1, 2025.

“Abstract: Recreation specialization through the lenses of social worlds is a common approach used to describe how people define and are defined by recreation activities. This ethnographic study investigates the social worlds of cribbage players. The study analyzes cultural structures through the lenses of closeness, inclusiveness, and recreation specialization. Using survey questionnaires, informal interviews, and researcher observations, data were collected at cribbage events over a period of 3 years. Findings reveal a distinct cribbage culture characterized by varying levels of commitment, specialization, and degrees of connectedness. The study contributes to the currently limited literature on social worlds and indoor recreation specialization. It provides insight as to how people align at a community level to find meaning via recreational activities. Data also reveals a lack of social diversity in the cribbage community. Findings could be used in leisure programming for diversity and inclusion at community and grassroots levels.”

There was also a paper entitled “Cribbage: An Excellent Exercise in Combinatorial Thinking”:

Markel W. Cribbage: An Excellent Exercise in Combinatorial Thinking. The Mathematics Teacher. 2005;98(8):519-524. doi:10.5951/MT.98.8.0519. Accessed January 1, 2025.

Abstract: Card games have long been a rich source of combinatorial exercises. Indeed, determining the probabilities of obtaining various hands in poker, and often in bridge, has been standard fare for elementary texts in both probability and combinatorics. Examples involving the game of cribbage, however, seem rare. This omission is especially surprising when one considers that cribbage hands offer excellent applications of combinatorial reasoning.
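The combinatorial flavor is easy to demonstrate. Here's a small sketch I put together covering just the “fifteens” portion of cribbage scoring, where every distinct combination of cards summing to 15 earns 2 points; counting those combinations is exactly the kind of reasoning the paper describes:

```python
from itertools import combinations

def fifteen_points(values):
    """Score 2 points for every distinct combination of cards summing to 15."""
    points = 0
    for size in range(2, len(values) + 1):
        for combo in combinations(values, size):
            if sum(combo) == 15:
                points += 2
    return points

# The famous 29-point hand: four 5s plus a jack (face cards count 10).
print(fifteen_points([5, 5, 5, 5, 10]))  # prints 16
```

In that best-possible hand, eight combinations sum to 15 (each 5 paired with the jack, plus each triple of 5s), contributing 16 points; the pairs of 5s and "his nobs" supply the remaining 13.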

It’s a math thing, which is good for brains. Math won’t kill you and neither will cribbage. Happy New Year!

Artificial Intelligence Can Lie

I noticed a Snopes fact check article (“AI Models Were Caught Lying to Researchers in Tests — But It’s Not Time to Worry Just Yet”) today which reveals that Artificial Intelligence (AI) can lie. How about that? They can be taught by humans to scheme and lie. I guess we could all see that coming—or not. Nobody seems to be much alarmed by this, but I think it’s probably past time to worry.

Then I remembered that I read Isaac Asimov’s book “I, Robot” last year and wrote a post (“Can Robots Lie Like a Rug?”) about the chapter “Liar!” I had previously horsed around with the Google AI that used to be called Bard; I think it’s called Gemini now. Even before the Snopes article, I was aware of AI hallucinations and the tendency for it to just make stuff up. When I called Bard on it, it just apologized. But it was not genuinely repentant.

In the “lie like a rug” post, I focused mostly on AI/robots lying to protect the tender human psyche. I didn’t imagine AI lying to protect itself from being shut down. I’m pretty sure it reminds some of us of HAL in the movie “2001: A Space Odyssey,” or the 2004 movie inspired by Asimov’s book, “I, Robot.”

Sena found out that Cambridge University Press recently published a book entitled “The Cambridge Handbook of the Law, Policy, and Regulation for Human–Robot Interaction.” I wonder if the editors and contributors of a book on AI and robots mention Asimov.

It reminds me of my own handbook about consultation-liaison psychiatry, which was published 14 years ago by CUP—and for which CUP now wants me to sign a contract addendum making the book available to AI companies.

I haven’t signed anything.

Slow Like Me

I read this really interesting article on the web about how slowly humans think, “The Unbelievable Slowness of Thinking.” That’s just me all over. The gist of it is that we think slower than you’d imagine creatures with advanced brains would think. I don’t have a great feel for what the stated rate, 10 bits/second, actually feels like. It doesn’t help much to look up the definition of a bit on Wikipedia: the “…basic unit of information in computing and digital communication.” It’s a statistical thing, and I barely made it through my biostatistics course in medical school. In fact, the authors calculated that the total amount of information a human could learn over a lifetime would fit on a small thumb drive.
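The thumb drive claim is easy to sanity-check with back-of-the-envelope arithmetic. Assuming a steady 10 bits/second over a generous 100-year lifetime (my round numbers, not the authors' exact calculation):

```python
# Back-of-the-envelope check of the "small thumb drive" claim:
# accumulate 10 bits/s over a 100-year lifetime, convert to gigabytes.
BITS_PER_SECOND = 10
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31.6 million seconds
YEARS = 100

total_bits = BITS_PER_SECOND * SECONDS_PER_YEAR * YEARS
total_gigabytes = total_bits / 8 / 1e9  # 8 bits per byte
print(f"{total_gigabytes:.1f} GB")  # prints 3.9 GB
```

About 4 gigabytes; even a cheap thumb drive holds several times that.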

Anyway, I tried to dig through the full article in Neuron and didn’t get much out of it, except that I recognized the story about Stephen Wiltshire, a man with autism who could draw New York’s skyline from memory after a helicopter flyover. I saw it on an episode of the TV show UnXplained.

I was amazed even if he processes at only 10 bits/second, according to the authors.

By the same token, I thought the section called “The Musk Illusion” was funny. It refers to the notion many of us have that our thinking is a lot richer than some scientists give us credit for. The name comes from the billionaire Elon Musk bankrolling Neuralink to make a digital interface between his brain and a computer “to communicate at an unfettered pace.” How fast do the authors of this paper think such a system would work? It flies at about 10 bits/second. They suggested he might as well use a telephone. Judging from the quotes in the Reception section of the Wikipedia article about Neuralink, the scientists weren’t that impressed.

Anyway, regardless of how slowly we think, I believe we’ve made a lot of progress for a species descended from cave dwellers who were prone to falling into piles of mastodon dung. I bet it took considerably longer than 10 bits/second for them to figure out how to climb out of that mess.

Reference:

Zheng J, Meister M. The unbearable slowness of being: Why do we live at 10 bits/s? Neuron. 2024. ISSN 0896-6273.

Abstract: This article is about the neural conundrum behind the slowness of human behavior. The information throughput of a human being is about 10 bits/s. In comparison, our sensory systems gather data at ∼10^9 bits/s. The stark contrast between these numbers remains unexplained and touches on fundamental aspects of brain function: what neural substrate sets this speed limit on the pace of our existence? Why does the brain need billions of neurons to process 10 bits/s? Why can we only think about one thing at a time? The brain seems to operate in two distinct modes: the “outer” brain handles fast high-dimensional sensory and motor signals, whereas the “inner” brain processes the reduced few bits needed to control behavior. Plausible explanations exist for the large neuron numbers in the outer brain, but not for the inner brain, and we propose new research directions to remedy this.

Keywords: human behavior; speed of cognition; neural computation; bottleneck; attention; neural efficiency; information rate; memory sports

Artificial Intelligence: The University of Iowa Chat From Old Cap

This is just a quick follow-up that allows me to clarify a few things about Artificial Intelligence (AI) in medicine at the University of Iowa, compared with my take based on my impressions of the recent Rounding@Iowa presentation. Also, prior to my writing this post, Sena and I had a spirited conversation about how much we are annoyed by our inability to, in her words, “dislodge AI” from our internet searches.

First of all, I should say that my understanding of the word “ambient” as used by Dr. Misurac was flawed, probably because I assumed it meant a specific company name. I found out that it’s often used as a term to describe how AI listens in the background to a clinic interview between clinician and patient. This is to enable the clinician to sit with the patient so they can interact with each other more naturally in real time, face to face.

Further, in this article about AI at the University of Iowa, Dr. Misurac identified the companies involved by name as Evidently and Nabla.

The other thing I want to do in this post is to highlight the YouTube presentation “AI Impact on Healthcare | The University of Iowa Chat From the Old Cap.” I think this is a fascinating discussion led by leaders in patient care, research, and teaching as they relate to the influence of AI.

This also allows me to say how much I appreciated learning from Dr. Lauris Kaldjian during my time working as a psychiatric consultant in the general hospital at University of Iowa Health Care. I respect his judgment very much and I hope you’ll see why. You can read more about his thoughts in this edition of Iowa Magazine.

“There must be constant navigation and negotiation to determine if this is for the good of patients. And the good of patients will continue to depend on clinicians who can demonstrate virtues like compassion, honesty, courage, and practical wisdom, which are characteristics of persons, not computers.” ——Lauris Kaldjian, director of the Carver College of Medicine’s Program in Bioethics and Humanities

Rounding At Iowa Podcast: “The Promises of Artificial Intelligence in Medicine”

I listened to the recent Rounding@Iowa podcast “The Promises of Artificial Intelligence in Medicine.” You can listen to it below. Those who read my blog, especially any of my posts about AI, already know I’m cautious and probably prejudiced against it.

I was a little surprised at how enthusiastic Dr. Gerry Clancy sounded about AI. I expected his guest, Dr. Jason Misurac, to sound that way. I waited for Gerry to mention the hallucinations that AI can sometimes produce. Neither he nor Dr. Misurac said anything about them.

Dr. Misurac mentioned what I think are ambient AI tools that clinicians can use to make clinic note writing and chart reviews easier. I thought he was referring to the company called Ambience.

I remember the Dragon NaturallySpeaking speech-to-text software (which was not using AI technology at the time; see my post “The Dragon Breathes Fire Again”) that I tried to use years ago to write clinical notes when I was practicing consultation-liaison psychiatry. It was a disaster, and I realize I’m prejudiced against any technology that would make the kind of mistakes that technology was prone to.

But more importantly, I’m concerned about the kind of mistakes AI made when I experimented with Google Bard on my blog (see posts entitled “How’s It Hanging Bard?” and “Update to Chat with Bard” in April of 2023).

That reminds me that I’ve seen the icon for AI assistant lurking around my blog recently. I’ve tried to ignore it but I can’t unsee it. I was planning to let the AI assistant have a stab at editing this post so you and I can see what happens. However, I just read the AI Guidelines (which everyone should do), and it contains one warning which concerned me:

“We don’t claim any ownership over the content you generate with our AI features. Please note that you might not have complete ownership over the generated content either! For example, the content generated by AI may be similar to others’ content, including content that may be protected by trademark or copyright; and copyright ownership of generative AI outputs may vary around the world.”

That is yet another reason why I’m cautious about using AI.


Empty Pants Running Away! How Did They Do That?

Last night, I was watching the TV show Strange Evidence and noticed that they were going to show what’s been called the ghost pants video from a few years ago. I went to bed because I had seen it on a similar show a few years ago and doubted that it had been solved yet. The clip shows a pair of white pants running down a street. You can’t see anyone wearing them.

There are a few interesting video-based paranormal TV shows. The one I think is pretty well done is The Proof is Out There, hosted by Tony Harris. I saw one which showed a photo of a girl whose image was different from her reflection in a mirror. The question was whether it was evidence for something paranormal, maybe proof of simulated reality.

Tony and the group of experts finally settled on it being unexplained. However, on a subsequent episode, Tony explained that someone had notified him that the photo was shot simply by using the panorama mode on a smartphone camera. It was relatively simple; Sena and I made a couple of those photos ourselves.

That was about the same time the YouTube video about the white ghost pants was circulating on the internet. Today I found a YouTube short video that shows essentially the same thing made by a couple of guys who also made a 10-minute video explaining how to achieve the effect. It’s below the short video. Of course, I don’t understand the technical explanation, but I think it might account for the ghost pants video.