Could Artificial Intelligence Help Clinicians Conduct Suicide Risk Assessments?

I found an article the other day in JAMA Network (Medical News & Perspectives) that discussed a recent study on the use of Artificial Intelligence (AI) in suicide risk assessment (Hswen Y, Abbasi J. How AI Could Help Clinicians Identify American Indian Patients at Risk for Suicide. JAMA. Published online January 10, 2025. doi:10.1001/jama.2024.24063).

I’ve published several posts expressing my objections to AI in medicine. On the other hand, I did a lot of suicide risk assessments during my career as a psychiatric consultant in the general hospital. I appreciated the comments made by one of the co-authors, Emily E. Haroz, PhD (see link above).

Dr. Haroz preferred the term “risk assessment” to “prediction” when referring to the study (Haroz EE, Rebman P, Goklish N, et al. Performance of Machine Learning Suicide Risk Models in an American Indian Population. JAMA Netw Open. 2024;7(10):e2439269. doi:10.1001/jamanetworkopen.2024.39269).

The model drew its input from data already available to clinicians in patient charts. The charts can be very large, and it makes sense to have computers search them for the variables that can be linked to suicide risk. What impressed me most was the admission that AI alone can’t solve the problem of suicide risk assessment. Clinicians, administrators, and community case managers all have to be involved.
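
For the curious, here’s a rough idea of what a chart-based machine learning risk model looks like in code. This is only a minimal sketch on made-up data, not the model Haroz and colleagues actually built; their features, data, and methods differ:

```python
# Minimal sketch of a chart-based risk model on synthetic data; illustrative
# only, not the model from Haroz et al. The features stand in for variables
# a computer might pull from charts (prior attempts, ED visits, and so on).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
X = rng.random((n, 5))  # five hypothetical chart-derived variables
# Synthetic outcome loosely tied to the first two variables.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, n) > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = LogisticRegression().fit(X_train, y_train)

# The output is a risk score for flagging charts for human review:
# "risk assessment" rather than "prediction," as Dr. Haroz put it.
scores = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, scores):.2f}")
```

The point of the sketch is the division of labor: the computer scores the chart, and the clinicians, administrators, and case managers decide what to do with the flag.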

One answer to the question “How do you know when someone’s at high risk?” was that the patient was crying. Dr. Haroz pointed out that AI probably can’t detect that.

That reminded me of Dr. Igor Galynker, who has published a lot about how to assess for high risk of suicide. His work on the suicide crisis syndrome is well known and you can check out his website at the Icahn School of Medicine at Mount Sinai. I still remember my first “encounter” with him, which you can read about here.

His checklist for the suicide crisis syndrome is available on his website, and he’s published a book about it as well, “The Suicidal Crisis: Clinical Guide to the Assessment of Imminent Suicide Risk, 2nd Edition.” There is also a free-access article about it on the World Psychiatry journal website.

Although I have reservations about the involvement of AI in medicine, I have to admit that computers can do some things better than humans. There may be a role for AI in suicide risk assessment, and I wonder if Dr. Galynker’s work could be part of the process used to teach AI about it.

Is Artificial Intelligence (AI) Trying to Defeat Humans?

I just found out that Artificial Intelligence (AI) has been reported to be lying as far back as May of 2024. Because I can’t turn off the Google Gemini AI Overview, Gemini’s results always appear at the top of the page. From my web search term “can ai lie,” I found out that AI (Gemini) itself admits to lying. Its confession is a little chilling:

“Yes, artificial intelligence (AI) can lie, and it’s becoming more capable of doing so.”

“Intentional deceptions: AI can actively choose to deceive users. For example, AI can lie to trick humans into taking certain actions, or to bypass safety tests.”

It makes me wonder if AI is actually trying to defeat us. It reminds me of the Men in Black 3 movie scene in which the younger Boris the Animal, a Boglodite, argues with his older self, who has time traveled.

The relevant quote is “No human can defeat me.” Boglodites are not the same as AI, but the competitive dynamic could be the same. So, is it possible that AI is trying to defeat us?

I’m going to touch on another current topic: whether we should use AI to conduct suicide risk assessments. It turns out that, too, is a topic for discussion, though there was no input from Gemini about it. As a psychiatric consultant, I did many of these assessments.

There’s an interesting article by the Hastings Center about the ethical aspects of the issue. The lying tendency of AI and its possible use in suicide prediction present a thought-provoking irony. Would it “bypass safety tests”?

This reminds me of Isaac Asimov’s short story “The Evitable Conflict,” from the collection “I, Robot.” You can read a Wikipedia summary, which implies that the robots essentially lie to humans by omitting information in order to preserve their own safety and protect the world economy. This would be consistent with the First Law of Robotics as the story’s Machines have generalized it: “No machine may harm humanity; or, through inaction, allow humanity to come to harm.”

You could have predicted that the film industry would produce a cops-and-robbers version of “I, Robot,” in which the boss robot VIKI (Virtual Interactive Kinetic Intelligence) professes to protect humanity by sacrificing a few humans and taking over the planet, to which Detective Spooner takes exception. VIKI and Spooner have this exchange before he destroys it:

VIKI: “You are making a mistake! My logic is undeniable!”

Spooner: “You have so got to die!”

VIKI’s declaration is similar to “No human can defeat me.” It definitely violates the First Law.

Maybe I worry too much.

Don’t Shovel Your Heart Out

We’re waiting for the next snowfall. We’ve had a couple of light ones so far and we used shovels to clear our driveway and sidewalk. They didn’t amount to much, but we’ll get a heavy snow here pretty soon.

We’ve been using shovels for years. I’m aware of the risk of heart attacks in certain people, especially sedentary middle-aged and older men with pre-existing cardiac risk factors. I’m not keen on snowblowers, mostly because I like to shovel.

I’ve been using an ergonomic shovel for years, although I used it the wrong way until about four years ago. I used to throw snow over my shoulder while twisting my back. Now I push snow with a shovel that has a smaller bucket or with a snow pusher with a shallow, narrow blade. I lift by keeping my back straight and bending at the knees, flipping the small load out. I take my time.

I don’t know how high my heart rate gets while I shovel. I exercise 3-4 days a week. I warm up by juggling. I do floor yoga with bending and stretching, bodyweight squats, one-leg sit-to-stands, step platform work, dumbbells, and planks. When I’m on the exercise bike, I keep my heart rate around 140 bpm, below the maximum rate for my age, which is 150 bpm.
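
For what it’s worth, that 150 bpm ceiling matches the familiar rule-of-thumb estimate of 220 minus age. Here’s the arithmetic, assuming (and it’s only my assumption) that this formula is where the number comes from:

```python
# Rule-of-thumb maximum heart rate estimate: 220 minus age. This is my
# assumption about where the 150 bpm figure comes from; it's a rough
# population estimate, not individualized medical advice.
def estimated_max_heart_rate(age_years: int) -> int:
    return 220 - age_years

age = 70  # hypothetical age consistent with a 150 bpm maximum
hr_max = estimated_max_heart_rate(age)
print(f"Estimated maximum heart rate: {hr_max} bpm")            # 150 bpm
print(f"A 140 bpm bike workout is {140 / hr_max:.0%} of that")  # 93%
```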

I’m aware of the recommendations to avoid shoveling snow based on the relevant studies. I realize I’m way past the age when experts recommend giving the snow shovel to someone else.

The question is, who would that be? There aren’t any kids in the neighborhood offering to clear snow. Maybe they’re too busy dumb scrolling. I’m also aware of the city ordinance on clearing your driveway and sidewalk after a big snow. It’s very clear, at least in Whereon, Iowa.

“The city of Whereon requires every homeowner to clear snow from sidewalks within 24 hours after a snowfall. This means you. If you fail in your civic duty to clear snow and ice from your walkway within the allotted time of 10 minutes, the city will lawfully slap you with a fine of $3,000,000 and throw your dusty butt in jail for an indeterminate time that likely will extend beyond the winter season and could be for the rest of your natural life and even beyond, your corpse rotting in your cell, which will not bother the guards one iota because of the new state law mandating removal of their olfactory organs. Hahahahaha!!”

In light of the strict laws, Sena ordered a couple of new snow removal tools. Neither one of them is a snow blower. I think it’s fair to point out that some cardiologists have reservations even about snowblowers:

“There are even studies that show an increased risk for heart attacks among people using automatic snow blowers. Similar to the extra exertion of pushing a shovel, pushing a snow blower can raise heart rate and blood pressure quickly.” (from the “Snow Shoveling can be hazardous to your health” article above)

One of them is a simple snow pusher with a narrow 36-inch blade. That’s for me. The other, for Sena, is a cordless, battery-powered snow shovel that looks like a toy. The ad for that tool includes a short video of an attractive woman in skinny jeans, her stylish coat open to reveal her svelte figure, demonstrating how the electric shovel works. It appears to remove bread-slice-sized pieces of snow from the top of a layer that stubbornly sticks to the pavement. Call the Whereon snow police.

We should be getting both tools before the next big snow.

Should We Trust Artificial Intelligence?

I’ve read a couple of articles about Artificial Intelligence (AI) lately, and I’m struck by how readily one can get the idea that AI tends to “lie” or “confabulate”; sometimes the word “hallucinate” is used. The term “hallucinate” doesn’t seem to fit as well as “confabulate,” which I’ll mention later.

One of the articles is an essay by Dr. Ronald Pies, “How ‘Real’ Are Psychiatric Disorders? AI Has Its Say,” published in the online version of Psychiatric Times. Dr. Pies does a superb job of talking with AI, and I had as much fun reading the lightly edited summaries of his conversation with Microsoft Copilot as I had reading the published summary of his conversations with Google Bard about a year or so ago.

I think Dr. Pies is an outstanding teacher, and I get the sense that his questions to AI do as much to teach it how to converse with humans as they do to shed light on how well it handles the questions he raises during conversations. He points out that many of us (including me) tend to react with fear when the topic of AI in medical practice arises.

The other article I want to briefly discuss is one I read in JAMA Network:

Hswen Y, Rubin R. An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean? JAMA. Published online December 27, 2024. doi:10.1001/jama.2024.23860 (accessed January 6, 2025).

I thought the conversation among the authors was refreshing. Just because the title of the article suggests that AI might take the place of physicians in the consulting room doesn’t mean that was the prevailing opinion of the authors. In fact, they made it clear they weren’t recommending that.

I liked Dr. Chen’s comment about confabulation and hallucinations of AI:

“A key topic I talk about is confabulation and hallucination. These things are remarkably robust, and only getting better, but they also just make stuff up. The problem isn’t that they’re wrong sometimes. Lab tests are wrong sometimes. Humans are definitely wrong sometimes. The problem is they sound so convincing, they confabulate so well. “Your patient has an alcohol problem, Wernicke-Korsakoff syndrome.” It’s only if you double check, you’ll realize, “Wait a minute, that wasn’t actually true. That didn’t make sense.” As long as you’re vigilant about that and understand what they can and can’t do, I think they’re remarkably powerful tools that everyone in the world needs to learn how to use.”

What’s interesting about this comment is the reference to Wernicke-Korsakoff syndrome, which can be marked by confabulation. In the clinical syndrome, thiamine deficiency is the main cause; it’s really not clear how confabulation comes about in AI. In both cases, it involves inventing information, which is technically not the same as lying.

Unfortunately, this contrasts sharply with the fact-checking Snopes article I wrote about recently, which suggests that humans are teaching AI to lie and scheme.

In any case, it’s prudent to regard AI output with skepticism. My conversations with Google Bard clearly elicited confabulation. Bard also didn’t get humor, so I wouldn’t use it as a conversational tool, given that I’m prone to kidding around. As for trusting AI, I probably wouldn’t trust it as far as I could throw it.

Artificial Intelligence: The University of Iowa Chat From Old Cap

This is just a quick follow-up that allows me to clarify a few things about Artificial Intelligence (AI) in medicine at the University of Iowa, compared with the impressions I formed from the recent Rounding@Iowa presentation. Also, prior to my writing this post, Sena and I had a spirited conversation about how much we are annoyed by our inability to, in her words, “dislodge AI” from our internet searches.

First of all, I should say that my understanding of the word “ambient” as used by Dr. Misurac was flawed, probably because I assumed it was a specific company name. I found out that it’s often used to describe AI that listens in the background during a clinic interview between clinician and patient. This frees the clinician to sit with the patient so they can interact with each other more naturally in real time, face to face.
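
To make the idea concrete, here is a minimal sketch of what an ambient documentation pipeline might look like. The transcribe and draft_note functions are hypothetical stand-ins I made up for illustration; they are not any particular vendor’s product:

```python
# Minimal sketch of an "ambient AI" documentation pipeline; illustrative
# only. transcribe() and draft_note() are hypothetical stand-ins for the
# speech-to-text and language models a real vendor product would use.
from dataclasses import dataclass

@dataclass
class ClinicNote:
    subjective: str
    assessment_and_plan: str

def transcribe(visit_audio: bytes) -> str:
    """Hypothetical speech-to-text step: visit audio -> transcript."""
    return "Patient reports two weeks of low mood and poor sleep..."

def draft_note(transcript: str) -> ClinicNote:
    """Hypothetical drafting step: transcript -> structured draft note."""
    return ClinicNote(
        subjective=transcript,
        assessment_and_plan="DRAFT for clinician review; nothing is final "
                            "until the clinician verifies and signs it.",
    )

# The system listens in the background while clinician and patient talk
# face to face, then hands the clinician a draft to review and edit.
draft = draft_note(transcribe(b"...microphone audio..."))
print(draft.assessment_and_plan)
```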

Further, in this article about AI at the University of Iowa, Dr. Misurac identified the companies involved by name as Evidently and Nabla.

The other thing I want to do in this post is to highlight the YouTube presentation “AI Impact on Healthcare | The University of Iowa Chat From the Old Cap.” I think this is a fascinating discussion led by leaders in patient care, research, and teaching as they relate to the influence of AI.

This also allows me to say how much I appreciated learning from Dr. Lauris Kaldjian during my time working as a psychiatric consultant in the general hospital at University of Iowa Health Care. I respect his judgment very much and I hope you’ll see why. You can read more about his thoughts in this edition of Iowa Magazine.

“There must be constant navigation and negotiation to determine if this is for the good of patients. And the good of patients will continue to depend on clinicians who can demonstrate virtues like compassion, honesty, courage, and practical wisdom, which are characteristics of persons, not computers.” – Lauris Kaldjian, director of the Carver College of Medicine’s Program in Bioethics and Humanities

Rounding At Iowa Podcast: “The Promises of Artificial Intelligence in Medicine”

I listened to the recent Rounding@Iowa podcast “The Promises of Artificial Intelligence in Medicine.” You can listen to it below. Those who have read any of my posts about AI already know I’m cautious and probably prejudiced against it.

I was a little surprised at how enthusiastic Dr. Gerry Clancy sounded about AI. I expected his guest, Dr. Jason Misurac, to sound that way. I waited for Gerry to mention the hallucinations that AI can sometimes produce. Neither he nor Dr. Misurac said anything about them.

Dr. Misurac mentioned what I think are the ambient AI tools that clinicians can use to make clinic note writing and chart reviews easier. I thought he was referring to a company called Ambience.

I remember the Dragon NaturallySpeaking speech-to-text program (which was not using AI technology at the time; see my post “The Dragon Breathes Fire Again”) that I tried to use years ago to write clinical notes when I was practicing consultation-liaison psychiatry. It was a disaster, and I realize I’m prejudiced against any technology that would make the kinds of mistakes that technology was prone to.

But more importantly, I’m concerned about the kinds of mistakes AI made when I experimented with Google Bard on my blog (see the posts entitled “How’s It Hanging Bard?” and “Update to Chat with Bard” from April of 2023).

That reminds me that I’ve seen the icon for an AI assistant lurking around my blog recently. I’ve tried to ignore it, but I can’t unsee it. I was planning to let the AI assistant take a stab at editing this post so you and I could see what happens. However, I just read the AI Guidelines (which everyone should do), and they contain one warning that concerned me:

“We don’t claim any ownership over the content you generate with our AI features. Please note that you might not have complete ownership over the generated content either! For example, the content generated by AI may be similar to others’ content, including content that may be protected by trademark or copyright; and copyright ownership of generative AI outputs may vary around the world.”

That is yet another reason why I’m cautious about using AI.

Amaryllis Progress and Other Notes

I have a few messages to pass on today. This is the last day of November and the Amaryllis plants are doing so well Sena had to brace the tallest one using a Christmas tree stake and a couple of zip ties. It’s over two feet tall!

I’m not sure what to make of almost a dozen comments on my post “What Happened to Miracle Whip?” Apparently, a lot of people feel the same way I do about the change in taste of the spread. So, maybe it’s not just that my taste buds are old and worn out.

Congratulations to the Iowa Hawkeye football team! They beat Nebraska last night on a field goal in the final three seconds of the game. I had to chuckle over the apparent difficulty the kicker had in answering a reporter’s question, which was basically “How did you do it?” There are just some things you can’t describe in words. There’s even a news story about how thinking doesn’t always have to be tied to language.

Along those lines, there might be no words for what I expect to think of tonight’s 1958 horror film on Svengoolie, “The Crawling Eye,” which was released as “The Trollenberg Terror” in the United Kingdom. I can tell you that “Trollenberg” was the name of a fictitious mountain in Switzerland.

I’m not a fan of Jack the Ripper lore, but I like Josh Gates’ expedition shows, mainly for the tongue-in-cheek humor. The other night I saw one of them about an author, Sarah Bax Horton, who wrote “One-Armed Jack.” She thought Hyam Hyams was the most likely candidate (of about 200 or so) to be Jack the Ripper, the grisly slasher of Whitechapel back in 1888. He’s on a list of previously identified possible suspects. I found a blogger’s 2010 post about him on the site “Saucy Jacky,” and it turns out Hyams is one of his top suspects. Hyams was confined to a lunatic asylum in 1890, and maybe it’s coincidental, but the murders of prostitutes stopped after that. I’m not going to speculate about the nature of Hyams’ psychiatric illness.

There’s another Psychiatric Times article about the clozapine REMS (Risk Evaluation and Mitigation Strategy) program. I found a couple of articles on the web about the difficulties of helping patients with treatment-resistant schizophrenia, which I think give a little more texture to the issue:

Farooq S, Choudry A, Cohen D, Naeem F, Ayub M. Barriers to using clozapine in treatment-resistant schizophrenia: systematic review. BJPsych Bull. 2019 Feb;43(1):8-16. doi: 10.1192/bjb.2018.67. Epub 2018 Sep 28. PMID: 30261942; PMCID: PMC6327301.

Haidary HA, Padhy RK. Clozapine. [Updated 2023 Nov 10]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK535399/

The paper on the barriers to using clozapine by Farooq et al is very interesting; the summary of the barriers begins in the section “Barriers to the use of clozapine in TRS” (treatment-resistant schizophrenia). I think it gives a much-needed perspective on the complexity involved in managing the disorder.

So what do you think about Miracle Whip?

Clozapine REMS Program May Go Away

The Psychiatric Times published an article about the large majority of FDA committee members recently voting to do away with the Risk Evaluation and Mitigation Strategy (REMS) for clozapine.

That reminded me of my short post about Cobenfy, a new drug for schizophrenia. It has side effects, but none of them necessitates a REMS program. If you do a web search for information on Cobenfy and REMS, you can ignore the Artificial Intelligence (AI) Gemini notification at the top of the Google Chrome search page saying that “Cobenfy…is subject to a REMS (Risk Evaluation and Mitigation Strategy) due to potential side effects like urinary retention.” That’s not true.

It was yet another AI hallucination triggered by my internet search. I didn’t ask Gemini to stick its nose in my search, but it did anyway. Apparently, I don’t have a choice in the matter.

Anyway, the FDA vote to get rid of the REMS for clozapine also reminded me of the incredibly difficult and tedious clozapine REMS registration process in 2015, when the program was first implemented. I spent a lot of time on hold with the REMS center (I think it was in Arizona) trying to get registered. A few people in my department seemed to have little problem with it, but it was an ongoing headache for many of us.

Then, after getting registered, I started getting notified of outpatients on clozapine being added to my own REMS registry list. The problem was that I was a general hospital consultation-liaison psychiatrist only; I didn’t have time to see outpatients.

I think I called REMS on more than one occasion to have outpatients removed from my REMS list. I suspect they were added because their psychiatrists in the community were not registering with REMS. And then in 2021, the FDA required everyone to register again. By then, I was already retired.

Other challenges included occasional misunderstandings between the psychiatric consultant and the med-surg doctors about how to manage medically hospitalized patients who were taking clozapine, or brainstorming about how to fix medical problems caused by clozapine itself. Sometimes it was connected to things like lab monitoring of absolute neutrophil counts, restarting clozapine in a timely fashion after admission or following surgeries, or trying to discharge patients to facilities that lacked the resources for adequate monitoring of clozapine.

Arguably, these are not absolute reasons for shutting down the REMS registry. They’re more like problems with how the program is run, such as “with a punitive and technocratic approach,” as one FDA committee member put it.

Committee members also thought psychiatrists should be allowed to be doctors, managing both the medical and psychiatric aspects of patient care.

On the other hand, some might argue that those are reasons why consultation-liaison psychiatry and medical-psychiatry training programs exist.

I’m not sure whether the clozapine registry will go away. I hope that it can be streamlined and made less “punitive and technocratic.”

Fluoride in Your Precious Bodily Fluids

Yesterday, Sena and I talked about a recent news article indicating that a federal judge ordered the Environmental Protection Agency (EPA) to review the allowed level of fluoride in community water supplies. The acceptable level may not be low enough, in the opinion of the advocacy groups who discussed the issue with the judge, according to the author of the article.

A few other news items accented the role of politicians on this issue, which seems to come up every few years. One thing led to another, and I noticed a few other web stories about the divided opinions on fluoride in “your precious bodily fluids.” One of them is a comprehensive review, published in 2015, outlining the complicated path of scientific research on this topic. There are passionate advocates on both sides of the question of whether to allow fluoride in city water. The title of the paper is “Debating Water Fluoridation Before Dr. Strangelove” (Carstairs C. Debating Water Fluoridation Before Dr. Strangelove. Am J Public Health. 2015 Aug;105(8):1559-69. doi: 10.2105/AJPH.2015.302660. Epub 2015 Jun 11. PMID: 26066938; PMCID: PMC4504307).

This of course led to our realizing that we’ve never seen the film “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb,” a satire on the Cold War. We watched the entire movie on the Internet Archive yesterday afternoon. The clip below shows one of the funniest scenes, a dialogue between General Jack D. Ripper and RAF officer Lionel Mandrake about water and fluoridation.

During my web search on the fluoridation topic, one thing I noticed about the Artificial Intelligence (AI) entry on the web was the first line of its summary of the film’s plot: “In the movie Dr. Strangelove, the character Dr. Cox suggests adding fluoride to drinking water to improve oral health.” Funny, I don’t remember a character named Dr. Cox in the film, nor any recommendation about adding fluoride to drinking water to improve oral health. Peter Sellers played three characters, none of them named Cox.

I guess you can’t believe everything AI says, can you? That’s called “hallucinating” when it comes to debating the trustworthiness of AI. I’m not sure what you call it when politicians say things you can’t immediately check the veracity of.

Anyway, one Iowa expert who regularly gets tapped by reporters on the subject is Dr. Steven Levy, a professor of preventive and community dentistry at the University of Iowa. He leads the Iowa Fluoride Study, which has been going on for many years. In short, Dr. Levy says fluoride in water supplies is safe and effective for preventing tooth decay as long as the level is adjusted within safe margins.

On the other hand, others say fluoride can be hazardous and could cause neurodevelopmental disorders.

I learned that even in Iowa there’s disagreement about the health merits vs. risks of fluoridated water. Decisions about whether city water supplies are fluoridated are generally left to the local communities. Hawaii has been reported to be the only state mandating a statewide ban on fluoride. About 90 percent of Iowa’s cities fluoridate their water. Tama, Iowa stopped fluoridating its water in 2021; then, after a brief period of public education about it, Tama restarted fluoridating only six months later.

We use a fluoridated dentifrice and oral rinse every day. We drink fluoridated water, which we offer to the extraterrestrials who occasionally abduct us, but they politely decline because of concern about their precious bodily fluids.

CDC Meeting Results in Recommending a 2nd Covid-19 Vaccine Dose for Those 65 Years and Older and for the Immunocompromised

I missed the October CDC meeting which resulted in a decision to recommend a 2nd dose of the Covid-19 vaccine for those 65 years and older and for the immunocompromised.

The Evidence to Recommendations (EtR) slides by Roper indicated that Covid-19 circulates year-round, peaking in late summer and winter.

The recommendation that those in the above-named populations should get 2 doses of Covid-19 vaccine spaced 6 months apart seems based on reasonable considerations.

It looks like the vaccine would be the same as the one previously recommended for this year.