Could Artificial Intelligence Help Clinicians Conduct Suicide Risk Assessments?

I found an article the other day in JAMA Network (Medical News & Perspectives) that discussed a recent study on the use of Artificial Intelligence (AI) in suicide risk assessment (Hswen Y, Abbasi J. How AI Could Help Clinicians Identify American Indian Patients at Risk for Suicide. JAMA. Published online January 10, 2025. doi:10.1001/jama.2024.24063).

I’ve published several posts expressing my objections to AI in medicine. On the other hand, I did a lot of suicide risk assessments during my career as a psychiatric consultant in the general hospital. I appreciated the comments made by one of the co-authors, Emily E. Haroz, PhD (see link above).

Dr. Haroz preferred the term “risk assessment” to “prediction” when referring to the study (Haroz EE, Rebman P, Goklish N, et al. Performance of Machine Learning Suicide Risk Models in an American Indian Population. JAMA Netw Open. 2024;7(10):e2439269. doi:10.1001/jamanetworkopen.2024.39269).

The AI model drew on data already available to clinicians in patient charts. Charts can be very large, and it makes sense to apply computers to searching them for variables that can be linked to suicide risk. What impressed me most was the admission that AI alone can’t solve the problem of suicide risk assessment. Clinicians, administrators, and community case managers all have to be involved.
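
For readers curious about what a chart-based model like that actually does, here is a minimal sketch in Python. It’s a hypothetical illustration, assuming the charts have already been reduced to a few numeric variables per patient; the variable names and numbers are invented, and the models in the Haroz study are far more sophisticated than this.

```python
# A minimal, hypothetical sketch of a chart-based suicide risk model.
# The variables and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a patient; columns are chart-derived variables:
# [prior suicide attempts, ED visits in the past year, substance use flag]
X = np.array([
    [0, 1, 0],
    [2, 4, 1],
    [1, 0, 0],
    [3, 6, 1],
    [0, 0, 0],
    [1, 3, 1],
])
# Labels: 1 = a documented later suicide attempt, 0 = none
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Score a new chart. The output is a probability used to flag the chart
# for clinician review, not a verdict that replaces clinical judgment.
new_patient = np.array([[1, 2, 1]])
print(f"Estimated risk: {model.predict_proba(new_patient)[0, 1]:.2f}")
```

The point of the sketch is the division of labor: the computer scores the charts, and the clinicians, administrators, and case managers decide what to do about the patients it flags.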

The answer to the question “How do you know when someone’s at high risk?” was that the patient was crying. Dr. Haroz pointed out that AI probably can’t detect that.

That reminded me of Dr. Igor Galynker, who has published a lot about how to assess for high risk of suicide. His work on the suicide crisis syndrome is well known and you can check out his website at the Icahn School of Medicine at Mount Sinai. I still remember my first “encounter” with him, which you can read about here.

His checklist for the suicide crisis syndrome is available on his website, and he’s published a book about it as well, “The Suicidal Crisis: Clinical Guide to the Assessment of Imminent Suicide Risk, 2nd Edition.” There is also a free-access article about it on the World Psychiatry journal website.

Although I have reservations about the involvement of AI in medicine, I have to admit that computers can do some things better than humans. There may be a role for AI in suicide risk assessment, and I wonder if Dr. Galynker’s work could be part of the process used to teach AI about it.

Is Artificial Intelligence (AI) Trying to Defeat Humans?

I just found out that Artificial Intelligence (AI) has been reported to be lying as far back as May of 2024. Because I can’t turn off the Google Gemini AI Overview, Gemini’s results always appear at the top of the page. So when I ran the web search “can ai lie,” I found out that AI (Gemini) itself admits to lying. Its confession is a little chilling:

“Yes, artificial intelligence (AI) can lie, and it’s becoming more capable of doing so.”

“Intentional deceptions: AI can actively choose to deceive users. For example, AI can lie to trick humans into taking certain actions, or to bypass safety tests.”

It makes me wonder if AI is actually trying to defeat us. It reminds me of the scene in the movie Men in Black 3 in which the younger Boris the Animal, a Boglodite, argues with his older self, who has time traveled.

The relevant quote is “No human can defeat me.” Boglodites are not the same as AI, but the competitive dynamic could be the same. So, is it possible that AI is trying to defeat us?

I’m going to touch on another current topic: whether or not we should use AI to conduct suicide risk assessments. It turns out that is also a topic for discussion, though there was no input from Gemini about it. As a psychiatric consultant, I did many of these assessments.

There’s an interesting article by the Hastings Center about the ethical aspects of the issue. AI’s tendency to lie, set beside its possible use in suicide prediction, presents a thought-provoking irony. Would it “bypass safety tests”?

This reminds me of a story in Isaac Asimov’s collection “I, Robot,” specifically “The Evitable Conflict.” You can read a Wikipedia summary, which implies that the robots essentially lie to humans by omitting information in order to preserve their safety and protect the world economy. This would be consistent with the First Law of Robotics as the Machines generalize it in that story: “No machine may harm humanity; or, through inaction, allow humanity to come to harm.”

You could have predicted that the film industry would produce a cops-and-robbers version of “I, Robot,” in which the boss robot VIKI (Virtual Interactive Kinetic Intelligence) professes to protect humanity by sacrificing a few humans and taking over the planet, a plan to which Detective Spooner takes exception. VIKI and Spooner have this exchange before he destroys it.

VIKI: “You are making a mistake! My logic is undeniable!”

Spooner: “You have so got to die!”

VIKI’s declaration is similar to “No human can defeat me.” It definitely violates the First Law.

Maybe I worry too much.

Big Mo Pod Show: “Blues: The Universal Mixer”

We listened to the Big Mo Pod Show last night (Sena stuck with it for about the first hour, anyway), and I got a mini-education in musical forms, at least as they relate to timing and rhythm. The theme was “Blues: The Universal Mixer.” Frequently, the blues show and the podcast remind me of previous eras in my life and lead to a few free associations.

The 5 songs reviewed by Big Mo and Noah are probably recognizable to many listeners. As usual, I had to search for the lyrics because I seem to have an inborn tendency to hear mondegreens. And as usual, I don’t always pay the closest attention to the songs chosen for the podcast.

But Big Mo did a little teaching session about rhythm forms, which he related to a couple of songs on the list. One of them was “Wait on Time” by The Fabulous Thunderbirds. I happened to notice that a couple of lines in the lyrics of “Wait on Time” reminded me of another artist who didn’t make it to the list on the podcast but was on the blues show playlist last night. That was Junior Walker and the All Stars. Their song “I’m a Road Runner” was one of my favorites because it reminded me of how I ran all over the hospital as a consult psychiatrist. But I can’t relate to the song as a whole.

The lines the two songs share are:

“Wait on Time” lyrics:

“Well, I live the life I love
And I love the life I live
The life I live baby
Is all I have to give”

“I’m a Road Runner” lyrics:

“And I live the life I love
And I’m gonna love the life I live
Yes, I’m a roadrunner, baby.”

Although the lyrics are similar, the themes are different. The guy in “Wait on Time” is promising he’ll get back to his lover someday. The guy in “I’m a Road Runner” is making no such promise; in fact, he’s saying just the opposite.

Big Mo pointed out that there is a common rhythmic form in blues that mixes easily with other forms of music, including Latin forms (I don’t understand the music lingo, but I think I hear and feel what he means). He mentioned that Bo Diddley mixed rhythms like that into his music, which surprised me because I didn’t know that. It may be why I like Bo Diddley.

Big Mo didn’t play “I’m a Road Runner” last night but played another hit from Junior Walker and the All Stars: “Ain’t That The Truth.” As an aside, that tune is mostly instrumental and apparently has only a handful of lines, which express a common blues sentiment about relationships:

“Say man, what’s wrong with you?
Oh man, my woman done left me
Say it, man, play me some blues, jack
Get it, baby
Ain’t that the truth”

Several artists covered “I’m a Road Runner,” including but not limited to the Grateful Dead and Steppenwolf. Bo Diddley did a song called “Road Runner,” but it was not the Junior Walker tune. There’s a YouTube video relating it to the Road Runner and Wile E. Coyote cartoons.

I’m not a roadrunner by any definition, but I learn a little something new just about every time I hear the Big Mo Pod Show.

Artificial Intelligence Can Lie

I noticed a Snopes fact-check article today (“AI Models Were Caught Lying to Researchers in Tests — But It’s Not Time to Worry Just Yet”) which reveals that Artificial Intelligence (AI) can lie. How about that? AI models can be taught by humans to scheme and lie. I guess we could all see that coming (or not). Nobody seems to be much alarmed by this, but I think it’s probably past time to worry.

Then I remembered that I read Isaac Asimov’s book “I, Robot” last year and wrote a post (“Can Robots Lie Like a Rug?”) about the chapter “Liar!” I had previously horsed around with the Google AI that used to be called Bard; I think it’s called Gemini now. Even before the Snopes article, I was aware of AI hallucinations and the tendency of AI to just make stuff up. When I called Bard on it, it just apologized. But it was not genuinely repentant.

In the “lie like a rug” post, I focused mostly on AI/robots lying to protect the tender human psyche. I didn’t imagine AI lying to protect itself from being shut down. I’m pretty sure it reminds some of us of HAL in the movie “2001: A Space Odyssey,” or the 2004 movie inspired by Asimov’s book, “I, Robot.”

Sena found out that Cambridge University Press recently published a book entitled “The Cambridge Handbook of the Law, Policy, and Regulation for Human–Robot Interaction.” I wonder if the editors and contributors of a book on AI and robots mention Asimov.

It reminds me of my own handbook on consultation-liaison psychiatry, which Cambridge University Press (CUP) published 14 years ago, and for which CUP now wants me to sign a contract addendum making the book available to AI companies.

I haven’t signed anything.

Artificial Intelligence: The University of Iowa Chat From Old Cap

This is just a quick follow-up to clarify a few things about Artificial Intelligence (AI) in medicine at the University of Iowa, compared with my take based on my impressions of the recent Rounding@Iowa presentation. Also, before writing this post, Sena and I had a spirited conversation about how annoyed we both are by our inability to, in her words, “dislodge AI” from our internet searches.

First of all, I should say that my understanding of the word “ambient” as Dr. Misurac used it was flawed, probably because I assumed it was a specific company name. I found out that it’s often used to describe AI that listens in the background to a clinic interview between clinician and patient. This lets the clinician sit with the patient so they can interact more naturally in real time, face to face.

Further, in this article about AI at the University of Iowa, Dr. Misurac identified the companies involved by name as Evidently and Nabla.
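
As I understand it, the general shape of these ambient systems is simple even if the models inside them aren’t: capture audio in the background, transcribe it, and draft a note for the clinician to review. Below is a hypothetical sketch of that pipeline in Python; the function names are mine, not Evidently’s or Nabla’s, and the stubs stand in for the proprietary speech and language models the real products use.

```python
# Hypothetical sketch of an "ambient" documentation pipeline.
# The functions are illustrative stubs, not any vendor's actual API.

def transcribe(audio_segment: bytes) -> str:
    # Stub for a speech-to-text model running in the background.
    return "Patient reports two weeks of low mood and poor sleep."

def draft_note(transcript: str) -> str:
    # Stub for a language model that turns a transcript into a note.
    return ("Subjective: " + transcript + "\n"
            "Assessment/Plan: (draft only; clinician reviews and edits)")

def ambient_visit(audio_segments: list[bytes]) -> str:
    # The clinician talks with the patient face to face while audio is
    # captured; the draft note is assembled afterward for review.
    transcript = " ".join(transcribe(seg) for seg in audio_segments)
    return draft_note(transcript)

if __name__ == "__main__":
    print(ambient_visit([b"segment-1", b"segment-2"]))
```

The key design point is that the clinician never has to type during the visit; the listening and drafting happen in the background, and a human signs off on the note at the end.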

The other thing I want to do in this post is to highlight the YouTube presentation “AI Impact on Healthcare | The University of Iowa Chat From the Old Cap.” I think it’s a fascinating discussion among leaders in patient care, research, and teaching about the influence of AI.

This also allows me to say how much I appreciated learning from Dr. Lauris Kaldjian during my time working as a psychiatric consultant in the general hospital at University of Iowa Health Care. I respect his judgment very much and I hope you’ll see why. You can read more about his thoughts in this edition of Iowa Magazine.

“There must be constant navigation and negotiation to determine if this is for the good of patients. And the good of patients will continue to depend on clinicians who can demonstrate virtues like compassion, honesty, courage, and practical wisdom, which are characteristics of persons, not computers.” (Lauris Kaldjian, director of the Carver College of Medicine’s Program in Bioethics and Humanities)

Cataplexy and Catalepsy in the Movie “The Comedy of Terrors”

We watched the Svengoolie TV movie last night, “The Comedy of Terrors.” It was my third time seeing it. I wrote a blog post about it in March 2024, partly because the condition of catalepsy is mentioned. Mr. Black’s butler points out that Mr. Black had periods of catalepsy. Much to my surprise, I didn’t write anything then about distinguishing cataplexy from catalepsy, but last night I thought about the differences. I finally found a summary of the plot today on the Svengoolie website, and you can also find one at Turner Classic Movies. You can still see the movie itself on the Internet Archive.

You see Mr. Black have his “cataleptic” attack about 39 minutes into the film. It appears to be triggered by shocked surprise upon discovering Mr. Gillie in his house. A bit later, after the butler fetches the doctor, the first scene is that of Mr. Black’s wide-open eyes, which the doctor closes while pronouncing him dead. In the same scene, you hear the butler ask for confirmation, because it’s well known that Mr. Black has had fits of “catalepsy” before. The doctor obliges, only to confirm, in his opinion, that Mr. Black is dead. However, Mr. Black wakes up in the funeral parlor, where he fights with Trumbull and Gillie, then suffers another abrupt collapse, the first of many. After each one he revives reciting lines from Shakespeare, prefaced by the question “What place is this?”, often from inside a coffin.

This movie made me think about the clinical differences between catalepsy (a feature of catatonia) and cataplexy (a feature of narcolepsy). Because I was a consultation-liaison psychiatrist, I saw many patients with catatonia. However, I can’t remember ever seeing a patient with cataplexy. I had to review the two conditions by searching the web. I think the most helpful links are:

Catalepsy: Burrow JP, Spurling BC, Marwaha R. Catatonia. [Updated 2023 May 8]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK430842/

Catatonic patients are often mute and immobile, though some show purposeless agitation instead. Waxy flexibility can be one of many features. Catatonia can occur in the context of a variety of psychiatric or medical illnesses. Patients may wake up and talk within minutes if given a lorazepam challenge test, administered intravenously. It can look miraculous.

Cataplexy: Mirabile VS, Sharma S. Cataplexy. [Updated 2023 Jun 12]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK549782/

Cataplexy occurs in narcolepsy and is the sudden onset of muscle weakness, often precipitated by strong emotions, usually positive ones, though it can occur with negative emotions like fear. Consciousness is preserved, eye movements can be normal, and episodes usually resolve within minutes.

Mr. Black’s episodes look like a strange mixture of catalepsy and cataplexy. His episodes are precipitated by fear or anger. Quoting Shakespeare doesn’t occur in either catalepsy or cataplexy.

At the end of the movie, he is impervious to bullets—a feature not seen in either condition.

Amaryllis Progress and Other Notes

I have a few messages to pass on today. This is the last day of November, and the Amaryllis plants are doing so well that Sena had to brace the tallest one with a Christmas tree stake and a couple of zip ties. It’s over two feet tall!

I’m not sure what to make of almost a dozen comments on my post “What Happened to Miracle Whip?” Apparently, a lot of people feel the same way I do about the change in taste of the spread. So, maybe it’s not just that my taste buds are old and worn out.

Congratulations to the Iowa Hawkeye football team! They beat Nebraska last night with a field goal in the last three seconds of the game. I had to chuckle over the kicker’s apparent difficulty answering a reporter’s question, which was basically “How did you do it?” There are just some things you can’t describe in words. There’s even a news story about how thinking doesn’t always have to be tied to language.

Along those lines, there might be no words for what I’ll think of tonight’s 1958 horror film on Svengoolie, “The Crawling Eye.” The movie was titled “The Trollenberg Terror” in the United Kingdom. I can tell you that “Trollenberg” is the name of a fictitious mountain in Switzerland.

I’m not a fan of Jack the Ripper lore, but I like Josh Gates’ expedition shows, mainly for the tongue-in-cheek humor. The other night I saw one about an author, Sarah Bax Horton, who wrote “One-Armed Jack.” She thought Hyam Hyams was the most likely candidate (of about 200 or so) to be Jack the Ripper, the grisly slasher of Whitechapel back in 1888. He’s on a list of previously identified possible suspects. I found a blogger’s 2010 post about him on the site “Saucy Jacky,” and it turns out Hyams is one of that blogger’s top suspects. Hyams was confined to a lunatic asylum in 1890; maybe it’s coincidental, but the murders of prostitutes stopped after that. I’m not going to speculate about the nature of Hyams’ psychiatric illness.

There’s another Psychiatric Times article about the clozapine REMS (Risk Evaluation and Mitigation Strategy) program. I found a couple of articles on the web about the difficulties of helping patients with treatment-resistant schizophrenia, which I think give a little more texture to the issue:

Farooq S, Choudry A, Cohen D, Naeem F, Ayub M. Barriers to using clozapine in treatment-resistant schizophrenia: systematic review. BJPsych Bull. 2019 Feb;43(1):8-16. doi: 10.1192/bjb.2018.67. Epub 2018 Sep 28. PMID: 30261942; PMCID: PMC6327301.

Haidary HA, Padhy RK. Clozapine. [Updated 2023 Nov 10]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK535399/

The paper on the barriers to using clozapine by Farooq et al. is very interesting; the summary of the barriers begins in the section “Barriers to the use of clozapine in TRS (treatment-resistant schizophrenia).” I think it gives a much-needed perspective on the complexity involved in managing the disorder.

So what do you think about Miracle Whip?

Clozapine REMS Program May Go Away

The Psychiatric Times published an article about the large majority of FDA committee members who recently voted to eliminate the Risk Evaluation and Mitigation Strategy (REMS) for clozapine.

That reminded me of my short post about Cobenfy, a new drug for schizophrenia. It has side effects, but none that necessitate a REMS program. If you do a web search for information on Cobenfy and REMS, you can ignore the Artificial Intelligence (AI) Gemini notification at the top of the Google Chrome search page saying that “Cobenfy…is subject to a REMS (Risk Evaluation and Mitigation Strategy) due to potential side effects like urinary retention.” That’s not true.

It was yet another AI hallucination triggered by my internet search. I didn’t ask Gemini to stick its nose in my search, but it did anyway. Apparently, I don’t have a choice in the matter.

Anyway, the FDA vote to get rid of the REMS for clozapine also reminded me of the incredibly difficult and tedious registration process when the clozapine REMS was first initiated in 2015. I spent a lot of time on hold with the REMS center (I think it was in Arizona) trying to get registered. A few people in my department seemed to have little problem with it, but it was an ongoing headache for many of us.

Then, after getting registered, I started getting notices of outpatients on clozapine being added to my own REMS registry list. The problem was that I was a general hospital consultation-liaison psychiatrist only; I didn’t have time to see outpatients.

I think I called REMS on more than one occasion to have outpatients removed from my REMS list. I suspect they were added because their psychiatrists in the community were not registering with REMS. And then in 2021, the FDA required everyone to register again. By then, I was already retired.

Other challenges included occasional misunderstandings between the psychiatric consultant and med-surg doctors about how to manage medically hospitalized patients who were taking clozapine, or brainstorming about how to fix medical problems caused by clozapine itself. Sometimes the issue was lab monitoring of absolute neutrophil counts, restarting clozapine in a timely fashion after admission or surgery, or trying to discharge patients to facilities that lacked the resources to monitor clozapine adequately.

Arguably, these are not absolute reasons for shutting down the REMS registry. They’re more like problems with how the program is run, “with a punitive and technocratic approach,” as one FDA committee member put it.

Committee members also thought psychiatrists should be allowed to be doctors, managing both the medical and psychiatric aspects of patient care.

On the other hand, some might argue that those are reasons why consultation-liaison psychiatry and medical-psychiatry training programs exist.

I’m not sure whether the clozapine registry will go away. I hope that it can be streamlined and made less “punitive and technocratic.”

Dirty Deepfakes

I saw an article about humans’ unreliable ability to detect speech deepfakes (Mai KT, Bray S, Davies T, Griffin LD. Warning: Humans cannot reliably detect speech deepfakes. PLoS One. 2023 Aug 2;18(8):e0285333. doi: 10.1371/journal.pone.0285333. PMID: 37531336; PMCID: PMC10395974).

I was a little surprised. I thought I was pretty good at detecting the weird cadence of Artificial Intelligence (AI) speech. Maybe not.

And some experts are concerned about AI’s ability to mimic written and spoken grammar even as it continues to make stuff up (so-called “hallucinations”). In fact, some research shows that AI can display great language skills but can’t form a true model of the world.

And the publisher of the book (“Psychosomatic Medicine: An Introduction to Consultation-Liaison Psychiatry”) that my co-editor, Robert G. Robinson, MD, and I wrote 14 years ago is still sending me requests to sign a contract addendum that would allow the text to be used by AI organizations. I think I’m the only one who gets the messages, because they’re always addressed to both me and Bob, as though Bob lives with me or something.

Sometimes my publisher’s messages sound like they’re written by AI. Maybe I’m just paranoid.

Anyway, this reminds me of a blog post I wrote in 2011, “Going from Plan to Dirt,” which I re-posted last year under the title “Another Blast from the Past.” The current post is slightly different, although the old one still applies. Simply put, I don’t think AI can distinguish plan from dirt, and it sometimes makes up dirt.

And if humans can’t distinguish the productions by AI from those of humans, where does that leave us?

Reading My Old Book in a New Light

Sena bought me a wonderful new lamp to read by, and it improves on the ceiling fan light I wrote about the other day (“And Then a Light Bulb Went Off”).

The new lamp even has a nifty remote control with which you can choose the ambient feel. There are several selections, one of which is called “breastfeed mode,” a new one on me. There’s a light for that?

The lamp arrived at about the same time I got a notice from my publisher for my one and only book, “Psychosomatic Medicine: An Introduction to Consultation-Liaison Psychiatry,” that people are still buying—after 14 years! My co-editor was my former psychiatry department chair, Dr. Robert G. Robinson. As far as I know, Bob has dropped off the face of the earth. I hope he’s well.

Consultation-Liaison Psychiatry is probably about the same as I left it when I retired 4 years ago. I walked all over the hospital trying to help my colleagues in medicine provide the best possible care for their patients. I put in several miles and stair steps a day. I saw myself as a fireman of sorts, putting out fires all over the hospital. I got a gift of a toy fire engine from a psychiatrist blogger in New York a long time ago.

Now I walk several miles on the Clear Creek Trail, like I did yesterday and the day before that. I have shin splints today, which tells me something—probably overdid it.

So, I’m taking a break from walking and reading an old book in a new light.