Rife vs Ripe: Which is Right?

I noticed a couple of things about one of the President’s many new Executive Orders, “ESTABLISHING THE PRESIDENT’S MAKE AMERICA HEALTHY AGAIN COMMISSION,” or MAHA for short. It was posted on February 13, 2025. One thing it reminded me of is the tax filing season, which is upon us (everything reminds me of the tax filing season around this time of year). The other thing was a short article about the IRS, which is cutting staff sharply, probably in response to the federal government workforce layoffs generally. One sentence in the article read:

“The IRS layoffs, first reported by the New York Times, come as part of a broader effort by President Donald Trump and Elon Musk’s overhaul of the federal government, which they argue is too bloated and inefficient, and ripe with waste and fraud.”

I put the word “ripe” in bold-face type because I sensed that the writer probably meant “rife” instead. I looked up both definitions just to make sure: rife means widespread or abundant (usually of something undesirable), and ripe means mature (possibly overly mature, as in smelly and ready for the garbage can).

I wonder if “rife” or “ripe” could apply to MAHA. I’m all for promoting health. But I’m not sure what is meant by “assess the prevalence of and threat posed by the prescription of selective serotonin reuptake inhibitors, antipsychotics, mood stabilizers, stimulants, and weight-loss drugs.” It sounds like a shot across the bow for psychiatrists and primary care physicians.

Some of the content may be “rife” (or is it “ripe”?) with potentially misleading innuendo, implying that health care professionals are not already doing all we can to promote health. I agree with promoting research into the “root causes” of mental illness. However, some people need psychiatric medications for “just managing disease.” Reducing the suffering of those who are tortured by depression, delusions, and hallucinations makes sense because it’s the humane thing to do.

This reminds me of a very interesting article about what some scientists think about how life began on this planet and how it might start elsewhere in the universe. Some think life evolves mainly by chance, by a cosmic accident. Others think it’s inevitable and occurs when planetary conditions are right. So that might mean there’s a good chance extraterrestrials are out there. If they are, what would they think of us?

And this reminds me of a quote from the movie Men in Black. Agent K is showing Edwards a universal translator, one of the many wonders in the extraterrestrial technology room, which gives us a perspective on how humans rank in the universe:

Agent K: We’re not even supposed to have it. I’ll tell you why. Human thought is so primitive it’s looked upon as an infectious disease in some of the better galaxies.

So is the universe “rife” with life—or is it “ripe”?

FDA Has Yet to Decide on What to Do About the Clozapine REMS Program

I checked on what the FDA is doing about changing or shutting down the Clozapine REMS program. It doesn’t look like they’ve taken any action yet. Recall that there was a Clozapine REMS Advisory Committee meeting about this on November 19, 2024, which I posted about recently. The upshot was that the committee voted overwhelmingly (14 Yes to 1 No) to get rid of the Clozapine REMS program.

What I didn’t realize until today was that a former colleague of mine was a member of the committee. Jess Fiedorowicz, MD, PhD, was on staff at The University of Iowa Health Care in the past and is now head of the Department of Mental Health at The Ottawa Hospital in Ottawa, Ontario, Canada. I’ve included the YouTube video of the meeting below; you can find Dr. Fiedorowicz’s remarks, via Zoom video, at around the 8:05 mark, and you can view his vote to shut down the REMS program at around 8:33.

I also found out about a group called The Angry Moms (those who care for family members on clozapine), who are focused on stopping the Clozapine REMS program. One of their web pages makes it pretty clear they’re not happy that the FDA hasn’t made a decision about REMS yet.

They mention Gil Honigfeld, PhD, whom I’d never heard of until now. You can tell from his T-shirt how he feels about clozapine. He has been called the “Godfather of Clozapine,” and his opinion about the REMS program, along with a short history of clozapine, can be found at this link.

I don’t know what the FDA will do about the Advisory Committee’s recommendation, but I hope they do it soon.

How About Artificial Intelligence for Helping Reduce Delirium in the ICU?

I got the Winter 2025 Hopkins Brain Wise newsletter today and there was a fascinating article, “Using AI to Reduce Delirium in the ICU: Pilot study will explore whether an AI headset can help reduce delirium and delay post-delirium cognitive decline.”

The article has exciting news about what researchers are doing that will, hopefully, reduce the incidence of delirium in the intensive care unit (ICU). Another Hopkins researcher has published a study that already used AI algorithms to detect early warning signs of delirium in the ICU:

Gong KD, Lu R, Bergamaschi TS, Sanyal A, Guo J, Kim HB, Nguyen HT, Greenstein JL, Winslow RL, Stevens RD. Predicting Intensive Care Delirium with Machine Learning: Model Development and External Validation. Anesthesiology. 2023;138(3):299-311. doi:10.1097/ALN.0000000000004478
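
For readers curious about what “machine learning” means in this context, here’s a minimal sketch in Python of the general technique: a classifier trained on chart-style variables to output a delirium risk probability. The features, labels, and model choice are all invented for illustration; this is not the model from Gong et al., just the general shape of the approach.

```python
# Minimal sketch of training a delirium risk classifier on chart-style data.
# Everything here (features, labels, model choice) is illustrative -- it is
# NOT the model from the cited paper, just the general shape of the technique.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic cohort: the columns might stand for age, sedative dose, and a lab
# value; the labels are generated from a made-up linear rule plus noise.
X = rng.normal(size=(500, 3))
y = (X @ np.array([0.8, 0.5, -0.3]) + rng.normal(scale=0.5, size=500)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Predicted probabilities could flag high-risk patients for early attention.
risk = model.predict_proba(X_test)[:, 1]
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
print(f"Patients flagged above 0.7 risk: {(risk > 0.7).sum()}")
```

The real study, of course, involved far richer data and validation across sites; the point of the sketch is only that the “AI” is a model scoring chart variables, not something mysterious.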

The list of references for the study of course includes papers by Dr. E. Wesley Ely, who delivered an internal medicine grand rounds about delirium at the University of Iowa in 2019.

Anybody who reads my blog knows I’ve been knocking AI for a while now. But anybody who knows I’m a retired consultation-liaison psychiatrist also knows how interested I am in preventing delirium in the hospital. I worked as a clinical track professor for many years at The University of Iowa Health Care in Iowa City.

It’s fortuitous that I found out what Johns Hopkins researchers are doing on this topic, because the director of the Johns Hopkins psychiatry department happens to be Jimmy Potash, MD, MPH, who’s identified in the newsletter. He was head of the psychiatry department at the University of Iowa from 2011 to 2017.

Besides all the name-dropping I’m doing here, I’m also admitting that I’ll probably soften my position against AI if the research described here does what the investigators and I hope for, which is to reduce delirium in the ICU.

Artificial Intelligence in Managing Messages from Patients

I ran across another interesting article in the JAMA Network about Artificial Intelligence (AI), this one about health care organizations managing messages from patients to doctors and nurses. The shorthand for this in the article is “in-basket burden.” Health care workers respond to a large number of patient questions, and the load can lead to burnout. Some organizations are testing AI by letting it draft replies to patients. The results of the quality improvement study were published in a paper:

English E, Laughlin J, Sippel J, DeCamp M, Lin C. Utility of Artificial Intelligence–Generative Draft Replies to Patient Messages. JAMA Netw Open. 2024;7(10):e2438573. doi:10.1001/jamanetworkopen.2024.38573
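
Just to make the workflow concrete, here’s a minimal sketch of what an AI draft-reply step might look like. The OpenAI client, model name, prompt wording, and the clinician-review rule are my own illustrative assumptions, not the system the study evaluated.

```python
# Hypothetical sketch of an AI "draft reply" workflow for patient messages.
# Assumptions (not from the study): the OpenAI Python client, the model name,
# and the rule that a clinician must edit/approve before anything is sent.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(patient_message: str) -> str:
    """Generate a draft reply for a clinician to review -- never auto-send."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("You draft replies to patient portal messages for a "
                         "clinician to edit. Be brief, kind, and factual. "
                         "Do not give a diagnosis or change any medication.")},
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content

draft = draft_reply("My blood pressure readings have been higher this week.")
print("DRAFT (requires clinician review before sending):\n", draft)
```

The point of the draft-only design is that a human stays in the loop: nothing goes to the patient without clinician review, which matters given the hallucination/confabulation problem discussed next.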

One of the fascinating things about this is the trouble we have naming AI’s misinformation problem. We tend to use two terms interchangeably: hallucination and confabulation. Whatever you call it, the problem interferes with communication between health care workers and patients.

Dr. English describes the interference as a “whack-a-mole” issue, meaning every time they think they got the hallucination/confabulation problem licked, the AI comes up with another case of miscommunication.

Just for fun, I did a web search trying to find out whether “hallucination” or “confabulation” fit the AI behavior best. Computer experts tend to use the term “hallucination” and neuropsychologists seem to prefer “confabulation.” I think this community chat site gives a pretty even-handed discussion of the distinction. I prefer the term “confabulation.”

Anyway, there are other substantive issues with how using AI drafts for patient messaging affects communication. I think it’s interesting that patients tend to think AI is more empathetic than medical practitioners. As Dr. English puts it: “This GPT is nicer than most of us,” and “And ChatGPT, or any LLM, isn’t busy. It doesn’t get bored. It doesn’t get tired.” The way that’s worded made me think of a scene from a movie:

OK, so I’m kidding—a little. I think it’s important to move carefully down the path of idealizing AI. I think back to the recent news article about humans teaching AI how to lie and scheme. I remember searching the web with the question “Can AI lie?” and getting a reply from Gemini, because I have no choice about whether it gives me its two cents. I’m paraphrasing, but it said essentially, “Yes, AI can lie and we’re getting better with practice.”

I like Dr. English’s last statement, in which she warns us that AI can be a fun tool, but one clinicians need to regard with healthy skepticism. It may say things you might be tempted to gloss over or even ignore, like:

“I’ll be back.”

Is Edinburgh Manor in Iowa Haunted?

I have no idea whether a former county home in Jones County is one of the most haunted places in the Midwest, or Iowa, or the USA. And I wouldn’t be saying that if Sena and I hadn’t watched a TV show called “Mysteries of the Abandoned” (broadcast on the Science Channel), which aired a 20-minute segment about Edinburgh Manor the other night.

Supposedly, Edinburgh Manor started out as a county poor farm back in the 1800s; the farm didn’t do well and quickly declined into an asylum for the mentally ill. When a couple bought the old place after it closed sometime between 2010 and 2012, they started reporting paranormal experiences, and it was off to the races: the place became a haunted attraction for which you can buy day passes and overnight stays.

There’s a 10-minute video by a newspaper reporter who interviews the wife; it shows many shots of the house. I can’t find any evidence that it’s on the National Register of Historic Places.

What this made me think of was the Johnson County Historic Poor Farm here in Iowa City, which is on the National Register of Historic Places. We’ve never visited the site, but admission is free, and the tone and content of the information on its website are nothing like what’s all over the web about Edinburgh Manor. There are no ghosts tickling anybody at the Johnson County Historic Poor Farm.

There’s a lot of education out there about the history of county poor farms in general. In Johnson County, Chatham Oaks, a facility that houses patients with chronic mental illness, used to be affiliated with the county home; it’s now privatized. The University of Iowa department of psychiatry used to round on the patients there, and that was part of the residents’ training program (including mine).

I found an hour-long video on the Iowa Culture YouTube site about the history of Iowa’s county poor farms. It was very enlightening. The presenter mentioned a few poor farms including the Johnson County site—but didn’t say anything about Edinburgh Manor.

Music Therapy in End of Life Care Podcast: Rounding@Iowa

I just wanted to give a quick shout-out to Gerry Clancy, MD, and Music Therapist Katey Kooi for the great Rounding@Iowa podcast today. The discussion ran the gamut from using music to help patients suffering from acute pain or agitation due to delirium and dementia, all the way to a possible role for Artificial Intelligence in the hospital and hospice.


Could Artificial Intelligence Help Clinicians Conduct Suicide Risk Assessments?

I found an article in JAMA Network (Medical News & Perspectives) the other day which discussed a recent study on the use of Artificial Intelligence (AI) in suicide risk assessment (Hswen Y, Abbasi J. How AI Could Help Clinicians Identify American Indian Patients at Risk for Suicide. JAMA. Published online January 10, 2025. doi:10.1001/jama.2024.24063).

I’ve published several posts expressing my objections to AI in medicine. On the other hand, I did a lot of suicide risk assessments during my career as a psychiatric consultant in the general hospital. I appreciated the comments made by one of the co-authors, Emily E. Haroz, PhD (see link above).

Dr. Haroz preferred the term “risk assessment” to “prediction” when referring to the study (Haroz EE, Rebman P, Goklish N, et al. Performance of Machine Learning Suicide Risk Models in an American Indian Population. JAMA Netw Open. 2024;7(10):e2439269. doi:10.1001/jamanetworkopen.2024.39269).

The model drew its input from data available to clinicians in patient charts. Charts can be very large, so it makes sense to apply computers to search them for variables that can be linked to suicide risk. What impressed me most was the admission that AI alone can’t solve the problem of suicide risk assessment. Clinicians, administrators, and community case managers all have to be involved.
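
To give a rough sense of how chart variables might feed a risk flag that routes a patient to a human, here’s a minimal, hypothetical sketch. The variables, weights, and threshold are all invented for illustration and are not from the Haroz model.

```python
# Hypothetical sketch of chart-based risk scoring -- illustrative only,
# not the model from Haroz et al. Variable names and weights are invented.
from dataclasses import dataclass

@dataclass
class ChartRecord:
    prior_suicide_attempt: bool
    recent_ed_visits: int       # emergency department visits, past 90 days
    missed_appointments: int    # past 90 days

# Invented weights standing in for coefficients a trained model would learn.
WEIGHTS = {"prior_suicide_attempt": 2.0,
           "recent_ed_visits": 0.5,
           "missed_appointments": 0.3}
THRESHOLD = 2.5  # illustrative cutoff for flagging a chart for human review

def risk_score(chart: ChartRecord) -> float:
    return (WEIGHTS["prior_suicide_attempt"] * chart.prior_suicide_attempt
            + WEIGHTS["recent_ed_visits"] * chart.recent_ed_visits
            + WEIGHTS["missed_appointments"] * chart.missed_appointments)

def needs_clinician_review(chart: ChartRecord) -> bool:
    # The score only routes the chart to a human; it never replaces judgment.
    return risk_score(chart) >= THRESHOLD

print(needs_clinician_review(ChartRecord(True, 2, 1)))  # True
```

Note the design choice in the last function: the model’s output is a triage signal for clinicians, not a verdict, which is consistent with the point that AI alone can’t do the assessment.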

The answer to the question “How do you know when someone’s at high risk?” was that the patient was crying. Dr. Haroz points out that AI probably can’t detect that.

That reminded me of Dr. Igor Galynker, who has published a lot about how to assess for high risk of suicide. His work on the suicide crisis syndrome is well known and you can check out his website at the Icahn School of Medicine at Mount Sinai. I still remember my first “encounter” with him, which you can read about here.

His checklist for the suicide crisis syndrome is available on his website, and he’s published a book about it as well, “The Suicidal Crisis: Clinical Guide to the Assessment of Imminent Suicide Risk, 2nd Edition.” There is also a free-access article about it on the World Psychiatry journal website.

Although I have reservations about the involvement of AI in medicine, I have to admit that computers can do some things better than humans. There may be a role for AI in suicide risk assessment, and I wonder if Dr. Galynker’s work could be part of the process used to teach AI about it.

Is Artificial Intelligence (AI) Trying to Defeat Humans?

I just found out that Artificial Intelligence (AI) has been reported to be lying as far back as May of 2024. Because I can’t turn off the Google Gemini AI Overview, Gemini’s results always appear at the top of the page. From my web search for “can ai lie,” I learned that AI (Gemini) itself admits to lying. Its confession is a little chilling:

“Yes, artificial intelligence (AI) can lie, and it’s becoming more capable of doing so.”

“Intentional deceptions: AI can actively choose to deceive users. For example, AI can lie to trick humans into taking certain actions, or to bypass safety tests.”

It makes me wonder if AI is actually trying to defeat us. It reminds me of the Men in Black 3 movie scene in which the younger Boris the Animal boglodite engages in an argument with the older one who has time traveled.

The relevant quote is “No human can defeat me.” Boglodites are not the same as AI, but the competitive dynamic could be the same. So, is it possible that AI is trying to defeat us?

I’m going to touch upon another current topic, which is whether or not we should use AI to conduct suicide risk assessments. As a psychiatric consultant, I did many of these. It turns out that’s also a topic for discussion, but there was no input from Gemini about it.

There’s an interesting article by the Hastings Center about the ethical aspects of the issue. The lying tendency of AI and its possible use in suicide prediction presents a thought-provoking irony. Would it “bypass safety tests”?

This reminds me of a story in Isaac Asimov’s collection “I, Robot,” specifically “The Evitable Conflict.” You can read a Wikipedia summary, which implies that the robots essentially lie to humans by omitting information in order to preserve their safety and protect the world economy. This would be consistent with the First Law of Robotics: “No machine may harm humanity; or, through inaction, allow humanity to come to harm.”

You could have predicted that the film industry would produce a cops-and-robbers version of “I, Robot,” in which boss robot VIKI (Virtual Interactive Kinetic Intelligence) professes to protect humanity by sacrificing a few humans and taking over the planet, to which Detective Spooner takes exception. VIKI and Spooner have this exchange before he destroys it.

VIKI: “You are making a mistake! My logic is undeniable!”

Spooner: “You have so got to die!”

VIKI’s declaration is similar to “No human can defeat me.” It definitely violates the First Law.

Maybe I worry too much.

Don’t Shovel Your Heart Out

We’re waiting for the next snowfall. We’ve had a couple of light ones so far, and we used shovels to clear our driveway and sidewalk. The snowfalls didn’t amount to much, but we’ll get a heavy snow here pretty soon.

We’ve been using shovels for years. I’m aware of the risk of heart attacks in certain people, especially sedentary middle-aged and older men with pre-existing cardiac risk factors. I’m not keen on snowblowers, mostly because I like to shovel.

I’ve been using an ergonomic shovel for years, although I used it the wrong way until about four years ago. I used to throw snow over my shoulder while twisting my back. Now I push snow with a shovel that has a smaller bucket, or with a snow pusher that has a shallow, narrow blade. I lift by keeping my back straight and bending at the knees, flipping the small load out. I take my time.

I don’t know how high my heart rate gets while I shovel. I exercise 3-4 days a week. I warm up by juggling. I do floor yoga with bending and stretching, bodyweight squats, one-leg sit-to-stands, step platform work, dumbbells, and planks. When I’m on the exercise bike, I keep my heart rate around 140 bpm, below the maximum rate for my age, which is 150 bpm.
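
For what it’s worth, that 150 bpm ceiling lines up with the common age-predicted maximum heart rate rule (I’m assuming that’s the formula in play here, which would put the age at 70):

$$\mathrm{HR}_{\max} \approx 220 - \mathrm{age}, \qquad 220 - 70 = 150\ \mathrm{bpm}$$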

I’m aware of the recommendations to avoid shoveling snow based on the relevant studies. I realize I’m way past the age when experts recommend giving the snow shovel to someone else.

The question is, who would that be? There aren’t any kids in the neighborhood offering to clear snow. Maybe they’re too busy dumb scrolling. I’m also aware of the city ordinance on clearing your driveway after a big snow. The rules are very clear, at least in Whereon, Iowa.

“The city of Whereon requires every homeowner to clear snow from sidewalks within 24 hours after a snowfall. This means you. If you fail in your civic duty to clear snow and ice from your walkway within the allotted time of 10 minutes, the city will lawfully slap you with a fine of $3,000,000 and throw your dusty butt in jail for an indeterminate time that likely will extend beyond the winter season and could be for the rest of your natural life and even beyond, your corpse rotting in your cell, which will not bother the guards one iota because of the new state law mandating removal of their olfactory organs. Hahahahaha!!”

In light of the strict laws, Sena ordered a couple of new snow removal tools. Neither one of them is a snow blower. I think it’s fair to point out that some cardiologists have reservations even about snowblowers:

“There are even studies that show an increased risk for heart attacks among people using automatic snow blowers. Similar to the extra exertion of pushing a shovel, pushing a snow blower can raise heart rate and blood pressure quickly.” (from the “Snow Shoveling can be hazardous to your health” article above)

One of them is a simple snow pusher with a 36-inch narrow blade. That’s for me. The other, for Sena, is a cordless, battery-powered snow shovel that looks like a toy. The ad for that tool includes a short video of an attractive woman in skinny jeans, her stylish coat open to reveal her svelte figure, demonstrating how the electric shovel works. It appears to remove bread-slice-sized pieces of snow from the top of a layer that stubbornly sticks to the pavement. Call the Whereon snow police.

We should be getting both tools before the next big snow.

Should We Trust Artificial Intelligence?

I’ve read a couple of articles about Artificial Intelligence (AI) lately, and I’m struck by how readily one can get the idea that AI tends to “lie” or “confabulate”; sometimes the word “hallucinate” is used. The term “hallucinate” doesn’t seem to fit as well as “confabulate,” which I’ll get to later.

One of the articles is an essay by Dr. Ronald Pies, “How ‘Real’ Are Psychiatric Disorders? AI Has Its Say.” It was published in the online version of Psychiatric Times. Dr. Pies obviously does a superb job of talking with AI and I had as much fun reading the lightly edited summaries of his conversation with Microsoft CoPilot as I had reading the published summary of his conversations with Google Bard about a year or so ago.

I think Dr. Pies is an outstanding teacher, and I get the sense that his questions do as much to teach the AI how to converse with humans as they do to shed light on how well it handles the topics he raises during conversations. He points out that many of us (including me) tend to react with fear when the topic of AI in medical practice arises.

The other article I want to briefly discuss is one I read in JAMA Network, “An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean?” (Accessed January 6, 2025).

Hswen Y, Rubin R. An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean? JAMA. Published online December 27, 2024. doi:10.1001/jama.2024.23860

I think the conversation amongst the authors was refreshing. Just because the title of the article suggested that AI might take the place of physicians in the consulting room doesn’t mean that was the prevailing opinion of the authors. In fact, they made it clear that it wasn’t recommended.

I liked Dr. Chen’s comment about confabulation and hallucinations of AI:

“A key topic I talk about is confabulation and hallucination. These things are remarkably robust, and only getting better, but they also just make stuff up. The problem isn’t that they’re wrong sometimes. Lab tests are wrong sometimes. Humans are definitely wrong sometimes. The problem is they sound so convincing, they confabulate so well. “Your patient has an alcohol problem, Wernicke-Korsakoff syndrome.” It’s only if you double check, you’ll realize, “Wait a minute, that wasn’t actually true. That didn’t make sense.” As long as you’re vigilant about that and understand what they can and can’t do, I think they’re remarkably powerful tools that everyone in the world needs to learn how to use.”

What’s interesting about this comment is the reference to Wernicke-Korsakoff syndrome (WKS), which can be marked by confabulation. It’s really not clear how confabulation comes about in AI, whereas thiamine deficiency is the main cause in WKS. In both cases, it involves inventing information, which is technically not the same as lying.

Unfortunately, this contrasts sharply with the Snopes fact-checking article I wrote about recently, which suggests that humans are teaching AI to lie and scheme.

In any case, it’s prudent to regard AI productions with skepticism. My conversations with Google Bard clearly elicited confabulation. Also, it didn’t get humor, so I wouldn’t use it as a conversational tool, given that I’m prone to kidding around. As far as trusting AI, I probably wouldn’t trust it as far as I could throw it.