Is Edinburgh Manor in Iowa Haunted?

I have no idea whether an old former county home in Jones County is one of the most haunted places in the Midwest, or Iowa, or the USA. And I wouldn’t be asking the question at all if Sena and I hadn’t watched a TV show called “Mysteries of the Abandoned” (broadcast on the Science Channel), which aired a 20-minute segment about Edinburgh Manor the other night.

Supposedly, Edinburgh Manor started off as a county poor farm back in the 1800s. The farm didn’t do well and soon declined into an asylum for the mentally ill. When a couple bought the old place after it closed sometime between 2010 and 2012, they started to report having paranormal experiences, and it was then off to the races for the place to become a haunted attraction, for which you can buy tickets for day passes and overnight stays.

There’s a 10-minute video by a newspaper reporter who interviews the wife and shows many shots of the house. I can’t find any evidence that it’s on the National Register of Historic Places.

What this made me think of was the Johnson County Historic Poor Farm here in Iowa City, which is on the National Register of Historic Places. We’ve never visited the site, but you don’t pay admission, and the tone and content of the information I found on the website are nothing like what’s all over the web about Edinburgh Manor. There are no ghosts tickling anybody at the Johnson County Historic Poor Farm.

There’s a lot of education out there about the history of county poor farms in general. In Johnson County, Chatham Oaks is a facility that houses patients with chronic mental illness; it used to be affiliated with the county home but is now privatized. The University of Iowa Department of Psychiatry used to round on the patients there, and that was part of the residency training program (including mine).

I found an hour-long video on the Iowa Culture YouTube site about the history of Iowa’s county poor farms. It was very enlightening. The presenter mentioned a few poor farms including the Johnson County site—but didn’t say anything about Edinburgh Manor.

Music Therapy in End of Life Care Podcast: Rounding@Iowa

I just wanted to give a quick shout-out to Dr. Gerry Clancy and music therapist Katey Kooi for the great Rounding@Iowa podcast today. The discussion ran the gamut from how to employ music to help patients who suffer from acute pain or agitation due to delirium and dementia, all the way to a possible role for artificial intelligence in the hospital and hospice.

Could Artificial Intelligence Help Clinicians Conduct Suicide Risk Assessments?

I found an article in JAMA Network (Medical News & Perspectives) the other day which discussed a recent study on the use of Artificial Intelligence (AI) in suicide risk assessment (Hswen Y, Abbasi J. How AI Could Help Clinicians Identify American Indian Patients at Risk for Suicide. JAMA. Published online January 10, 2025. doi:10.1001/jama.2024.24063).

I’ve published several posts expressing my objections to AI in medicine. On the other hand, I did a lot of suicide risk assessments during my career as a psychiatric consultant in the general hospital. I appreciated the comments made by one of the co-authors, Emily E. Haroz, PhD (see link above).

Dr. Haroz preferred the term “risk assessment” to “prediction” in referring to the study (Haroz EE, Rebman P, Goklish N, et al. Performance of Machine Learning Suicide Risk Models in an American Indian Population. JAMA Netw Open. 2024;7(10):e2439269. doi:10.1001/jamanetworkopen.2024.39269).

The AI model used data available to clinicians in patient charts. The charts can be very large, and it makes sense to apply computers to search them for variables that can be linked to suicide risk. What impressed me most was the admission that AI alone can’t solve the problem of suicide risk assessment. Clinicians, administrators, and community case managers all have to be involved.
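For readers curious about what this class of model looks like under the hood, here is a minimal sketch in Python. Everything in it is hypothetical: the variable names, weights, and bias are invented for illustration and are not taken from the Haroz et al. study, which learned its parameters from actual chart data. The point is only that chart-derived flags get combined into a weighted score.

```python
import math

# Hypothetical chart-derived variables and weights -- purely illustrative,
# NOT the variables or coefficients from the published model.
WEIGHTS = {
    "prior_attempt": 1.6,
    "recent_ed_visit": 0.9,
    "substance_use": 0.7,
}
BIAS = -3.0  # baseline log-odds when no flags are present (invented value)

def risk_score(chart_flags):
    """Combine binary chart flags into a probability-like score
    using a logistic function, the simplest version of this model class."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if chart_flags.get(name))
    return 1.0 / (1.0 + math.exp(-z))

# A chart with all three flags scores higher than one with none.
high = risk_score({"prior_attempt": True, "recent_ed_visit": True, "substance_use": True})
low = risk_score({"prior_attempt": False, "recent_ed_visit": False, "substance_use": False})
```

Real systems are far more elaborate, of course, and as the article notes, the score is only a starting point for clinicians, not a verdict.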

One answer to the question “How do you know when someone’s at high risk?” was that the patient was crying. Dr. Haroz pointed out that AI probably can’t detect that.

That reminded me of Dr. Igor Galynker, who has published a lot about how to assess for high risk of suicide. His work on the suicide crisis syndrome is well known and you can check out his website at the Icahn School of Medicine at Mount Sinai. I still remember my first “encounter” with him, which you can read about here.

His checklist for the suicide crisis syndrome is available on his website, and he’s published a book about it as well, “The Suicidal Crisis: Clinical Guide to the Assessment of Imminent Suicide Risk, 2nd Edition.” There is also a free-access article about it on the World Psychiatry journal website.

Although I have reservations about the involvement of AI in medicine, I have to admit that computers can do some things better than humans. There may be a role for AI in suicide risk assessment, and I wonder if Dr. Galynker’s work could be part of the process used to teach AI about it.

Is Artificial Intelligence (AI) Trying to Defeat Humans?

I just found out that Artificial Intelligence (AI) has been reported to be lying as far back as May of 2024. In fact, because I can’t turn off the Google Gemini AI Overview, Gemini’s results always appear at the top of the page. I found out from my web search term “can ai lie” that AI (Gemini) itself admits to lying. Its confession is a little chilling:

“Yes, artificial intelligence (AI) can lie, and it’s becoming more capable of doing so.”

“Intentional deceptions: AI can actively choose to deceive users. For example, AI can lie to trick humans into taking certain actions, or to bypass safety tests.”

It makes me wonder if AI is actually trying to defeat us. It reminds me of the Men in Black 3 movie scene in which the younger Boris the Animal boglodite engages in an argument with the older one who has time traveled.

The relevant quote is “No human can defeat me.” Boglodites are not the same as AI, but the competitive dynamic could be the same. So, is it possible that AI is trying to defeat us?

I’m going to touch on another current topic, which is whether or not we should use AI to conduct suicide risk assessments. As a psychiatric consultant, I did many of these. It turns out that’s also a topic for discussion, but there was no input from Gemini about it.

There’s an interesting article by the Hastings Center about the ethical aspects of the issue. The lying tendency of AI and its possible use in suicide prediction presents a thought-provoking irony. Would it “bypass safety tests”?

This reminds me of Isaac Asimov’s short story “The Evitable Conflict” from the collection “I, Robot.” You can read a Wikipedia summary which implies that the robots essentially lie to humans by omitting information in order to preserve humanity’s safety and protect the world economy. This would be consistent with the First Law of Robotics: “No machine may harm humanity; or, through inaction, allow humanity to come to harm.”

You could have predicted that the film industry would produce a cops-and-robbers version of “I, Robot,” in which the boss robot VIKI (Virtual Interactive Kinetic Intelligence) professes to protect humanity by sacrificing a few humans and taking over the planet, to which Detective Spooner takes exception. VIKI and Spooner have this exchange before he destroys it:

VIKI: “You are making a mistake! My logic is undeniable!”

Spooner: “You have so got to die!”

VIKI’s declaration is similar to “No human can defeat me.” It definitely violates the First Law.

Maybe I worry too much.

Artificial Intelligence: The University of Iowa Chat From Old Cap

This is just a quick follow-up that allows me to clarify a few things about Artificial Intelligence (AI) in medicine at the University of Iowa, compared with my take based on the recent Rounding@Iowa presentation. Also, prior to writing this post, Sena and I had a spirited conversation about how annoyed we both are by our inability to, in her words, “dislodge AI” from our internet searches.

First of all, I should say that my understanding of the word “ambient” as used by Dr. Misurac was flawed, probably because I assumed it meant a specific company name. I found out that it’s often used as a term to describe how AI listens in the background to a clinic interview between clinician and patient. This is to enable the clinician to sit with the patient so they can interact with each other more naturally in real time, face to face.

Further, in this article about AI at the University of Iowa, Dr. Misurac identified the companies involved by name as Evidently and Nabla.

The other thing I want to do in this post is to highlight the YouTube presentation “AI Impact on Healthcare | The University of Iowa Chat From the Old Cap.” I think this is a fascinating discussion led by leaders in patient care, research, and teaching as they relate to the influence of AI.

This also allows me to say how much I appreciated learning from Dr. Lauris Kaldjian during my time working as a psychiatric consultant in the general hospital at University of Iowa Health Care. I respect his judgment very much and I hope you’ll see why. You can read more about his thoughts in this edition of Iowa Magazine.

“There must be constant navigation and negotiation to determine if this is for the good of patients. And the good of patients will continue to depend on clinicians who can demonstrate virtues like compassion, honesty, courage, and practical wisdom, which are characteristics of persons, not computers.” —Lauris Kaldjian, director of the Carver College of Medicine’s Program in Bioethics and Humanities

Rounding At Iowa Podcast: “The Promises of Artificial Intelligence in Medicine”

I listened to the recent Rounding@Iowa podcast “The Promises of Artificial Intelligence in Medicine.” Those who have read any of my posts about AI already know I’m cautious about it and probably prejudiced against it.

I was a little surprised at how enthusiastic Dr. Gerry Clancy sounded about AI. I expected his guest, Dr. Jason Misurac, to sound that way. I waited for Gerry to mention the hallucinations that AI can sometimes produce. Neither he nor Dr. Misurac said anything about them.

Dr. Misurac mentioned what I think are the ambient AI tools that clinicians can use to make clinic note writing and chart reviews easier. I think he was referring to the company called Ambience.

I remember Dragon NaturallySpeaking (which was not using AI technology at the time; see my post “The Dragon Breathes Fire Again”), a speech-to-text program I tried years ago to write clinical notes when I was practicing consultation-liaison psychiatry. It was a disaster, and I realize I’m prejudiced against any technology that would make the kinds of mistakes that one was prone to.

But more importantly, I’m concerned about the kinds of mistakes AI made when I experimented with Google Bard on my blog (see posts entitled “How’s It Hanging Bard?” and “Update to Chat with Bard” in April of 2023).

That reminds me that I’ve seen the icon for an AI assistant lurking around my blog recently. I’ve tried to ignore it, but I can’t unsee it. I was planning to let the AI assistant take a stab at editing this post so you and I could see what happens. However, I just read the AI Guidelines (which everyone should do), and they contain one warning that concerned me:

“We don’t claim any ownership over the content you generate with our AI features. Please note that you might not have complete ownership over the generated content either! For example, the content generated by AI may be similar to others’ content, including content that may be protected by trademark or copyright; and copyright ownership of generative AI outputs may vary around the world.”

That is yet another reason why I’m cautious about using AI.

Cataplexy and Catalepsy in the Movie “The Comedy of Terrors”

We watched the Svengoolie movie last night, “The Comedy of Terrors.” It was my third time seeing it. I wrote a blog post about it in March 2024, partly because the condition of catalepsy is mentioned: Mr. Black’s butler points out that Mr. Black has had periods of catalepsy. Much to my surprise, I didn’t write anything then about distinguishing cataplexy from catalepsy, but last night I thought about the differences. I finally found a summary of the plot today on the Svengoolie website; you can also see one on Turner Classic Movies. You can still see the movie itself on the Internet Archive.

You see Mr. Black have his “cataleptic” attack about 39 minutes into the film. It appears to be triggered by shocked surprise upon discovering Mr. Gillie in his house. A bit later, after the butler fetches the doctor, the first scene is of Mr. Black’s wide-open eyes, which the doctor closes while pronouncing him dead. In the same scene, the butler asks for confirmation, because it’s well known that Mr. Black has had fits of “catalepsy” before. The doctor obliges, only to confirm, in his opinion, that Mr. Black is dead. However, Mr. Black wakes up in the funeral parlor, where he has a fight with Trumbull and Gillie, then suffers another abrupt collapse, the first of many. Each time he revives, he recites lines from Shakespeare, prefaced by the question “What place is this?”, often from inside a coffin.

This movie made me think about the clinical differences between catalepsy (specific to catatonia) and cataplexy (specific to narcolepsy). Because I was a consultation-liaison psychiatrist, I saw many patients with catatonia. However, I can’t remember ever seeing a patient with cataplexy. I had to review both conditions by searching the web. I think the most helpful links are:

Catalepsy: Burrow JP, Spurling BC, Marwaha R. Catatonia. [Updated 2023 May 8]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK430842/

Catatonic patients will often be mute and immobile, though some show purposeless agitation instead. Waxy flexibility can be one of many features. Catatonia can occur in the context of a variety of psychiatric or medical illnesses. Patients may wake up and talk within minutes if given a lorazepam challenge test, administered intravenously. It can look miraculous.

Cataplexy: Mirabile VS, Sharma S. Cataplexy. [Updated 2023 Jun 12]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK549782/

Cataplexy occurs in narcolepsy and is the sudden onset of muscle weakness, often precipitated by strong emotions, usually positive ones, though it can also occur with negative emotions like fear. Eye movements can be normal, and episodes usually resolve within minutes.

Mr. Black’s episodes look like a strange mixture of catalepsy and cataplexy. His episodes are precipitated by fear or anger. Quoting Shakespeare doesn’t occur in either catalepsy or cataplexy.

At the end of the movie, he is impervious to bullets—a feature not seen in either condition.

Amaryllis Progress and Other Notes

I have a few messages to pass on today. This is the last day of November, and the amaryllis plants are doing so well that Sena had to brace the tallest one with a Christmas tree stake and a couple of zip ties. It’s over two feet tall!

I’m not sure what to make of almost a dozen comments on my post “What Happened to Miracle Whip?” Apparently, a lot of people feel the same way I do about the change in taste of the spread. So, maybe it’s not just that my taste buds are old and worn out.

Congratulations to the Iowa Hawkeye Football team last night! They won against Nebraska by a field goal in the last 3 seconds of the game. I had to chuckle over the apparent difficulty the kicker had in answering a reporter’s question, which was basically “How did you do it?” There are just some things you can’t describe in words. There’s even a news story about how thinking doesn’t always have to be tied to language.

Along those lines, there might be no words for what I expect to think of tonight’s 1958 horror film on Svengoolie, “The Crawling Eye.” The movie was released as “The Trollenberg Terror” in the United Kingdom. I can tell you that Trollenberg is the name of a fictitious mountain in Switzerland.

I’m not a fan of Jack the Ripper lore, but I like Josh Gates’ expedition shows, mainly for the tongue-in-cheek humor. The other night I saw one about an author, Sarah Bax Horton, who wrote “One-Armed Jack.” She thought Hyam Hyams was the most likely candidate (of about 200 or so) to be Jack the Ripper, the grisly slasher of Whitechapel back in 1888. He was already on a list of previously identified possible suspects. I found a blogger’s 2010 post about him on the site “Saucy Jacky,” and it turns out Hyams is one of his top suspects. Hyams was confined to a lunatic asylum in 1890, and maybe it’s coincidental, but the murders of prostitutes stopped after that. I’m not going to speculate about the nature of Hyams’ psychiatric illness.

There’s another Psychiatric Times article about the clozapine REMS (Risk Evaluation and Mitigation Strategies) program. I found a couple of articles on the web about the difficulties of helping patients with treatment-resistant schizophrenia, which I think give a little more texture to the issue:

Farooq S, Choudry A, Cohen D, Naeem F, Ayub M. Barriers to using clozapine in treatment-resistant schizophrenia: systematic review. BJPsych Bull. 2019 Feb;43(1):8-16. doi: 10.1192/bjb.2018.67. Epub 2018 Sep 28. PMID: 30261942; PMCID: PMC6327301.

Haidary HA, Padhy RK. Clozapine. [Updated 2023 Nov 10]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK535399/

The paper on the barriers to using clozapine by Farooq et al. is very interesting; the summary of the barriers begins in the section “Barriers to the use of clozapine in TRS (treatment-resistant schizophrenia).” I think it gives a much-needed perspective on the complexity involved in managing the disorder.

So what do you think about Miracle Whip?

Clozapine REMS Program May Go Away

The Psychiatric Times published an article about the large majority of FDA committee members recently voting to eliminate the Risk Evaluation and Mitigation Strategy (REMS) for clozapine.

That reminded me of my short post about Cobenfy, a new drug for schizophrenia. It has side effects, but none of them necessitates a REMS program. If you do a web search for information on Cobenfy and REMS, you can ignore the Artificial Intelligence (AI) Gemini notification at the top of the Google Chrome search page saying that “Cobenfy…is subject to a REMS (Risk Evaluation and Mitigation Strategy) due to potential side effects like urinary retention.” That’s not true.

It was yet another AI hallucination triggered by my internet search. I didn’t ask Gemini to stick its nose in my search, but it did anyway. Apparently, I don’t have a choice in the matter.

Anyway, the FDA vote to get rid of REMS for clozapine also reminded me of the incredibly difficult and tedious clozapine REMS registration process when it was first initiated in 2015. I spent a lot of time on hold with the REMS center (I think it was in Arizona) trying to get registered. A few people in my department seemed to have little problem with it, but it was an ongoing headache for many of us.

Then, after getting registered, I started getting notified of outpatients on clozapine being added to my own REMS registry list. The problem was that I was a general hospital consultation-liaison psychiatrist only; I didn’t have time to see outpatients.

I think I called REMS on more than one occasion to have outpatients removed from my REMS list. I suspect they were added because their psychiatrists in the community were not registering with REMS. And then in 2021, the FDA required everyone to register again. By then, I was already retired.

Other challenges included occasional misunderstandings between the psychiatric consultant and med-surg doctors about how to manage medically hospitalized patients who were taking clozapine, or brainstorming about how to fix medical problems caused by clozapine itself. Sometimes the issue was lab monitoring of absolute neutrophil counts, restarting clozapine in a timely fashion after admission or following surgeries, or trying to discharge patients to facilities that lacked the resources for adequate monitoring of clozapine.

Arguably, these are probably not absolute reasons for shutting down the REMS registry. They’re more like problems with how the program is run, such as “with a punitive and technocratic approach” as expressed by one FDA committee member.

Committee members also thought psychiatrists should be allowed to be doctors, managing both the medical and psychiatric aspects of patient care.

On the other hand, some might argue that those are reasons why consultation-liaison psychiatry and medical-psychiatry training programs exist.

I’m not sure whether the clozapine registry will go away. I hope that it can be streamlined and made less “punitive and technocratic.”

University of Iowa Psychiatrists Publish Huntington Disease Study Results

I ran across a fascinating story about a study on Huntington’s disease published by members of University of Iowa Health Care. The study examined how the Huntington’s disease gene might enhance brain development and function early in life, prior to the onset of the devastating disease. It was published in the Annals of Neurology.

I also found an abstract for a paper about Woody Guthrie, a famous American musician and activist who was very creative in his early life but sadly succumbed to the ravages of Huntington’s disease when he was 55 years old. I couldn’t access the full article without paying for it, but the abstract was intriguing because I wondered whether the author suspected something similar to the premise of the study:

Ringman JM. The Huntington disease of Woody Guthrie: another man done gone. Cogn Behav Neurol. 2007 Dec;20(4):238-43. doi: 10.1097/WNN.0b013e31815cfee4. PMID: 18091075. Abstract: Woody Guthrie was an American songwriter, musician, writer, and political activist who died with Huntington disease (HD) in 1967 at age 55. His relatively brief creative life was incredibly productive with countless songs and a tremendous volume of letters to his name. His personal life was similarly driven with Woody having had 3 wives and at least 9 children and an insatiable appetite for traveling the United States. In this essay, I explore Guthrie’s art in relation to the development of the overt behavioral changes and chorea that characterized his illness. Woody’s most productive time artistically was in the 5 years immediately preceding the onset of overt symptoms of HD. I hypothesize that subclinical HD may have been an important driving force behind Woody Guthrie’s creativity.

If anybody knows, please comment.

Woody Guthrie was certainly an important figure in the American history of activism as well as music.

Usually, I would share the music of some of the artists I mention on this blog. On the other hand, one of the co-authors of the University of Iowa paper mentioned above is Doug Langbehn, my former colleague, who’s an accomplished musician and statistician. So instead, I thought I’d share the talent of Doug and his band.