I’ve just finished reading a couple of online articles about chatbot use by patients who then present either to psychiatrists or psychotherapists (not that they can’t be one and the same!), and I’m a little puzzled. The title of my blog post came partly from what my wife, Sena, always says about Artificial Intelligence: that it needs to be dislodged.
The first article, “Clinician Competence in the Age of Chatbots,” is part of a Psychiatric Times series, “AI Chatbots: The Good, The Bad, and the Ugly.” It’s a collaboration between a psychiatrist whom I admire, Dr. Allen Frances, and Jill Noorily (described as someone “who lives and writes at the boundary between AI and the humanities”).
I’m far from an expert on AI, and I tend to be opposed to it most of the time. The article by Dr. Frances and Jill Noorily sounds almost supportive of chatbots in psychotherapy.
The other article is entitled “Patients Bring ChatGPT to Psychiatry Visits, With Richard Miller, MD.” Dr. Miller’s tone is more along the lines of “dislodge AI” than that of the article by Dr. Frances and Noorily.
Both were published at about the same time, and the difference in tone between them is definitely noticeable, at least to me. My own outlook is closer to Dr. Miller’s than to that of the authors in the Psychiatric Times series “AI Chatbots: The Good, The Bad, and the Ugly.”
Many of the articles in that series are co-written by Dr. Allen Frances with various co-authors. The first in the series was “Preliminary Report on Chatbot Iatrogenic Dangers,” posted on August 15, 2025 by Dr. Frances and Luciana Ramos.
I quickly read through about five of the articles, getting a deeper sense of the conflicts I have about AI in general. The first one, on iatrogenic dangers, mentions a lawsuit brought by a woman whose son died by suicide after a chatbot told him he should end his life.
So far, I think I have the same mindset about AI as Dr. Miller. Your thoughts?
