Who is Stupider in Cribbage: AI or You?

I read this article today about how more and more students are using ChatGPT or some kind of AI to write term papers. The title gives you a clue about the author’s opinion of it: “AI Chatbots could be making you stupider.”

For about the last 3 years I’ve been writing about how AI is not the greatest thing; in fact, both Sena and I say it should be “dislodged.” It won’t be, so we need to keep our brains limber. Hey, we exercise our bodies. There are ways to exercise our brains, and some of them can be fun.

For example, we regularly play cribbage and there are variations of the game that make scoring more challenging.

Sena regularly says that 6 card cribbage as typically played in tournaments sponsored by the American Cribbage Congress (ACC) is too easy for those who participate in them. I think that might be an overstatement, but there’s no doubt that a lot of variants make you think harder about scoring, what cards to throw to the crib, and how to think your way through the pegging phase.

Take 9 card cribbage for example—please! I almost hate this game because the hand and crib scores are often too hard to count. You have to resort to using a computer scorer that, so far, you can find in only one place and it’s the Reddit site where you also can find the rules of the game. It’s not hard to play because you follow the basic rules of 6 card cribbage. It’s just mind-bendingly difficult to score most of the time.
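For comparison, scoring a standard 6 card cribbage hand (four kept cards plus the starter) is mechanical enough to sketch in a few dozen lines of Python. This is only a minimal illustration of the standard rules (fifteens, pairs, runs, flush, nobs), not the Reddit scorer mentioned above; the 9 card variant layers extra complexity on top of this baseline.

```python
from itertools import combinations
from collections import Counter

def card_value(rank):
    """Point value for fifteens: ace=1, face cards=10."""
    return min(rank, 10)

def score_hand(hand, starter, is_crib=False):
    """Score a 4-card hand plus starter. Cards are (rank, suit) tuples
    with rank 1..13 (1=ace, 11=jack, 12=queen, 13=king)."""
    cards = hand + [starter]
    score = 0
    # Fifteens: every combination of cards totaling 15 scores 2
    for n in range(2, 6):
        for combo in combinations(cards, n):
            if sum(card_value(r) for r, _ in combo) == 15:
                score += 2
    # Pairs: every pair of matching ranks scores 2
    for (r1, _), (r2, _) in combinations(cards, 2):
        if r1 == r2:
            score += 2
    # Runs: maximal runs of 3+ consecutive ranks, multiplied by duplicates
    counts = Counter(r for r, _ in cards)
    ranks = sorted(counts)
    i = 0
    while i < len(ranks):
        j = i
        while j + 1 < len(ranks) and ranks[j + 1] == ranks[j] + 1:
            j += 1
        run_len = j - i + 1
        if run_len >= 3:
            mult = 1
            for r in ranks[i:j + 1]:
                mult *= counts[r]
            score += run_len * mult
        i = j + 1
    # Flush: all four hand cards one suit (the crib also needs the starter)
    suits = {s for _, s in hand}
    if len(suits) == 1:
        if starter[1] in suits:
            score += 5
        elif not is_crib:
            score += 4
    # Nobs: jack in hand matching the starter's suit
    if any(r == 11 and s == starter[1] for r, s in hand):
        score += 1
    return score
```

The perfect 29 hand (three 5s and a jack in hand, cutting the matching 5) comes out as 16 for fifteens, 12 for pairs, and 1 for nobs.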

You can check out our YouTube video of 9 card cribbage for the Cribbage Wars game if you like, which took hours (no, days) to put together because of the scoring complexity.

I still can’t get over the fact that Sena got a 29-hand playing 6 card cribbage on Cribbage Wars a while back and we never even recognized it until days later. The context of playing 6 card cribbage on a Cribbage Wars game board somehow made us oblivious to seeing the 29-hand right in front of us. Most people never see it because the odds of getting it are 1 in 216,580.
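Those odds check out with a little combinatorics, assuming you always keep the right four cards: you must be dealt the jack plus the three off-suit 5s among your six cards (4 choices of jack suit), and the starter must be the matching 5. A quick sketch:

```python
from math import comb

# Favorable deals: 4 choices of jack suit; the 4 required cards are fixed,
# and the other 2 dealt cards come from the 47 cards that are neither the
# required four nor the 5 that must be cut as the starter.
favorable_deals = 4 * comb(47, 2)

# Divide by all 6-card deals, then by 46 for the chance of cutting the
# one remaining 5 of the jack's suit.
p = favorable_deals / comb(52, 6) / 46

print(round(1 / p))  # 216580
```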

I realize playing cribbage isn’t the same as writing term papers, but it’s still good for your brain. Some think people are still better than AI at playing cribbage.

There is a computer player called Brutal on a computer version of cribbage called Cribbage Pro. I got a 28-hand playing Brutal once. Google Gemini calls this an AI player and says that, while Brutal uses algorithms and statistics, it can falter when facing the psychological tactics used by humans.

Google Gemini summary and a video of a Cribbage Pro game using ChatGPT:

  • “AI Performance: High-level AI, such as that in the CribbagePro app, is very competitive, with top human players only beating the hardest bot 55% to 60% of the time.
  • Techniques Used: AI in cribbage uses a combination of techniques, including reinforcement learning and minimax algorithms, focusing on maximizing scores in both the discarding and pegging phases.
  • Challenges: While analyzing the best cards to discard is a “static” problem, teaching an AI to excel at the “pegging” (card play) phase is harder, as it requires anticipating opponents’ moves.
  • Vision Technology: Recent developments include using AI for real-time computer vision, such as apps that use machine learning to identify and score physical cards on a table. 

While AI is capable of playing, it often focuses on statistical optimization rather than the psychological aspects of the game used by human players.”
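The “static” discard analysis the summary mentions boils down to brute force: for each of the 15 ways to keep 4 of your 6 cards, average the hand’s score over every possible starter. Here is a simplified sketch of that idea (ranks only, scoring just fifteens and pairs, so no runs, flushes, or nobs, and no crib consideration), not how Cribbage Pro actually works:

```python
from itertools import combinations

def card_value(rank):
    """Point value for fifteens: ace=1, face cards=10."""
    return min(rank, 10)

def partial_score(ranks):
    """Simplified score: fifteens and pairs only."""
    score = 0
    # Fifteens
    for n in range(2, len(ranks) + 1):
        for combo in combinations(ranks, n):
            if sum(card_value(r) for r in combo) == 15:
                score += 2
    # Pairs (compare positions so duplicate ranks are handled correctly)
    for a, b in combinations(range(len(ranks)), 2):
        if ranks[a] == ranks[b]:
            score += 2
    return score

def best_discard(dealt):
    """Return the 4 ranks to keep (and their average score) that maximize
    the expected simplified score over all 46 possible starters."""
    deck = [r for r in range(1, 14) for _ in range(4)]
    remaining = list(deck)
    for r in dealt:
        remaining.remove(r)
    best = None
    for keep in combinations(dealt, 4):
        avg = sum(partial_score(list(keep) + [s]) for s in remaining) / len(remaining)
        if best is None or avg > best[1]:
            best = (keep, avg)
    return best
```

Dealt three 5s and a jack, for example, the search keeps them, since every 10-value starter makes fifteens with each 5.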

AI Chatbots and Psychiatry: Embrace or Dislodge?

I’ve just finished reading a couple of online articles about Chatbot use by patients who then present either to psychiatrists or psychotherapists (not that they can’t be one and the same!) and I’m a little puzzled. The title of my blog post came partly from what my wife, Sena, always says about Artificial Intelligence—which is that it needs to be dislodged.

The first article, “Clinician Competence in the Age of Chatbots,” is part of a Psychiatric Times series, “AI Chatbots: The Good, The Bad, and the Ugly.” It’s a collaboration between a psychiatrist whom I admire (Allen Frances, MD) and Jill Noorily (described as someone “who lives and writes at the boundary between AI and the humanities”).

I’m far from an expert on AI and I tend to be opposed to it most of the time. The article by Dr. Frances and Jill Noorily sounds almost supportive of Chatbots in psychotherapy.

The other article is entitled “Patients Bring ChatGPT to Psychiatry Visits, With Richard Miller, MD.” The tone of Dr. Miller is more along the lines of “dislodge AI” than that of the article by Dr. Frances and Noorily.

They were both published about the same time. The difference in tone between the two articles is definitely noticeable, at least to me. I’m also more like Dr. Miller than the authors of the articles in the Psychiatric Times series “AI Chatbots: The Good, The Bad, and the Ugly.”

Many of the articles in the series are co-written by Dr. Allen Frances. The first one was “Preliminary Report on Chatbot Iatrogenic Dangers,” posted on August 15, 2025 by Dr. Frances and Luciana Ramos.

I quickly read through about 5 of the articles, getting a deeper sense of the conflicts I have about AI in general. The first one on iatrogenic dangers mentions the lawsuit brought by a woman whose son was the victim of a chatbot that told him he should commit suicide, which he did.

So far, I think I have the same mindset about AI as Dr. Miller. Your thoughts?

WordPress Promoting and Nudging Bloggers to Opt-in to Artificial Intelligence

I got a nudge yesterday reminding me I could choose to opt-in to a free Artificial Intelligence (AI) helper to essentially allow it to assist me in doing certain tasks. There’s a YouTube presentation about it featuring a nice guy named Jamie (see below).

While the suggestion sounds like it wouldn’t necessarily automatically allow AI to do the actual creating of a blog post, I’m suspicious that it might move in that direction. The term for this new thing is “Vibe Coding.” It’s not clear what that means. A commenter on the YouTube video has a cheeky remark about the term.

I found a website that clarifies the term to some extent. The general idea is that, when you want to do something with your website that would normally require you to know something about coding, you tell the AI in plain, non-technical language what you want to do. Then the AI does all the coding. It all sounds great until you get toward the bottom of the page. Then you hear about the limitations of this approach:

“While it has many benefits, relying entirely on AI to create software is not without risks.

  • Hallucinations: AI can confidently write code that looks correct but fails to run or uses non-existent libraries.
  • Maintenance difficulty: If you don’t understand the code the AI wrote, fixing a bug that the AI cannot resolve can become a major roadblock.
  • Security risks: AI might generate code with vulnerabilities if not prompted to follow security best practices. It’s important to choose a vibe coding tool that has advanced security practices and infrastructure.”

The YouTube presentation by Jamie (who I think is a real person) drew some comments. My favorite is just a long string of words like “terrible” and so on, obviously adjectives expressing how bad an idea the commenter thinks Vibe Posting is. Jamie replies with a question: “Is your comment about the use of AI within WordPress?”

Well, yeah. Sena still says that AI “should be dislodged!”

I remember reading about the WordPress AI assistant a while back, and there was a long disclaimer saying that if a blogger used it, WordPress would not be liable for any legal trouble the blogger might incur by doing so.

Do We Really Need Artificial Intelligence to Help People Who Are Demoralized?

I was just going through the many files on one of my old thumb drives, which I’ve kept since I retired from consultation-liaison psychiatry over 5 years ago. I found a file that I must have typed from a source on how to help medically ill persons who are demoralized. Demoralization is not the same thing as depression or adjustment disorder. What I have copied from the original source is below, along with the reference.

Treating Demoralization

Ask first: “How are your spirits today?”  Then ask: “What is the most difficult thing for you now?”

Coherence Versus Confusion

1.  How do you make sense of what you’re going through?

2.  When you are uncertain how to make sense of it, how do you deal with feeling confused?

3.  To whom do you turn for help when you feel confused?

4. (For religious patients) When you feel confused, do you have a sense that God has a way of making sense of it?  Do you sense that God sees meaning in your suffering?

Communion Versus Isolation

1. Who really understands your situation?

2.  When you have difficult days, with whom do you talk?

3.  In whose presence do you feel a bodily sense of peace?

4. (For religious patients) Do you feel the presence of God?  How?  What does God know about your experience that other people may not understand?

Hope Versus Despair

1.  From what sources do you draw hope?

2.  On difficult days, what keeps you from giving up?

3.  Who have you known in your life who would not be surprised to see you stay hopeful amid adversity?  What did this person know about you that other people may not have known?

Purpose Versus Meaninglessness

1.  What keeps you going on difficult days?

2.  For whom, or for what, does it matter that you continue to live?

3. (For terminally ill patients) What do you hope to contribute in the time you have remaining?

4. (For religious patients) What does God hope you will do with your life in days to come?

Agency Versus Helplessness

1.  What is your prioritized list of concerns?  What concerns you most?  What next most?

2.  What most helps you to stand strong against the challenges of this illness?

3.  What should I know about you as a person that lies beyond your illness?

4.  How have you kept this illness from taking charge of your entire life?

Courage Versus Cowardice

1.  Have there been moments when you felt tempted to give up but didn’t?  How did you make a decision to persevere?

2.  If you see someone else taking such a step even though feeling afraid, would you consider that an act of courage?  (If so) Can you imagine viewing yourself as a courageous person?  Is that a description of yourself that you would desire?

3.  Can you imagine that others who witness how you cope with this illness might describe you as a courageous person?

Gratitude Versus Resentment

1.  For what are you most deeply grateful?

2.  Are there moments when you can still feel joy despite the sorrow you have been through?

3.  If you could look back on this illness from some future time, what would you say that you took from the experience that added to your life?

Griffith, J. L. and L. Gaby (2010). “Brief Psychotherapy at the Bedside: Countering Demoralization From Medical Illness.” Focus 8(1): 143-150.

There are a couple of resources I routinely used as a psychiatric consultant in the general hospital. One of them was the general outline of how to recognize and help someone who is demoralized (above). Another was a free online (non-AI) cognitive behavioral therapy resource that is still available called The MoodGym.

These are not the same thing as Artificial Intelligence (AI), which I think in some cases might be the wrong way to help someone with depression and anxiety that is more reactive to situational and medical stressors. AI can also be harmful to some people.

I have seen the brief psychotherapy guide above published and referenced in different articles on the web, one of them published as recently as 2025. Griffith and Gaby first published the guide to help those who are demoralized in 2005. It’s been around for 20 years and in my opinion is better than AI will ever be.

References:

Griffith, J. L. and L. Gaby (2005). “Brief Psychotherapy at the Bedside: Countering Demoralization From Medical Illness.” Psychosomatics 46(2): 109-116. ISSN 0033-3182. https://www.sciencedirect.com/science/article/pii/S0033318205701006

Abstract: Bedside psychotherapy with medically ill patients can help counter their demoralization, which is the despair, helplessness, and sense of isolation that many patients experience when affected by illness and its treatments. Demoralization can be usefully regarded as the compilation of different existential postures that position a patient to withdraw from the challenges of illness. A fruitful interviewing strategy is to discern which existential themes are of most concern, then to tailor questions and interventions to address those specific themes. Illustrative cases show how such focused interviewing can help patients cope assertively by mobilizing existential postures of resilience, such as hope, agency, and communion with others.

https://psychiatryonline.org/doi/full/10.1176/foc.8.1.foc143

Smith, A. C., J. S. Gerkin, D. M. Robinson and E. G. Holmes (2026). “Consultation-Liaison Case Conference: Management of Demoralization in the Medical Setting.” Journal of the Academy of Consultation-Liaison Psychiatry 67(1): 71-78. ISSN 2667-2960. https://www.sciencedirect.com/science/article/pii/S2667296025005087

Abstract: Demoralization has important implications for patients’ health, but consultation-liaison psychiatrists may be less familiar with diagnosis and management due to limited inclusion in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision. We present the case of a multivisceral transplant patient who experienced demoralization due to complications from her posttransplant course. We discuss the diagnosis of demoralization, including differential diagnoses to consider, followed by a discussion of management of demoralization in the inpatient setting using acceptance and commitment therapy. We then discuss the consultation-liaison psychiatrist’s role in assisting with management of teams’ counter-transference to difficult patient cases, including the possibility of teams experiencing their own demoralization.

Keywords: demoralization; transplantation; transplant psychiatry; acceptance and commitment therapy; consultation-liaison psychiatry

Replace Sycophantic AI with Marvin the Paranoid Android?

I found the perfect JAMA article explaining that sycophancy is programmed into Artificial Intelligence (AI) therapy chatbots.

This reminded me of Marvin the paranoid android (“Life! Don’t talk to me about life!”) in the book series “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams. Marvin is an incredibly depressed robot who would never make a good psychotherapist.

There’s even a Facebook page where someone posts questions to ChatGPT about non-inspirational quotes from Marvin.

If programmers can make sycophantic AI therapists, there should be a way to make them less sycophantic. For more specific information, you can check out this relatively recent article published in Psychiatric Times by Allen Frances, MD and Justin Angel, a tech expert. I would probably substitute the term “confabulation” for “hallucinations” in most places where you read the latter.

Confabulation in Artificial Intelligence

I saw an excellent post on Artificial Intelligence (AI) by Dr. George Dawson today, “The Problems With AI Are More Readily Apparent.” He mentions AI producing confabulations, which are sometimes called hallucinations in AI. I distinguish hallucinations from confabulations because the former are sensory or perceptual problems while confabulations are making things up. In dementia, confabulations are often described as attempts to fill in gaps in knowledge or memory without any intention to deceive.

I sometimes read that AI is being sycophantic in its interactions with humans, which sounds a little strange to me. The definition of sycophancy is more about trying to please someone else by being obsequiously flattering. It’s puzzling: I have gotten that impression from AI, yet it’s a machine, and sycophancy is a human trait I don’t know how anyone would program into it.

On the other hand, I got a comically sycophantic reply from AI to my question inquiring about the definition of Calvinball Cribbage. It’s just a made-up term and, to be fair, I’ve also gotten bluntly frank denials from AI that there is any such thing as Calvinball Cribbage.

As a reminder, Calvinball is a term that arose from a comic strip called Calvin and Hobbes. In the strip, the characters invent a game called Calvinball and the only rule is making up the rules as you go along, although there is a goal of capturing your opponent’s flag. But you can’t play it the same way twice.

The most recent hilarious AI confabulation, which I’ve previously posted about, came when Sena and I played our game of 11-card cribbage. There are cribbage players out there who make fun of such variations from standard 6-card cribbage, and the name one such player used for them was “Calvinball crib.” My simple question, as usual, was “What is Calvinball Cribbage?” The reply is copied below:

“Calvinball Cribbage is a humorous mashup of the chaotic, rule-free game from Calvin & Hobbes with traditional Cribbage, often involving themed boards or rule modifications, where the core idea is applying Calvinball’s fluid, make-it-up-as-you-go spirit to Cribbage’s scoring and play, adding wild, non-sensical rules like using mallets, changing hand values, or scoring points for silly actions, making each round unique and unpredictable, not a single defined game but a creative approach to Cribbage. 

What it is (The Calvinball Aspect):

  • No Fixed Rules: Just like original Calvinball, there’s no single set of rules; the players invent them as they go.
  • Rule Changes: A key rule is that rules can’t be repeated, and players constantly invent new scoring, actions, or penalties.
  • Props & Actions: You might use mallets, balls, trees, or make up actions like “scoring three points for pointing at a cloud” or “losing a turn for looking left”. 

How it Applies to Cribbage:

  • Themed Boards: You can find or make custom cribbage boards featuring Calvin & Hobbes, adding visual flair.
  • Modified Scoring: Instead of just points for pairs and runs, you might add a “Hobbes Bonus” for a specific card or penalize for “too much logic”.
  • Strategic Chaos: Players still play cribbage (cards, pegs, crib), but the gameplay is constantly disrupted by invented rules, turning strategic plays into potential jokes or absurd victories. 

Example Calvinball Cribbage “Rules”:

  • “If you play a 7, you must hop on one foot until your next turn”.
  • “The crib gets 5 points if it contains a card with a face on it” (referencing Calvin’s drawings).
  • “A ‘skunk’ is achieved by making your opponent laugh out loud”. 

In essence, Calvinball Cribbage is about injecting unpredictable fun and silliness into the structured world of cribbage, making it a game of spontaneous creativity rather than strict strategy.”

This could be an example of sycophantic confabulation because almost all of it was probably cooked up by AI from scanning my blog posts about the subject. What’s a little eerie about it is that I never published any specific rules or strategies about the fictional game of Calvinball crib. You can actually find and purchase a Calvin and Hobbes cribbage board from a shop in Canada.

While this is an example of the occasional comicality of AI, that shouldn’t lead us to deny that it can do scary things, some of which Dr. Dawson describes. My early experiences a couple of years ago with what used to be called Google Bard were somewhat alarming because it confabulated then. And it still does.

Connections Between Psychiatry, Artificiality and Blues Music?

I heard a song on the KCCK Big Mo Blues Show that I first heard in June of 2025. The song is “Artificial” by Walter Trout.

At first blush, I agree with what I think is the point of the song, which is basically a protest against artificiality which could manifest in a range of ways from superficiality and dishonesty in communications, attitudes, style of clothing, relationships, and all the way to Artificial Intelligence (AI).

The other connection I make is to the artist himself. Walter Trout developed Hepatitis C (eventually leading to liver transplant) according to a Wikipedia article which connected his lifestyle to contracting the disease. In my role as a consultation-liaison psychiatrist, I saw many patients with Hepatitis C who were referred to psychiatry from gastroenterology.

I was the main psychiatrist who evaluated them for treatment with Interferon-alpha. At the time it was the only treatment for Hepatitis C and was frequently associated with many side effects including depression. I was also one of the psychiatrists consulted as part of liver transplant evaluations.

Trout got very sick from Hepatitis C and made a remarkable (even miraculous) recovery after his liver transplant. Interferon is no longer used to treat Hepatitis C. It has been replaced by direct-acting antiviral (DAA) agents. They’re much better-tolerated and more effective.

The other aspect relevant to Trout’s song is ironic. The newest scientific literature supports the idea that AI can be helpful for diagnosing Hepatitis C, predicting its progression and response to treatment.

That doesn’t mean I’m completely sold on AI.

Aside from that, there’s interesting research suggesting that there may be a link between schizophrenia and bipolar disorder and Hepatitis C infection (which could be hiding deep in the brain’s choroid plexus lining the cerebral ventricles). In other words, some people might have mental illness because of the liver disease itself.

If you think about the dictionary definition of the word “artificial,” you can hardly dismiss this kind of research as insincere.

When it Comes to AI, What Are We Really Talking About?

I’ve been reading about artificial intelligence (AI) in general and its healthcare applications. I tried searching the web in general about it and got the message: “An AI Overview is not available for this search.”

I’m ambivalent about that message. There are a couple of web articles, one of which I read twice in its entirety, “Are we living in a golden age of stupidity?” The other, “AI, Health, and Health Care Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence,” was so long and diffuse I got impatient and tried to skip to the bottom line—but the article was a bottomless pit. The conflict-of-interest disclosures section was overwhelmingly massive. Was that part of the reason I felt like I had fallen down the rabbit hole?

I recently signed an addendum to my book contract for my consult psychiatry handbook (published in 2010, for heaven’s sake), which I hope will ultimately protect the work from AI plagiarism. I have no idea whether it can. I delayed signing it for months, probably because I didn’t want to have anything to do with AI at all. I couldn’t discuss the contract addendum with my co-editor, Robert G. Robinson, MD, because he died on December 25, 2024.

I found out today the book is old enough to find on the Internet Archive as of a couple of years ago. One notice about it says “Borrow Unavailable” and another notice says “Book available to patrons with print disabilities.”

All I know is that an “archivist” uploaded it. The introduction and first chapter, “The consultation process,” are available for free online in PDF format. I didn’t know that until today either.

Way back in 2010 we didn’t use anything you could call AI when we wrote the chapters for the book. I didn’t even dictate my chapters, because the only thing available would have been voice dictation software called Dragon NaturallySpeaking. It was notorious for inserting so many errors when transcribing my dictated clinic notes that some clinicians added an addendum warning the reader that the notes were transcribed using voice dictation software—implying the author was less than fully responsible for the contents. That was because the mistakes often appeared after we signed off on the notes as finished, which sent them to the patient’s medical record.

Sometimes I think that was the forerunner of the confabulations of modern-day AI, which are often called hallucinations.

Now AI is creating the clinic notes. It cuts down on the pajama time contributing to clinician burnout although it’s not always clear who’s ultimately responsible for quality control. Who’s in charge of regulatory oversight of AI? What are we talking about?

The Wild West Sandbox of AI Enhancement in Psychiatry!

I always find Dr. Moffic’s articles in Psychiatric Times thought-provoking, and his latest essay, “Enhancement Psychiatry,” is fascinating, especially the part about Artificial Intelligence (AI). I also liked the link to the video of Dr. John Luo’s take on AI in psychiatry.

I have my own concerns about AI and dabbled with “talking” to it a couple of times. I still try to avoid it when I’m searching the web but it seems to creep in no matter how hard I try. I can’t unsee it now.

I think of AI enhancing psychiatry in terms of whether it can cut down on hassles like “pajama time,” that is, taking our work home with us to finish clinic notes and the like. When AI is packaged as a scribe only, I’m a little more comfortable with it, although I would get nervous if it listened to a conversation between me and a patient.

That’s because AI gets a lot of things wrong as a scribe. In that sense, it’s a lot like other software I’ve used as an aid to creating clinic notes. I made fun of it a couple of years ago in a blog post “The Dragon Breathes Fire Again.”

I get even more nervous when I read the news stories about AI spinning out delusions and blithely blurting misinformation. It can lie, cheat, and hustle you, although a lot of this behavior is discovered in digital experimental environments called “sandboxes,” which we hope can keep the mayhem contained.

That made me very eager to learn a little more about Yoshua Bengio’s LawZero and his plan to create the AI Scientist to counter what seems to be a developing career-criminal type of AI in the wild west of computer wizardry. The LawZero name nods to the “Zeroth Law” of robotics from Isaac Asimov, who wrote the book “I, Robot,” which inspired the film of the same title in 2004.

However, as I read it, I had an emotional reaction akin to suspicion. Bengio sounds almost too good to be true. A broader web search turned up a 2009 essay by a guy I’ve never heard of named Peter W. Singer. It’s titled “Isaac Asimov’s Laws of Robotics Are Wrong.” I tried to pin down who he is by searching the web and the AI helper was noticeably absent. I couldn’t find out much about him that explained the level of energy in what he wrote.

Singer’s essay was published on the Brookings Institution website and I couldn’t really tell what political side of the fence that organization is on—not that I’m planning to take sides. His aim was to debunk the Laws of Robotics and I got about the same feeling from his essay as I got from Bengio’s.

Maybe I need a little more education about this whole AI enhancement issue. I wonder whether Bengio and Singer could hold a public debate about it? Maybe they would need a kind of sandbox for the event?

My Mt. Rushmore Dream

Lately, I’ve been anticipating my eventual immortalization as a sculptured stone bust on Mt. Rushmore. Hopefully, this will be fairly soon because I’m not getting any younger.

Among my many inventions is the internet. Don’t believe Al Gore, although he has persuaded others about his role in the development of what I argue should properly be called the world wide web. I’ve invented a lot of other things which I’ll tell you more about just as soon as I make them up.

Before I forget it, I want to tell you what I just noticed last night while I was watching one of my favorite X-Files episodes, “War of the Coprophages.” I guess I never noticed that the cockroach invasion was about Artificial Intelligence (AI). It was the scientist, Dr. Ivanov, who mentioned it first, and I just missed it the first few hundred times I saw the show.

Dr. Ivanov clearly thought that anybody who thought extraterrestrials would be green and have big eyes was probably crazy. Traveling across galaxies through wormholes and whatnot would tear humanoid organisms apart. The practical approach would be to send AI robots instead. You could see Mulder cringe at that idea. The little robot that kept edging closer to Mulder made him nervous and when he asked Dr. Ivanov why it did that, his reply was “Because it likes you.”

That doesn’t exactly fit with Ivanov’s other idea about extraterrestrials, which is that they would focus on important tasks like getting enough food, procreating, etc. without getting all emotional about them. Ironic that Dr. Ivanov made an AI robot that gets a crush on a sesame seed munching UFO hunter like Mulder.

However, the AI robots in the show are cockroaches which love to eat dung. In other words, they’re full of crap.

Moving right along, although I didn’t invent it, there’s a card game called schnapsen that Sena and I are trying to relearn. It’s kind of a break from cribbage. It’s a trick-taking game with just a 20-card deck. We play the version that doesn’t allow you to look back through the tricks you’ve won to count your points, so you have to remember how many points you’ve taken in order to tell when you can close the deck or go out, meaning you’ve reached the 66 points needed to win. I think it’s a good way to keep your memory sharp.
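For anyone keeping score at home, the mental arithmetic in schnapsen is small but relentless: card points are ace = 11, ten = 10, king = 4, queen = 3, jack = 2, so the 20-card deck holds 120 trick points in all, and the race is to 66. A tiny sketch of the running tally we have to keep in our heads (the marriage bonuses of 20 and 40 points are extra and not modeled here):

```python
TRICK_POINTS = {'A': 11, '10': 10, 'K': 4, 'Q': 3, 'J': 2}

# The 20-card deck: five ranks in each of four suits.
deck_total = 4 * sum(TRICK_POINTS.values())
print(deck_total)  # 120

def tally(tricks_won):
    """Sum the points in the tricks a player has taken so far.
    tricks_won is a list of 2-card tricks, each card a rank string."""
    return sum(TRICK_POINTS[rank] for trick in tricks_won for rank in trick)

# e.g., winning A+10 in one trick and K+J in another: 11+10+4+2 = 27,
# still well short of the 66 needed to go out.
print(tally([('A', '10'), ('K', 'J')]))  # 27
```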

Let’s see; I’ve lost every game so far, but that doesn’t mean I won’t end up with my bust on Mt. Rushmore.