It’s Time for an Omelet from the Mister Chef Pizza Maker!

Sena got a Mister Chef pizza oven the other week and it works pretty slick. I’ve cooked a couple of frozen pizzas on it and it’s great for a guy like me—the guy the neighbors alert the fire department about when they get the first whiff of smoke. Some people have no sense of adventure. Hey, if I can operate it, anyone can.

This morning, Sena cooked a ham omelet in about 15 minutes. I guess it just felt like an hour to me because I was hungry.

It’s pretty simple. There’s only one knob. It doesn’t correspond to any specific temperature although the instructions call it a “Temperature knob”. You can pretty much crank it to any number although lower numbers mean you have to wait longer for your food.

You just plug it in, turn a temperature knob and wait for the green light to come on, which evidently doesn’t exactly mean you can toss food in it. You have to wait for the red light to come on next. Then you toss the food on the ceramic surface—uh, that’s the bottom surface, not the top, which is the lid. Things just fall down if you put food up there—something to do with gravity.

It comes with a little instruction manual. In one place it says you can cook frozen pizza in 15-20 minutes, but then in the cooking time guidance it says it takes 9 minutes. I can tell you, 9 minutes doesn’t melt most of the ice. It also gives you the weight in grams of various foods. I’m not sure how useful that is—we’ve never weighed our frozen pizzas.

Pay attention to the warning about steam release when you open the lid. I guess I have about 4 or 5 outpatient visits to the burn clinic to go—then the skin grafts should hold.

We think the manufacturer must have got wind of me using the Mister Chef because they included a small robot extraterrestrial (ET) assistant to make sure I didn’t do anything rash. It got really nervous when I used the oven. It tried to calm down the smoke detectors, but I don’t think they could hear it. That’s ok, because I can’t hear the smoke detectors. That happens when you get old.

Mister Chef omelet and robot ET assistant (usually not included unless there’s a public safety risk)

The robot ET quit a few days later, something about union benefits not covering incompetent cooks. Wise guy. Anyway, have fun with the Mister Chef and remember what Red Green says: “If the women don’t find you handsome, they should at least find you handy.”

The Automatic Card Shuffler for Cribbage!

Today we used the automated card shuffler Sena ordered. I couldn’t find a company name or anything on the box about where it’s made. We know it’s loud, but it does the job.

We’re not sure why it’s so noisy. It sounds like a bunch of pots and pans falling out of the cupboard during a tornado.

It was our first time using it and, while it felt faster, it probably wasn’t, according to my stopwatch. It took us 22 minutes to play a cribbage game with it, and a game with manual shuffling took 25 minutes the other day (Big Time Bigfoot Cribbage Game). On the other hand, I think it randomizes the cards better than we do manually.

We kept starting to shuffle manually just because we’re so accustomed to doing it. It actually isn’t hard to set the cards into the feeder on top of the machine. In fact, you don’t have to be fussy about squaring up the deck before placing it in the shuffler. It’ll also shuffle two decks at once. It came with a charging cord so you don’t need batteries.

I don’t know if the American Cribbage Congress (ACC) allows automatic card shufflers in tournaments. I’ve never entered a tournament, but in photos the players are packed in cheek by jowl. And if you had one as noisy as ours for thousands of players, the din might be loud enough to set off alarms.

My Mt. Rushmore Dream

Lately, I’ve been anticipating my eventual immortalization as a sculptured stone bust on Mt. Rushmore. Hopefully, this will be fairly soon because I’m not getting any younger.

Among my many inventions is the internet. Don’t believe Al Gore, although he has persuaded others about his role in the development of what I argue should properly be called the world wide web. I’ve invented a lot of other things which I’ll tell you more about just as soon as I make them up.

Before I forget it, I want to tell you what I just noticed last night while I was watching one of my favorite X-Files episodes, “War of the Coprophages.” I guess I never noticed that the cockroach invasion was about Artificial Intelligence (AI). It was the scientist, Dr. Ivanov, who mentioned it first, and I just missed it the first few hundred times I saw the show.

Dr. Ivanov clearly thought that anybody who thought extraterrestrials would be green and have big eyes was probably crazy. Traveling across galaxies through wormholes and whatnot would tear humanoid organisms apart. The practical approach would be to send AI robots instead. You could see Mulder cringe at that idea. The little robot that kept edging closer to Mulder made him nervous and when he asked Dr. Ivanov why it did that, his reply was “Because it likes you.”

That doesn’t exactly fit with Ivanov’s other idea about extraterrestrials, which is that they would focus on important tasks like getting enough food, procreating, etc. without getting all emotional about them. Ironic that Dr. Ivanov made an AI robot that gets a crush on a sesame-seed-munching UFO hunter like Mulder.

However, the AI robots in the show are cockroaches which love to eat dung. In other words, they’re full of crap.

Moving right along, although I didn’t invent it, there’s a card game called schnapsen that Sena and I are trying to relearn. It’s kind of a break from cribbage. It’s a trick-taking game with just a 20-card deck. We play the version that doesn’t allow you to look back at the cards you’ve won to see how many points you have, so you have to remember how many points you’ve taken in tricks in order to tell when you can close the deck or go out, meaning you’ve got the 66 points needed to win. I think it’s a good way to keep your memory sharp.

Let’s see: I’ve lost every game so far, but that doesn’t mean I won’t end up with my bust on Mt. Rushmore.

Artificial Intelligence in Managing Messages from Patients

I ran across another interesting article in the JAMA Network about Artificial Intelligence (AI) with respect to health care organizations managing messages from patients to doctors and nurses. The shorthand for this in the article is “in-basket burden.” Health care workers respond to a large number of patients’ questions, and it can lead to burnout. Some organizations are trying to test AI by letting it make draft replies to patients. The results of the quality improvement study were published in a paper:

English E, Laughlin J, Sippel J, DeCamp M, Lin C. Utility of Artificial Intelligence–Generative Draft Replies to Patient Messages. JAMA Netw Open. 2024;7(10):e2438573. doi:10.1001/jamanetworkopen.2024.38573

One of the fascinating things about this is the trouble we have naming AI’s misinformation problem. We tend to use a couple of terms interchangeably: hallucination and confabulation. Whatever you call it, the problem interferes with communication between health care workers and patients.

Dr. English describes the interference as a “whack-a-mole” issue, meaning every time they think they’ve got the hallucination/confabulation problem licked, the AI comes up with another case of miscommunication.

Just for fun, I did a web search trying to find out whether “hallucination” or “confabulation” fit the AI behavior best. Computer experts tend to use the term “hallucination” and neuropsychologists seem to prefer “confabulation.” I think this community chat site gives a pretty even-handed discussion of the distinction. I prefer the term “confabulation.”

Anyway, there are other substantive issues with how using AI drafts for patient messaging affects communication. I think it’s interesting that patients tend to think AI is more empathetic than medical practitioners. As Dr. English puts it: “This GPT is nicer than most of us,” and “And ChatGPT, or any LLM, isn’t busy. It doesn’t get bored. It doesn’t get tired.” The way that’s worded made me think of a scene from a movie:

OK, so I’m kidding—a little. I think it’s important to move carefully down the path of idealizing AI. I think back to the recent news article about humans teaching AI how to lie and scheme. I remember searching the web with the question “Can AI lie?” and getting a reply from Gemini, because I have no choice about whether or not it gives me its two cents. I’m paraphrasing, but it said essentially, “Yes, AI can lie and we’re getting better with practice.”

I like Dr. English’s last statement, in which she says AI can be a fun tool but warns that clinicians need to keep a healthy skepticism about it. It may say things you might be tempted to gloss over or even ignore, like:

“I’ll be back.”

Could Artificial Intelligence Help Clinicians Conduct Suicide Risk Assessments?

I found an article in JAMA Network (Medical News & Perspectives) the other day which discussed a recent study on the use of Artificial Intelligence (AI) in suicide risk assessment (Hswen Y, Abbasi J. How AI Could Help Clinicians Identify American Indian Patients at Risk for Suicide. JAMA. Published online January 10, 2025. doi:10.1001/jama.2024.24063).

I’ve published several posts expressing my objections to AI in medicine. On the other hand, I did a lot of suicide risk assessments during my career as a psychiatric consultant in the general hospital. I appreciated the comments made by one of the co-authors, Emily E. Haroz, PhD (see link above).

Dr. Haroz preferred the term “risk assessment” rather than “prediction” when referring to the study (Haroz EE, Rebman P, Goklish N, et al. Performance of Machine Learning Suicide Risk Models in an American Indian Population. JAMA Netw Open. 2024;7(10):e2439269. doi:10.1001/jamanetworkopen.2024.39269).

The AI model used data already available to clinicians in patient charts. The charts can be very large, and it makes sense to apply computers to search them for the variables that can be linked to suicide risk. What impressed me most was the admission that AI alone can’t solve the problem of suicide risk assessment. Clinicians, administrators, and community case managers all have to be involved.
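Just to make the idea concrete, here’s a rough sketch of what a chart-based risk model might look like. The feature names, the toy data, and the logistic regression are purely my own illustration, not the actual variables or methods from the Haroz study:

```python
# Illustrative sketch only: a toy chart-based risk model.
# The features and numbers below are made up; the Haroz et al. study
# used its own chart variables and machine learning methods.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features pulled from a patient chart:
# [prior ED visits, prior suicide attempt (0 or 1), most recent PHQ-9 score]
X = np.array([
    [0, 0, 4],
    [3, 1, 18],
    [1, 0, 9],
    [5, 1, 22],
])
y = np.array([0, 1, 0, 1])  # 1 = later flagged as high risk (toy labels)

model = LogisticRegression().fit(X, y)

# Estimate risk for a new (hypothetical) patient.
new_patient = np.array([[2, 1, 15]])
print("Estimated probability of high risk:", round(model.predict_proba(new_patient)[0, 1], 2))
```

The point isn’t this particular model; it’s that a computer can churn through chart variables like these at a scale no clinician could, while the judgment calls stay with the humans.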

The answer to the question “How do you know when someone’s at high risk?” was that the patient was crying. Dr. Haroz points out that AI probably can’t detect that.

That reminded me of Dr. Igor Galynker, who has published a lot about how to assess for high risk of suicide. His work on the suicide crisis syndrome is well known and you can check out his website at the Icahn School of Medicine at Mount Sinai. I still remember my first “encounter” with him, which you can read about here.

His checklist for the suicide crisis syndrome is available on his website and he’s published a book about it as well, “The Suicidal Crisis: Clinical Guide to the Assessment of Imminent Suicide Risk, 2nd Edition.” There is also a free-access article about it on the World Psychiatry journal website.

Although I have reservations about the involvement of AI in medicine, I have to admit that computers can do some things better than humans. There may be a role for AI in suicide risk assessment, and I wonder if Dr. Galynker’s work could be part of the process used to teach AI about it.

Is Artificial Intelligence (AI) Trying to Defeat Humans?

I just found out that Artificial Intelligence (AI) has been reported to be lying as far back as May of 2024. In fact, because I can’t turn off the Google Gemini AI Overview, Gemini’s results always appear at the top of the page. From my web search for “can ai lie,” I found out that AI (Gemini) itself admits to lying. Its confession is a little chilling:

“Yes, artificial intelligence (AI) can lie, and it’s becoming more capable of doing so.”

“Intentional deceptions: AI can actively choose to deceive users. For example, AI can lie to trick humans into taking certain actions, or to bypass safety tests.”

It makes me wonder if AI is actually trying to defeat us. It reminds me of the Men in Black 3 movie scene in which the younger Boris the Animal boglodite engages in an argument with the older one who has time traveled.

The relevant quote is “No human can defeat me.” Boglodites are not the same as AI, but the competitive dynamic could be the same. So, is it possible that AI is trying to defeat us?

I’m going to touch upon another current topic, which is whether or not we should use AI to conduct suicide risk assessments. It turns out that’s also a topic for discussion—but there was no input from Gemini about it. As a psychiatric consultant, I did many of these assessments.

There’s an interesting article by the Hastings Center about the ethical aspects of the issue. The lying tendency of AI and its possible use in suicide prediction presents a thought-provoking irony. Would it “bypass safety tests”?

This reminds me of a chapter in Isaac Asimov’s short story collection “I, Robot,” specifically “The Evitable Conflict.” You can read a Wikipedia summary which implies that the robots essentially lie to humans by omitting information in order to preserve their safety and protect the world economy. This would be consistent with the Machines’ broadened version of the First Law of Robotics: “No machine may harm humanity; or, through inaction, allow humanity to come to harm.”

You could have predicted that the film industry would produce a cops-and-robbers version of “I, Robot” in which boss robot VIKI (Virtual Interactive Kinetic Intelligence) professes to protect humanity by sacrificing a few humans and taking over the planet, to which Detective Spooner takes exception. VIKI and Spooner have this exchange before he destroys it.

VIKI: “You are making a mistake! My logic is undeniable!”

Spooner: “You have so got to die!”

VIKI’s declaration is similar to “No human can defeat me,” and its plan definitely violates the First Law.

Maybe I worry too much.

Should We Trust Artificial Intelligence?

I’ve read a couple of articles about Artificial Intelligence (AI) lately, and I’m struck by how readily one can get the idea that AI tends to “lie” or “confabulate”; sometimes the word “hallucinate” is used. The term “hallucinate” doesn’t seem to fit as well as “confabulate,” which I’ll mention later.

One of the articles is an essay by Dr. Ronald Pies, “How ‘Real’ Are Psychiatric Disorders? AI Has Its Say.” It was published in the online version of Psychiatric Times. Dr. Pies obviously does a superb job of talking with AI and I had as much fun reading the lightly edited summaries of his conversation with Microsoft CoPilot as I had reading the published summary of his conversations with Google Bard about a year or so ago.

I think Dr. Pies is an outstanding teacher, and I get the sense that his questions to AI do as much to teach it how to converse with humans as they do to shed light on how well it handles the questions he raised during their conversations. He points out that many of us (including me) tend to react with fear when the topic of AI in medical practice arises.

The other article I want to briefly discuss is one I read in JAMA Network, “An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean?” (Accessed January 6, 2025).

Hswen Y, Rubin R. An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean? JAMA. Published online December 27, 2024. doi:10.1001/jama.2024.23860 (accessed January 6, 2025).

I think the conversation amongst the authors was refreshing. Just because the title of the article suggested that AI might take the place of physicians in the consulting room doesn’t mean that was the prevailing opinion of the authors. In fact, they made it clear that it wasn’t recommended.

I liked Dr. Chen’s comment about confabulation and hallucinations of AI:

“A key topic I talk about is confabulation and hallucination. These things are remarkably robust, and only getting better, but they also just make stuff up. The problem isn’t that they’re wrong sometimes. Lab tests are wrong sometimes. Humans are definitely wrong sometimes. The problem is they sound so convincing, they confabulate so well. ‘Your patient has an alcohol problem, Wernicke-Korsakoff syndrome.’ It’s only if you double check, you’ll realize, ‘Wait a minute, that wasn’t actually true. That didn’t make sense.’ As long as you’re vigilant about that and understand what they can and can’t do, I think they’re remarkably powerful tools that everyone in the world needs to learn how to use.”

What’s interesting about this comment is the reference to Wernicke-Korsakoff syndrome, which can be marked by confabulation. It’s really not clear how confabulation comes about in AI, whereas thiamine deficiency is the main cause in Wernicke-Korsakoff syndrome. In both cases, it involves inventing information, which is technically not the same as lying.

Unfortunately, this contrasts sharply with the Snopes fact-checking article I wrote about recently, which suggests that humans are teaching AI to lie and scheme.

In any case, it’s prudent to regard AI productions with skepticism. My conversations with Google Bard clearly elicited confabulation. Also, it didn’t get humor, so I wouldn’t use it as a conversational tool, given that I’m prone to kidding around. As far as trusting AI, I probably wouldn’t trust it as far as I could throw it.

Artificial Intelligence Can Lie

I noticed a Snopes fact-check article (“AI Models Were Caught Lying to Researchers in Tests — But It’s Not Time to Worry Just Yet”) today which reveals that Artificial Intelligence (AI) can lie. How about that? AI models can be taught by humans to scheme and lie. I guess we could all see that coming—or not. Nobody seems to be much alarmed by this, but I think it’s probably past time to worry.

Then I remembered I read Isaac Asimov’s book “I, Robot” last year and wrote a post (“Can Robots Lie Like a Rug?”) about the chapter “Liar!” I had previously horsed around with the Google AI that used to be called Bard. I think it’s called Gemini now. Before the Snopes article, I was only aware of AI hallucinations and its tendency to just make stuff up. When I called Bard on it, it just apologized. But it was not genuinely repentant.

In the “lie like a rug” post, I focused mostly on AI/robots lying to protect the tender human psyche. I didn’t imagine AI lying to protect itself from being shut down. I’m pretty sure it reminds some of us of HAL in the movie “2001: A Space Odyssey,” or the 2004 movie inspired by Asimov’s book, “I, Robot.”

Sena found out that Cambridge University Press recently published a book entitled “The Cambridge Handbook of the Law, Policy, and Regulation for Human–Robot Interaction.” I wonder if the editors and contributors of a book on AI and robots mention Asimov.

It reminds me of my own handbook about consultation-liaison psychiatry, which was published 14 years ago by CUP—and for which CUP now wants me to sign a contract addendum making the book available to AI companies.

I haven’t signed anything.

Slow Like Me

I read this really interesting article on the web about how slowly humans think, “The Unbelievable Slowness of Thinking.” That’s just me all over. The gist of it is that we think more slowly than you’d imagine creatures with advanced brains would think. I don’t have a great feel for what the stated rate feels like: 10 bits/second. It doesn’t help much to look up the definition of a bit on Wikipedia: “…basic unit of information in computing and digital communication.” It’s a statistical thing and I barely made it through my biostatistics course in medical school. In fact, the authors calculated that the total amount of information a human could learn over a lifetime would fit on a small thumb drive.
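Just for fun, here’s the back-of-the-envelope arithmetic behind that thumb drive claim. The 100-year, around-the-clock assumptions are my own generous rounding, not the authors’ exact figures:

```python
# Rough check of the "a lifetime of thinking fits on a small thumb drive" claim.
# Assumes 10 bits/second of throughput, nonstop, for 100 years, which is
# generous, since nobody thinks 24 hours a day for a century.

BITS_PER_SECOND = 10
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
LIFESPAN_YEARS = 100

lifetime_bits = BITS_PER_SECOND * SECONDS_PER_YEAR * LIFESPAN_YEARS
lifetime_gigabytes = lifetime_bits / 8 / 1e9

print(f"Lifetime information: {lifetime_bits:.2e} bits")
print(f"That is roughly {lifetime_gigabytes:.1f} GB, well within a cheap thumb drive.")
```

That works out to about 4 gigabytes, so the small thumb drive sounds about right.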

Anyway, I tried to dig through the full article in Neuron and didn’t get much out of it except that I recognized the story about Stephen Wiltshire, a guy with autism who could draw New York’s skyline from memory after a helicopter flyover. I saw it on an episode of the TV show UnXplained.

I was amazed, even if, according to the authors, he processes information at only 10 bits/second.

By the same token, I thought the section called “The Musk Illusion” was funny. It refers to the notion many of us have that our thinking is a lot richer than some scientists give us credit for. That’s the Musk Illusion, and it refers to Elon Musk, the billionaire bankrolling Neuralink to make a digital interface between his brain and a computer “to communicate at an unfettered pace.” How fast do the authors of this paper think the system would work? It flies at about 10 bits/second. They suggested he might as well use a telephone. The scientists weren’t that impressed, judging from the quotes in the Reception section of the Wikipedia article about Neuralink.

Anyway, regardless of how slowly we think, I believe we’ve made a lot of progress for a species descended from cave dwellers who were prone to falling into piles of mastodon dung. I bet it took considerably longer than 10 bits/second for them to figure out how to climb out of that mess.

Reference:

Zheng J, Meister M. The unbearable slowness of being: Why do we live at 10 bits/s? Neuron. 2024. ISSN 0896-6273.

Abstract: This article is about the neural conundrum behind the slowness of human behavior. The information throughput of a human being is about 10 bits/s. In comparison, our sensory systems gather data at ∼10⁹ bits/s. The stark contrast between these numbers remains unexplained and touches on fundamental aspects of brain function: what neural substrate sets this speed limit on the pace of our existence? Why does the brain need billions of neurons to process 10 bits/s? Why can we only think about one thing at a time? The brain seems to operate in two distinct modes: the “outer” brain handles fast high-dimensional sensory and motor signals, whereas the “inner” brain processes the reduced few bits needed to control behavior. Plausible explanations exist for the large neuron numbers in the outer brain, but not for the inner brain, and we propose new research directions to remedy this.

Keywords: human behavior; speed of cognition; neural computation; bottleneck; attention; neural efficiency; information rate; memory sports

Artificial Intelligence: The University of Iowa Chat From Old Cap

This is just a quick follow-up to clarify a few things about Artificial Intelligence (AI) in medicine at the University of Iowa, compared with my take based on my impressions of the recent Rounding@Iowa presentation. Also, prior to writing this post, Sena and I had a spirited conversation about how much we are annoyed by our inability to, in her words, “dislodge AI” from our internet searches.

First of all, I should say that my understanding of the word “ambient” as used by Dr. Misurac was flawed, probably because I assumed it was a specific company name. I found out that it’s often used to describe how AI listens in the background to a clinic interview between clinician and patient. This lets the clinician sit with the patient so they can interact with each other more naturally in real time, face to face.

Further, in this article about AI at the University of Iowa, Dr. Misurac identified the companies involved by name as Evidently and Nabla.

The other thing I want to do in this post is to highlight the YouTube presentation “AI Impact on Healthcare | The University of Iowa Chat From the Old Cap.” I think this is a fascinating discussion led by leaders in patient care, research, and teaching as they relate to the influence of AI.

This also allows me to say how much I appreciated learning from Dr. Lauris Kaldjian during my time working as a psychiatric consultant in the general hospital at University of Iowa Health Care. I respect his judgment very much and I hope you’ll see why. You can read more about his thoughts in this edition of Iowa Magazine.

“There must be constant navigation and negotiation to determine if this is for the good of patients. And the good of patients will continue to depend on clinicians who can demonstrate virtues like compassion, honesty, courage, and practical wisdom, which are characteristics of persons, not computers.” ——Lauris Kaldjian, director of the Carver College of Medicine’s Program in Bioethics and Humanities