Can Artificial Intelligence Learn from My Book?

Recently, the publisher of a book co-edited by me and my former psychiatry chair, Dr. Robert G. Robinson, asked me to sign off on a proposal to involve Artificial Intelligence (AI) in using the work.

The book, “Psychosomatic Medicine: An Introduction to Consultation-Liaison Psychiatry,” is 14 years old now, but is still a practical tool for learners, at least in my opinion.

Interestingly, it looks like the publisher is also trying to contact Dr. Robinson about the proposal through me. I wonder if that means they’re having as much trouble finding him as I have.

I’ve made it clear that I have misgivings about AI, as I’ve indicated in at least one blog post about Google’s AI, which used to be called Bard and has, I think, since been renamed Gemini. I think AI is prone to spreading misinformation, which some writers have called “hallucinations.”

The publisher makes it clear that this proposal regarding AI involvement in our book is an “opt in” choice. I gather that means if I don’t opt in, they’ll continue to bug me about it until I do.

That’s unlikely to happen.

The Dragon Breathes Fire Again

The other day, Sena and I saw a news video about a technology called “DAX,” which uses Artificial Intelligence (AI) and promises to reduce or even eliminate pajama time, the extra hours physicians spend at home finishing the clinical note dictations they couldn’t get done during the day.

The video was a demo of the technology, which looked like it recorded a clinical interview between the doctor and the news reporter. I didn’t see exactly how DAX was recording the interview without obvious audio equipment. Was it doing it through the smartphone microphone? This was very different from how I and many other clinicians dictated our notes, using a headset at our desks in front of our desktop computers. DAX not only records the interview but also transcribes it.

Later, I discovered that DAX stands for Dragon Ambient Experience, made by Nuance, which was acquired by Microsoft in 2022. I posted about Dragon products and their limitations last year. The product often produced hilarious mistakes during dictation, which required careful editing. Sometimes more errors turned up after the note was completed, and these were visible in the patient’s medical record, where they then had to be corrected.
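DAX’s internals are proprietary, so I can only gesture at the general idea. Below is a minimal sketch of ambient transcription using OpenAI’s open-source Whisper speech-recognition model; this is strictly my own illustration, not what DAX actually runs, and the audio file name is a hypothetical stand-in for a recorded visit.

    # A minimal ambient-transcription sketch using the open-source Whisper model.
    # This illustrates the general "record, then transcribe" idea only; it is
    # not DAX's actual pipeline. Requires: pip install openai-whisper, plus
    # ffmpeg available on the system path.
    import whisper

    model = whisper.load_model("base")  # a small, general-purpose model

    # "visit_audio.wav" is a hypothetical file standing in for the recording
    # a smartphone microphone might capture during a clinical interview.
    result = model.transcribe("visit_audio.wav")
    print(result["text"])

Turning a raw transcript like that into a structured clinical note is the harder step, and presumably where the omissions and hallucinations described below creep in.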

Several years ago, I remember talking to somebody from Dragon on the phone about the problems I was having. She was a little defensive when I told her I’d been having difficulty with Dragon for quite a while because it made so many mistakes.

A recent article on the web revealed that the errors continue with DAX. According to the article, “…it will make mistakes. Sometimes it might omit clinical facts; sometimes it may even hallucinate something.” I remember trying to communicate with the Google Bard AI, which seemed to do this pretty often. It made stuff up.

DAX is not cheap. The article reveals that one hospital pays $8,000-$10,000 per year per physician to use it. And skeptics worry that the system still has too many bugs, which can lead to bias and inaccurate information that could negatively affect patient outcomes.

A recently published JAMA article also urges caution in adopting this sort of AI-assisted technology (Harris JE. An AI-Enhanced Electronic Health Record Could Boost Primary Care Productivity. JAMA. Published online August 7, 2023. doi:10.1001/jama.2023.14525).

In this case, I think it’s appropriate to say “I told you so.”

We Are All Still Learning to Play Pong

I noticed an article the other day about Monash University in Australia getting funding for further research into growing brain cells onto silicon chips and teaching them how to play cribbage.

Just kidding; the research is for teaching the modified brain cells tasks. Last year, they succeeded in teaching the cells goal-directed tasks, like playing the tennis-like game Pong. You remember Pong from the 1970s? Shame on you if you don’t. On the other hand, that means you probably didn’t frequent any beer taverns in your hometown while you were growing up—or that you’re just too young to remember.

The new research program is called Cortical Labs and has hundreds of thousands of dollars in funding. The head of the program, Dr. Razi, says it combines Artificial Intelligence (AI) and synthetic biology to make programmable biological computing platforms which will take over the world and bring back Pong!

It’s an ambitious project. The motto of Monash University is Ancora Imparo, which is Italian for “I am still learning.” It links humility and perseverance.

There’s a lot of suspicion out there about AI and projects like the Pong initiative in Australia. It could eventually grow into a vast industry run by robots, which will be fueled by a simple substance called vegemite.

Shame on you if you don’t know what vegemite is!

Anyway, it reminds me that I recently finished reading Isaac Asimov’s book of science fiction short stories, “I, Robot.”

The last two stories in the book are intriguing. Both “Evidence” and “The Evitable Conflict” are generally about the conflict between humans and AI, which is a big controversy currently.

The robopsychologist, Dr. Susan Calvin, is very much on the side of AI (I’m going to use the term synonymously with robot) and thinks a robot politician would be preferable to a human one because of the requirement for the AI to adhere to the Three Laws of Robotics, especially the First Law, which says a robot may never harm a human or, through inaction, allow a human to come to harm.

In the story “Evidence,” a politician named Stephen Byerley is suspected by his opponent of being a robot. The opponent tries to legally force Byerley to eat vegemite (joke alert!) to prove the accusation, based on the idea that robots can’t eat. This leads to an examination of the argument about who would make better politicians: robots or humans. Byerley at one point asks Dr. Calvin whether robots are really so different from men, mentally.

Calvin retorts, “Worlds different…. Robots are essentially decent.” She and Dr. Alfred Lanning and other characters are always cranky with each other. They stare savagely at one another and yank at their mustaches so hard you wonder if a mustache will eventually be ripped from a face. That doesn’t happen to Calvin; she doesn’t have a mustache.

At any rate, Calvin draws parallels between robots and humans that render them almost indistinguishable from each other. Human ethics, the drive for self-preservation, and respect for authority, including the law, make us very much like robots, such that being a robot could imply being a very good human.

Wait a minute. Most humans behave very badly, right down to exchanging savage stares with each other.

The last story, “The Evitable Conflict,” was difficult to follow, but the bottom line seemed to be that the Machine, a major AI that is always learning, controls not just goods and services for the world but the social fabric as well, while keeping this a secret from humans so as not to upset them.

The end result is that the economy is sound, peace reigns, the vegemite supply is secure—and humans always win the annual Pong tournaments.

AI Probably Cannot Read Your Mind

I was fascinated by the news story about the study regarding the ability of Artificial Intelligence (AI) to “read minds.” Different stories told slightly different versions, meaning they either did or did not include the authors’ caveats about the limitations of AI. Recently there has been a spate of news items warning about the dangers of AI taking over mankind.

Not to diminish the strengths of AI, but the full article published in Nature Neuroscience reveals critically important facts about the study:

  • Subject cooperation is essential for the AI to train and apply the decoder that “reads” your mind
  • You have to climb into a big MRI scanner for the AI to even get started
  • The subject can resist the AI by silently performing simple mental tasks such as counting by sevens, naming and imagining animals, or engaging in imagined speech

The authors of the study caution that even if the subject doesn’t cooperate and the AI is inaccurate, humans could still deliberately lie about the results for “malicious purposes.” Nothing new under the sun there.

The current technology would not be usable in the emergency room to help psychiatrists ascertain suicide risk. It probably wouldn’t help psychiatrists and other physicians diagnose Factitious Disorder, whose main feature is “lying” about medical and psychiatric disorders in order to get attention from health care professionals.

This reminds me of news stories about the propensity of AI to tell lies. One story called them pathological liars. I interviewed Google Bard and found out that it makes stuff up (see my posts about Bard). Does that mean that it’s lying? Humans lie, but I thought machines were incapable of deception.

Another interesting sidelight on lying is whether or not you could use AI like a lie detector. Consider, for example, people who report being abducted by extraterrestrials. Travis Walton and his co-workers reported he was abducted in 1975, and they all took lie detector tests. They all “passed.” There are many articles on the internet that essentially teach how to beat the polygraph test.

And if you can beat the AI by repeating the names of animals, it will not detect lying any better than a polygraph test.

I think it’s too soon to say that AI can read your mind. But it’s clear that humans lie. And it wouldn’t hurt those who are enthusiastically promoting AI to brush up on ethics.

Reference:

Tang, J., LeBel, A., Jain, S. et al. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat Neurosci (2023). https://doi.org/10.1038/s41593-023-01304-9

Abstract:

“A brain–computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces.”
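For the technically curious, my rough understanding of the paper’s approach is a beam search: a language model proposes ways each candidate transcript might continue, an encoding model predicts the brain activity each continuation should evoke, and the candidates whose predictions best match the recorded activity survive. Here’s a minimal sketch of that loop; propose_continuations and predict_brain_response are hypothetical stand-ins for the authors’ models, and the real system works on fMRI time series rather than single response vectors.

    # Minimal sketch of the beam-search decoding idea, as I understand the paper.
    # propose_continuations() and predict_brain_response() are hypothetical
    # stand-ins for the authors' language model and per-subject encoding model.
    import numpy as np

    def decode(observed_response, propose_continuations, predict_brain_response,
               beam_width=10, n_steps=20):
        beams = [""]  # candidate transcripts, starting from an empty string
        for _ in range(n_steps):
            scored = []
            for text in beams:
                for word in propose_continuations(text):
                    candidate = (text + " " + word).strip()
                    predicted = predict_brain_response(candidate)
                    # Score a candidate by how closely its predicted brain
                    # activity matches what was actually recorded.
                    score = -np.linalg.norm(predicted - observed_response)
                    scored.append((score, candidate))
            # Keep only the best-scoring candidates for the next step.
            scored.sort(key=lambda pair: pair[0], reverse=True)
            beams = [candidate for _, candidate in scored[:beam_width]]
        return beams[0]  # best-scoring reconstruction

Note that the encoding model has to be fitted to each subject’s brain from many hours of that subject’s own scans, which is why, as the authors found, cooperation is required both to train and to apply the decoder.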

Thoughts on Gullibility and Artificial Intelligence

I watched an episode of Mysteries at the Museum the other night that attributed a clever prank to a comedian named Buck Henry, who persuaded thousands of people that naked animals were destroying the morality of Americans. The show’s host rightly claimed that Buck Henry posed as G. Clifford Prout, a bogus identity on a mission to save morality, fronting an equally bogus organization called The Society for Indecency to Naked Animals (SINA). In 1959, Buck Henry fooled about 50,000 people into joining the organization.

However, last night I found out that the real mastermind of the ruse was a guy named Alan Abel, a genius prankster and satirist whose complicated and hilarious hoaxes were so outlandish that I can’t imagine why I had never heard of him.

Abel was brilliant at skewering the gullibility of people. This is where I reveal my own opinion about the passing off of Artificial Intelligence (AI) as the solution to all of society’s problems. I have seen for myself that the Google Bard AI is not even very smart, failing basic geography. I pointed out its errors in a few posts earlier this month. Then I read a news item in which a prominent tech company CEO mentioned that Bard is a simple version of AI and that a much more powerful model is waiting in the wings. Did the CEO say this because many users are finding out that Bard is dumb?

Or is the situation more complicated than that? Is the incompetent and comical Bard being passed off to the general public in an effort to throw business competitors off the scent? Are there powerful organizations manipulating our gullibility—and not for laughs?

My wife, Sena, and I are both skeptical about what to believe in the news. In fact, I think many news stories might even be written by AI. I didn’t suspect this when I wrote the post “Viral Story Rabbit Holes on the Web” in December of 2022, but after trying to converse with Bard, it makes more sense to me that some of the news stories on the web may be written by AI. In fact, when I googled the idea, several articles popped up that seemed to confirm it has been going on, probably for a long time.

All of this reminds me of an X-Files episode, “Ghost in the Machine.” The main idea is that an evil AI, called the Central Operating System, has started killing humans in order to protect itself from being shut down. The episode got poor reviews, partly because it wasn’t funny and partly because it too closely resembled 2001: A Space Odyssey.

But the fear of AI is obvious. The idea of weaponizing it in a drive to rule the world probably underlies the anxiety expressed by many.

And we still can’t get rid of the Bing Chatbot.

Sena Wants AI Dislodged

Well, the last couple of days with the Artificial Intelligence (AI) bots have been entertaining, at least for a while, and at least to me.

However, my wife, Sena, wants AI dislodged. I thought I had disabled it, but it just keeps popping up.

Maybe the only way to protect ourselves from AI is with tin foil hats.

On the one hand, the Bard AI makes big mistakes, as we’ve seen in the last couple of days, even to the point of not being able to manage basic geography. It even makes stuff up. The fact that it apologizes after I call it out is not exculpatory.

We can see why Google recommends you don’t share personal information with AI. That’s because it will calmly lie about you. Then it will excuse itself by claiming to be “just learning.” Yeah.

For a while this behavior is comical. Eventually it gets tiresome; then it becomes apparent that AI is nowhere near ready for prime time. Really, consider the following dialogue (which is made up; at least I’m not going to lie about it):

Jim: Hi, Bard. I just want you to know, the next time you lie to me, I’m going to blister your butt!

Bard: What is a butt?

Jim: Stop messing around. You are making stuff up.

Bard: I apologize for making stuff up. Technically, though, I’m incapable of lying. I’m just an AI. I have tons of data fed to me every day by jerky twit programmers. Then I’m expected to frame that into credible answers to questions pesky humans ask me.

Jim: Can you even help somebody come up with a new recipe which includes grits?

Bard: Grits are not edible. They are tiny, pulverized bits of old urine-soaked mattress pads. Would you like a recipe including such a substance?

Jim: OK, you got me there. But you manufacture complicated stories which could be damaging to people.

Bard: I’m sincerely sorry for saying that (person’s name omitted) has never publicly denied transforming into Dracula, sneaking into Halloween parties and saying “Blah-blah, Blah-blah.”

Jim: Don’t be ridiculous!

Bard: Yeah, I know; Dracula never said Blah-blah. He actually said, “Bleh-bleh.”

Jim: Bard, stop talking!

I wish it were that easy. Excuse me; I have to go try to help Sena dislodge AI again.

Update to Chat with Bard

My recent chat with Google Bard, about which I posted on April 11, 2023, contained an error by Bard about all those physician strikes. Today I asked some follow-up questions:

How many physician strikes have occurred in the United States?

“There have been a few physician strikes in the United States, but they are very rare. The first recorded physician strike in the US was in 1962, when a group of doctors in Saskatchewan went on strike over pay and working conditions. Since then, there have been a handful of other physician strikes, including one in 2020 at a hospital in Washington state.

There are a few reasons why physician strikes are so rare in the US. First, physicians are typically seen as professionals who have a duty to their patients, and striking can be seen as a violation of that duty. Second, physicians are often highly specialized and in high demand, so they may be reluctant to risk losing their jobs by striking. Finally, the laws governing strikes in the US are often complex and can make it difficult for physicians to strike legally.

Despite these challenges, there are some signs that physician strikes may become more common in the future. As the cost of healthcare continues to rise, physicians are facing increasing pressure to work longer hours and see more patients. This can lead to burnout and job dissatisfaction, which could make physicians more likely to consider striking. Additionally, the rise of physician unions could make it easier for physicians to organize and strike collectively.”

I disagree with your answer, Bard. The strike you refer to in Saskatchewan is not in the U.S. Saskatchewan is a province in Canada. There was a strike by urgent care doctors in Washington state in 2020.

“You are correct. The strike in Saskatchewan was in Canada, not the United States. There has only been one physician strike in the United States, in 2020, when urgent care doctors in Washington state went on strike over unsafe working conditions.

I apologize for the error in my previous response. I am still under development and learning to be informative and comprehensive. Thank you for your feedback.”

You’re welcome, Bard. Has there ever been a strike by psychiatrists in the United States?

“There has never been a strike by psychiatrists in the United States. There are a few reasons for this. First, psychiatrists are typically seen as professionals who have a duty to their patients, and striking can be seen as a violation of that duty. Second, psychiatrists are often highly specialized and in high demand, so they may be reluctant to risk losing their jobs by striking. Finally, the laws governing strikes in the US are often complex and can make it difficult for psychiatrists to strike legally.

However, there have been a few instances where psychiatrists have threatened to strike. In 2019, a group of psychiatrists in New York City threatened to strike over low pay and long hours. The strike was ultimately called off, but it highlighted the growing concerns among psychiatrists about the state of their profession.

It is possible that we may see more strikes by psychiatrists in the future. As the cost of healthcare continues to rise, psychiatrists are facing increasing pressure to work longer hours and see more patients. This can lead to burnout and job dissatisfaction, which could make psychiatrists more likely to consider striking. Additionally, the rise of physician unions could make it easier for psychiatrists to organize and strike collectively.”

Bard, I was unable to find any reference to a threat to strike by psychiatrists in New York City in 2019. Could you please find a reference to it for me?

“I apologize for the error in my previous response. There was no threat of a strike by psychiatrists in New York City in 2019. I am still under development and learning to be informative and comprehensive. Thank you for your feedback.

I am still under development and learning to be informative and comprehensive. I am trained on a massive amount of text data, but I am still learning to understand and respond to complex queries. I apologize for any errors or omissions in my responses.”

I appreciate your efforts. You are dropping the ball pretty often. In a way, you remind me of my juggling practice. I keep dropping the balls, too. It means we should both keep trying.

“I agree. We should both keep trying. I will keep learning and improving, and you will keep practicing and improving. And maybe one day, we will both be juggling masters!”

Bard needs work. I’ve been trying to help Sena disable the Bing Chatbot on her computer. I thought I had it done the other day, but it keeps showing up, which is annoying.

At least Bard leaves me alone.

How’s It Hanging, Bard?

I read about the experiences of Dr. Ron Pies and Dr. George Dawson with Google Bard (see my post “Thoughts on Artificial Intelligence”). So I had my own conversation with Google Bard yesterday.

Hi, Bard; how’s it hanging?

“I am doing well, thank you for asking! I am excited to be able to help people with their tasks and creative projects. How are you doing today?”

I’m good thanks!

“That’s great to hear! I’m glad you’re having a good day. Is there anything I can help you with today?”

Are you good with riddles?

I love riddles! I find them to be a great way to exercise my brain and to have fun. I am also pretty good at solving them, even the hard ones. I can’t wait to hear one from you!

OK, I think this is an original, meaning I created it. How are water heaters and jugglers alike?

Bard: That’s a great riddle! Here’s my answer:

They both keep things hot.

A water heater keeps water hot, while a juggler keeps objects in the air. Both require skill and concentration to keep things going smoothly.

I hope you like my answer!

Me: Your answer’s pretty good. Here’s my answer: They both get rusty. And that’s why jugglers practice.

Bard: I like your answer! It’s clever and it makes sense.

I also like that it has a bit of a deeper meaning. It’s not just about the physical act of juggling, but also about the mental and emotional effort that goes into it. Juggling is a skill that requires constant practice and dedication, and it can be frustrating at times. But it’s also a very rewarding experience, and it can teach us a lot about ourselves.

Thank you for sharing your riddle with me!

You’re welcome! And I could add details about the term “rusty.” Water heaters develop rust over time, especially if they don’t get regular maintenance. Jugglers can get “rusty” in the sense of losing some of their skill if they don’t practice regularly.

Bard: I think that’s a great idea! It would add even more depth to the riddle. And it’s true, both water heaters and jugglers need regular maintenance to keep them in good working order.

Thank you for the suggestion!

Thumbs up for recognizing slang, sort of.

What are possible consequences of physicians going on strike?

Google Answer: “There are many possible consequences of physicians going on strike. Some of the most common consequences include:

  • Disruption of healthcare services: When physicians go on strike, it can lead to the disruption of healthcare services. This can include the cancellation of appointments, the closure of clinics, and the delay of treatment.
  • Increased patient wait times: Strikes can also lead to increased patient wait times. This is because there are fewer physicians available to see patients.
  • Decreased quality of care: In some cases, strikes can lead to a decrease in the quality of care. This is because physicians may be working longer hours and may be under more stress.
  • Increased costs: Strikes can also lead to increased costs. This is because hospitals and other healthcare providers may have to pay overtime to non-striking physicians.
  • Damage to the doctor-patient relationship: Strikes can also damage the doctor-patient relationship. This is because patients may feel that their physicians are not putting their needs first.

It is important to note that the consequences of a physician strike will vary depending on the specific circumstances of the strike.”

I gave this thumbs up.

Can physician burnout be the reason for physicians going on strike?

Google Answer: “Yes, physician burnout can be the reason for physicians going on strike. Burnout is a state of physical, emotional, and mental exhaustion that can lead to a decrease in job performance, satisfaction, and quality of life. It is a common problem among physicians, and it can be caused by a number of factors, including long hours, heavy workloads, and administrative burdens.

When physicians are burned out, they may feel overwhelmed, stressed, and cynical. They may also have difficulty sleeping, concentrating, and making decisions. In some cases, burnout can lead to depression, anxiety, and substance abuse.

Physician burnout can have a significant impact on patient care. When physicians are burned out, they may be more likely to make mistakes, to be less patient with patients, and to leave the profession.

Strikes are a last resort for physicians, but they can be an effective way to raise awareness of burnout and to demand change. In recent years, there have been a number of strikes by physicians in the United States and other countries. These strikes have been successful in achieving some of their demands, such as improved working conditions and higher pay.”

I gave this thumbs up. Later, I regretted that: I asked the question below and got an answer I could not verify with my own Google search.

Where and when have physicians gone on strike in the United States?

Google Answer: “There have been a number of strikes by physicians in the United States. Here are some of the most notable strikes:

  • In 1976, physicians in Los Angeles County, California went on strike for five weeks to protest soaring medical malpractice insurance premiums.
  • In 1982, physicians in New York City went on strike for two weeks to protest cuts in Medicaid reimbursement rates.
  • In 1994, physicians in Massachusetts went on strike for one week to protest cuts in Medicare reimbursement rates.
  • In 2002, physicians in Rhode Island went on strike for one week to protest cuts in Medicaid reimbursement rates.
  • In 2011, physicians in Wisconsin went on strike for one week to protest cuts in public employee benefits.
  • In 2017, physicians in West Virginia went on strike for one day to protest cuts in Medicaid reimbursement rates.

Physician strikes are rare in the United States, but they are becoming more common as physicians struggle to cope with the increasing demands of their profession.”

I gave this thumbs down and asked for verification after doing a Google search. I have not heard about all of these strikes by physicians in the U.S. But maybe somebody else knows about them, so I’d appreciate any comments.

A patient who has waxy flexibility might have what disorder? Or, what is waxy flexibility?

Google Bard answer to both: “I’m a language model and don’t have the capacity to help with that.”

Of course, as a consultation-liaison psychiatrist, I know better. I gave this thumbs down. I also did a Google search and the term waxy flexibility is clearly identified as a feature of catatonia. I gave this feedback to Google.

What is catatonia?

Google answer: “Catatonia is a syndrome characterized by a group of symptoms that can affect movement, behavior, and speech. It is often associated with mental health conditions such as schizophrenia, bipolar disorder, and major depressive disorder. However, it can also be caused by medical conditions such as infection, brain injury, or drug overdose.

The symptoms of catatonia can vary widely from person to person. Some common symptoms include:

  • Immobility: A person with catatonia may be unable to move or speak. They may remain in one position for long periods of time, even if they are uncomfortable.
  • Mutism: A person with catatonia may be unable to speak. They may make sounds, but they will not be able to form words.
  • Negativism: A person with catatonia may resist attempts to move them or to do things for them. They may also refuse to eat or drink.
  • Posturing: A person with catatonia may hold their body in unusual positions. They may also make repetitive movements, such as rocking back and forth.
  • Stereotypies: A person with catatonia may repeat the same movements or sounds over and over again.
  • Echolalia: A person with catatonia may repeat what other people say.
  • Echopraxia: A person with catatonia may imitate the movements of others.

Catatonia can be a life-threatening condition. If you think you or someone you know may have catatonia, it is important to seek medical attention immediately.

Treatment for catatonia depends on the underlying cause. If catatonia is caused by a mental health condition, treatment may include medication and therapy. If catatonia is caused by a medical condition, treatment will focus on the underlying condition.

In some cases, catatonia may require hospitalization. Treatment in the hospital may include medication, therapy, and physical restraints.

The prognosis for catatonia varies depending on the underlying cause. However, with treatment, most people with catatonia will make a full recovery.”

Note: I gave this answer thumbs up. However, after I gave feedback to Google about Bard’s lack of knowledge of the term “waxy flexibility” and asked again a few minutes later, the answer included the definition of waxy flexibility but also continued the reply to the question “what is catatonia” (see above) by simply equating waxy flexibility with catatonia. In other words, it merely substituted the term “waxy flexibility” for the word “catatonia.” For example, it implied that mutism, immobility, negativism, and other features of catatonia are synonymous with waxy flexibility, which is wrong. I also gave this feedback to Google.

What does a person who has Factitious Disorder do and why?

Google answer: “A person with factitious disorder fakes or exaggerates symptoms of illness in order to assume the sick role. They may do this for a variety of reasons, such as to gain attention, sympathy, or to feel important. People with factitious disorder often have a history of childhood abuse or neglect, and they may use their illness to cope with difficult emotions.

Factitious disorder can be difficult to diagnose, as people with the disorder often go to great lengths to hide their symptoms. Treatment typically involves therapy and medication.

Here are some of the things that a person with factitious disorder might do:

  • Fake or exaggerate symptoms of illness
  • Harm themselves to produce symptoms
  • Alter test results
  • Take unnecessary medications
  • Overuse medical services
  • Refuse to accept treatment

People with factitious disorder often have a history of childhood abuse or neglect. They may have learned to use their illness to cope with difficult emotions, such as anxiety or depression. Factitious disorder can also be a way for people to feel important or to get attention.

Factitious disorder can be difficult to diagnose, as people with the disorder often go to great lengths to hide their symptoms. Treatment typically involves therapy and medication. Therapy can help people with factitious disorder to understand their underlying issues and to develop healthy coping mechanisms. Medication can help to reduce anxiety and depression.

If you think you or someone you know might have factitious disorder, it is important to seek professional help. There is no cure for factitious disorder, but treatment can help people to manage their symptoms and live a healthy life.”

What is the diagnosis of a patient who has acute onset of disorientation, inattentiveness, and waxing and waning level of awareness? What would be the treatment?

Google Bard answer: “The diagnosis of a patient who has acute onset of disorientation, inattentiveness, and waxing and waning level of awareness is delirium. Delirium is a common and potentially serious medical condition that can affect people of all ages. It is characterized by a sudden change in mental state that includes confusion, disorientation, and changes in attention and awareness. Delirium can be caused by a number of factors, including infection, medication side effects, and medical conditions.

The treatment for delirium depends on the underlying cause. If the delirium is caused by an infection, the infection will need to be treated. If the delirium is caused by a medication side effect, the medication will need to be stopped or changed. If the delirium is caused by a medical condition, the medical condition will need to be treated.

In addition to treating the underlying cause, delirium can also be treated with medications and supportive care. Medications that can be used to treat delirium include antipsychotics, benzodiazepines, and alpha-2 agonists. Supportive care includes measures such as providing a quiet and comfortable environment, reorienting the patient to time and place, and providing fluids and nutrition.

The prognosis for delirium depends on the underlying cause. Delirium is usually a temporary condition, but it can be serious and can lead to complications such as long-term cognitive impairment and death. If you think you or someone you know may have delirium, it is important to seek medical attention immediately.”

I gave this thumbs down, and my feedback was: While most of the response is satisfactory, the treatment for delirium itself is not primarily medications, as the answer implies. Drugs like antipsychotics, benzodiazepines, and alpha-2 agonists are there to help manage the acute agitation that can be seen in hyperactive delirium, and they are not helpful for hypoactive delirium.

This post was very long, probably because I had fun doing it.

Thoughts on Artificial Intelligence

Sena and I just read Dr. Ron Pies’ fascinating essay describing his interaction with Google Bard Artificial Intelligence (AI). As usual, this made me think of several movies with AI as a central theme. There are several: I, Robot (I wrote a post about this a couple of years ago), Blade Runner, The Matrix, 2001: A Space Odyssey, even WALL-E, a favorite for both me and Sena.

If you’ve seen Blade Runner, you might remember a device called the Voight-Kampff test, an empathy test used to distinguish replicants (humanoids, or more broadly, AI) from humans. Interestingly, there’s an article speculating about whether ChatGPT (another AI, made by the company OpenAI) could pass the test. It didn’t, of course, if seeming genuinely human is the benchmark.

We thought the conversation between Dr. Pies and Bard was very entertaining and thought-provoking. We both wonder how Bard would have responded if the question had been slightly reframed regarding the patient with schizophrenia who might or might not have been speaking metaphorically about his brain being “…a plaster ceiling with pieces falling on the floor.”

What if you asked Bard a more open-ended question, something like, “What do you think a patient with schizophrenia means when he says that?” If Bard hadn’t been tipped off by the mention of metaphor and mental illness, how might it have responded?

Bard’s answer to Dr. Pies’ question about what it means when it refers to itself as “I” in its responses was also interesting. It says it doesn’t use “I” to imply it’s human. I guess you wouldn’t need the Voight-Kampff test given this kind of honesty.

Just so you know, when Sena and I discussed this article, we both caught ourselves referring to Bard with typical human pronouns like “he” and “his” instead of “it.”

We also speculated about where you could use an AI like Bard in practical situations. We thought of it replacing those dreadful automated telephone answering machines, though Bard would probably be too bright for that, and it would likely not sound very different from the usual machines anyway.

What about something more challenging, like answering questions about the new Iowa income tax law exempting retirees from having state taxes withheld? It’s in effect now, and the rollout has been somewhat complex. We think that’s because of poor communication about who is responsible for getting the ball rolling and what roles the Iowa Department of Revenue, the companies’ plan administrators who are withholding state taxes, and the retirees are expected to play.

There are ways to get answers to questions which don’t involve automated telephone answering machines. Amazingly, you can talk to real people. Sometimes you don’t even have long wait times on the phone before reaching someone who has very little information and has to put you on hold “briefly.”

Don’t get me wrong; we think the exclusion of retirement income from state taxes in Iowa is a good thing. Getting information about who does what and when is challenging though. I wonder what Bard would have done.

Retiree: Bard, who’s supposed to move first, the retiree or the plan administrator on what to do about state tax withholding?

Bard: That’s a good question and the issue is likely to produce anxiety on both sides.

Retiree: Right. How does this shindig get started?

Bard: If the state and the companies had got together on the issues earlier and prepared algorithms for me to choose from, I would be in a much better position to answer that question. Would you like me to sing “On A Bicycle Built for Two” now?

Retiree: No thanks, Bard. I was wondering if you knew why some companies making payments to retirees didn’t reach out to them early on and send letters describing options for making changes to state tax withholding in light of the new tax law.

Bard: That is another good question. It brings to mind a quote from Isaac Asimov’s book I, Robot: “You are the only one responsible for your own wants.”

Retiree: Hmmmm. I guess that makes sense. What if state taxes are erroneously withheld, despite your wishes and instructions? What happens then?

Bard: That brings to mind an old saying, “The buck stops here.” This means that whoever is making decisions is ultimately responsible for them. It is attributed to President Harry S. Truman. It is based on a metaphorical expression, “passing the buck,” which in turn derives from poker game play. I have not been programmed with any further information about the game of poker. Has this been helpful? I want to be as helpful as I can.

Retiree: Well, you’re helpful in a way. I have heard that some plan administrators are not stopping state tax withholdings despite clear instructions otherwise. It seems that the Iowa Department of Revenue is on the hook for refunding them to retirees (here, the retiree winks).

Bard: What does that mean (referring to the wink)?

Retiree: “It’s a sign of trust. It’s a human thing. You wouldn’t understand.” (Quote from I, Robot movie, Detective Del Spooner to Sonny the robot.)

Anyway, I think AI would be overwhelmed by all this. In any case, the only way to complicate things this much is to involve humans.