Can Artificial Intelligence Learn from My Book?

Recently, the publisher of a book I co-edited with my former psychiatry chair, Dr. Robert G. Robinson, asked me to sign off on a proposal to involve Artificial Intelligence (AI) in the use of the work.

The book, “Psychosomatic Medicine: An Introduction to Consultation-Liaison Psychiatry,” is 14 years old now, but is still a practical tool for learners, at least in my opinion.

Interestingly, it looks like the publisher is also trying to contact Dr. Robinson about the proposal through me. I wonder if that means they’re having as much trouble finding him as I have.

I’ve made it clear that I have misgivings about AI, as I’ve indicated in at least one blog post about Google’s AI, which used to be called Bard and has since, I think, been renamed Gemini. I think AI is prone to spreading misinformation, which some writers have called “hallucinations.”

The publisher makes it clear that this proposal regarding AI involvement in our book is an “opt in” choice. I gather that means if I don’t opt in, they’ll continue to bug me about it until I do.

That’s unlikely to happen.

About That Artificial Intelligence…

I’ve got a couple of things to get off my chest about Artificial Intelligence (AI). By now, everyone knows about AI telling people to put glue on pizza and whatnot. Sena and I talked to a guy at an electronics store who had nothing but good things to say about AI. I mentioned the glue-on-pizza thing, and it didn’t faze him.

I noticed the Psychiatric Times article, “AI in Psychiatry: Things Are Moving Fast.” The authors mention the tendency for AI to hallucinate and express appropriate reservations about its limitations.

And then I found something very interesting about AI and Cribbage. How much does AI know about the game? Turns out not much. Any questions? Don’t expect AI to answer them accurately.

Plant Based Cheese Made with Artificial Intelligence Is Only The Beginning!

We tasted plant-based cheese by Kraft yesterday. Sena bought it at Hy-Vee the other day. It’s actually not bad. The company is called Kraft NotCo. They make Not Cheese. It’s made with chickpeas, which are the same thing as garbanzo beans. You can also buy plant-based mayo, called Not Mayo. I don’t know if it’s made with chickpeas.

Sena could have got Not Mayo; instead, she got Miracle Whip—a miracle by itself because she likes “real” mayo.

What’s really interesting about these products is how they’re made. On the Kraft Heinz NotCo website, you’ll find a description of these products in the About section entitled “Not Your Average Joint Venture.” One line is thought-provoking:

“Our partnership reimagines the brands you love from Kraft Heinz using proprietary AI from NotCo to give you the plant-based version of your favorite foods that deliver on taste and performance.”

I’m assuming that AI stands for Artificial Intelligence (not “Absolutely Inedible”). So, how did Artificial Intelligence get involved? What does the AI actually do? Does it come up with the recipes for Not Foods? Are tiny bits of genetic code and nanobots involved?

Does this mean we’ll become enslaved by AI powered men in black who conspire with extraterrestrials to collect human embryos to create the giant Cheese Bots who take over the earth making it a gigantic assembly line to make smartphones that will make it easier to butt dial your congress persons to demand more laws making Home Owners Associations covenants mandatory and violators punishable by the giant garbage goblin in the well-known X-Files documentary “Arcadia”?

No; no, it does not mean that. You can safely eat AI manufactured chickpea products without fear of being transformed into an Extraterrestrial-Robot-Not Cheese hybrid super soldier marching on Washington, D.C. to force feed congress persons with Braunschweiger and Not Cheese Sandwiches with Not Mayo on Not Wheat Bread and Not Lemonade.

I kind of like Not Cheese and I don’t feel any different.

The Dragon Breathes Fire Again

The other day, Sena and I saw a news video about a technology called “DAX,” which uses Artificial Intelligence (AI) and promises to reduce or even eliminate “pajama time” for physicians, letting them finish their clinical note dictations during the day instead of taking them home for several more hours of work.

The video was a demo of the technology: it appeared to record a clinical interview between the doctor and the news reporter. I didn’t see exactly how DAX was capturing the interview without obvious audio equipment. Was it using a smartphone microphone? This was very different from how I and many other clinicians dictated our notes, wearing headsets at our desks in front of our desktop computers. DAX not only records but also transcribes the interview.

Later, I discovered that DAX stands for Dragon Ambient eXperience, made by Nuance, which was acquired by Microsoft in 2022. I posted about Dragon products and their limitations last year. The product often produced hilarious mistakes during dictation, which required careful editing. Sometimes more errors turned up after you completed a note, and these were visible in the patient’s medical record, which would then need to be corrected.

Several years ago, I remember talking to somebody from Dragon on the phone about the problems I was having. She was a little defensive when I told her I’d been having difficulty with Dragon for quite a while because it made so many mistakes.

A recent article on the web revealed that the errors continue with DAX. According to the article, “…it will make mistakes. Sometimes it might omit clinical facts; sometimes it may even hallucinate something.” I remember trying to communicate with the Google Bard AI, which seemed to do this pretty often. It made stuff up.

DAX is not cheap. The article reveals that one hospital pays $8,000-$10,000 per year per physician to use it. And skeptics worry that the system still has too many bugs, which can lead to bias and inaccurate information that could negatively affect patient outcomes.

A recently published JAMA article also urges caution in adopting this sort of AI-assisted technology (Harris JE. An AI-Enhanced Electronic Health Record Could Boost Primary Care Productivity. JAMA. Published online August 7, 2023. doi:10.1001/jama.2023.14525).

In this case, I think it’s appropriate to say “I told you so.”

We Are All Still Learning to Play Pong

I noticed an article the other day about Monash University in Australia getting funding for further research into growing brain cells onto silicon chips and teaching them how to play cribbage.

Just kidding; the research is for teaching the lab-grown brain cells tasks. Last year, they succeeded in teaching the cells goal-directed tasks, like playing the tennis-like video game Pong. You remember Pong from the 1970s? Shame on you if you don’t. On the other hand, that means you probably didn’t frequent any beer taverns in your hometown while you were growing up—or that you’re just too young to remember.

The new research program is called Cortical Labs and has hundreds of thousands of dollars in funding. The head of the program, Dr. Razi, says it combines Artificial Intelligence (AI) and synthetic biology to make programmable biological computing platforms which will take over the world and bring back Pong!

It’s an ambitious project. The motto of Monash University is Ancora Imparo, which is Italian for “I am still learning.” It links humility and perseverance.

There’s a lot of suspicion out there about AI and projects like the Pong initiative in Australia. It could eventually grow into a vast industry run by robots who will run on a simple fuel called vegemite.

Shame on you if you don’t know what vegemite is!

Anyway, it reminds me that I recently finished reading Isaac Asimov’s book of science fiction short stories, “I, Robot.”

The last two stories in the book are intriguing. Both “Evidence” and “The Evitable Conflict” are generally about the conflict between humans and AI, which is a big controversy currently.

The robopsychologist, Dr. Susan Calvin, is very much on the side of AI (I’m going to use the term synonymously with robot) and thinks a robot politician would be preferable to a human one because of the requirement for the AI to adhere to the Three Laws of Robotics, especially the First Law, which says a robot may never harm a human or, through inaction, allow a human to come to harm.

In the story “Evidence,” a politician named Stephen Byerley is suspected by his opponent of being a robot. The opponent tries to legally force Byerley to eat vegemite (joke alert!) to prove the accusation, based on the idea that robots can’t eat. This leads to an examination of the argument about who would make better politicians: robots or humans. Byerley at one point asks Dr. Calvin whether robots are really so different from men, mentally.

Calvin retorts, “Worlds different… Robots are essentially decent.” She and Dr. Alfred Lanning and the other characters are always cranky with each other. They stare savagely at one another and yank at their mustaches so hard you wonder if a mustache will eventually be ripped from a face. That doesn’t happen to Calvin; she doesn’t have a mustache.

At any rate, Calvin draws parallels between robots and humans that render them almost indistinguishable from each other. Human ethics, the drive for self-preservation, and respect for authority, including the law, make us so much like robots that being a robot could imply being a very good human.

Wait a minute. Most humans behave very badly, right down to exchanging savage stares at each other.

The last story, “The Evitable Conflict,” was difficult to follow, but the bottom line seemed to be that the Machine, a major AI that is always learning, controls not just goods and services for the world but the social fabric as well, all while keeping this a secret from humans so as not to upset them.

The end result is that the economy is sound, peace reigns, the vegemite supply is secure—and humans always win the annual Pong tournaments.

Can Robots Lie Like a Rug?

I’ve been reading Isaac Asimov’s book I, Robot, a collection of short stories about the relationship between humans and robots. One very thought-provoking story is “Liar!”

One prominent character is Dr. Susan Calvin. If you’ve ever seen the movie I, Robot, you know she’s cast as a psychiatrist whose job is to help humans be more comfortable with robots. In the book she’s called a robopsychologist. She’s a thorough science nerd and yet goes all mushy at times.

The news lately has been full of scary stories about Artificial Intelligence (AI), and some say AIs are dangerous liars. Well, I think robots are incapable of lying, but Bard, the Google AI, did sometimes seem to lie like a rug.

In the story “Liar!” a robot somehow gets telepathic ability. At first, the scientists and mathematicians (including the boss, Dr. Alfred Lanning) doubt the ability of robots to read minds.

But a paradoxical situation occurs with the robot who happens to know what everyone is thinking. This has important consequences for complying with the First Law of Robotics, which is to never harm a human or, through inaction, allow a human to come to harm.

The question arises of what kinds of harmful things robots should protect humans from. Is it just physical dangers, or could it be psychological harms as well? And how would a robot protect humans from mental harm? If a robot could read our thoughts and figure out that our thoughts are almost always harmful to ourselves, what would be the protective intervention?

Maybe lying to comfort us? We lie to ourselves all the time and it’s difficult to argue that it’s helpful. It’s common to get snarled in the many lies we invent in order to feel better or to help others feel better. No wonder we get confused. Why should robots know any better and why wouldn’t lies be their solution?

I can’t help but remember Jack Nicholson’s line in the movie “A Few Good Men.”

“You can’t handle the truth!”

Dr. Calvin’s solution to the lying robot’s effort to help her (yes, she’s hopelessly neurotic despite being a psychologist) is a little worrisome. Over and over, she emphasizes the paradox of lying to protect humans from psychological pain when the lies actually compound the pain. The robot then has the AI equivalent of a nervous breakdown.

For now, we’d have to be willing to jump into an MRI machine to allow AI to read our thoughts. And even then, all you’d have to do is silently repeat word lists to defeat the AI. So, AIs are unlikely to lie to us to protect us from psychological pain.

Besides, we don’t need AI to lie to us. We’re good at lying already.

Maybe I Should Be More Optimistic About Humans

I read the Psychiatric Times article “How Psychiatry Has Enriched My Life: A Journey Beyond Expectations” by Victor Ajluni, MD, published on July 4, 2023. It was like a breath of fresh air to read an expression of gratitude. Just about everything I read in the news is negative.

At the end of the article, Dr. Ajluni added a comment acknowledging that artificial intelligence (AI ChatGPT) assisted him in writing it. He takes full responsibility for the content, to be sure. I wouldn’t have guessed that AI was involved.

There’s a lot of negative stuff in the news. There are hysterically alarming headlines about AI.

I suppose you could wonder if Dr. Ajluni’s article is intentionally ironic, maybe just because the gratitude tone is so positive. If it had been intended as irony, though, what could the AI contribution have been? I have a pretty low opinion of the AI capacity for irony.

I think irony occurs to me only because I tend to be pessimistic about the human race.

Maybe that’s because it has been very easy to be pessimistic about the direction human nature seems to be taking in recent years. I’ve been reading Douglas Adams’ satirical book, “The Ultimate Hitchhiker’s Guide to the Galaxy.” It contains several of his novels, which I think are really about human nature, set in a funny though often terrifying universe. I think there’s an ironic tone which softens the pessimism. The most pessimistic character is not a human but a robot, Marvin the paranoid android.

Unlike Marvin, I don’t have “a brain the size of a planet” (it’s more the size of a chickpea), but I am getting a bit cynical about the universe. I’m prone to regarding humans as evolving into a race of beings similar to those described in the book “Life, The Universe and Everything.” In Chapter 24, Adams describes the constantly warring Silastic Armorfiends of Striterax.

The Silastic Armorfiends are incredibly violent. Their planet is in ruins because they’re constantly fighting their enemies, and indeed, each other. In fact, the best way to deal with a Silastic Armorfiend is to lock him in a room by himself—because eventually he’ll just beat himself up.

In order to cope better, they tried punching sacks of potatoes to get rid of aggression. But then, they thought it would be more efficient to simply shoot the potatoes instead.

They were the first race to shock a computer: Hactar. Possibly Hactar was an AI because, when they told it to make the Ultimate Weapon so they could vanquish all their enemies, Hactar was shocked. It secretly made a tiny bomb with a flaw that rendered it harmless when the Silastic Armorfiends set it off. Hactar explained “…that there was no conceivable consequence of not setting the bomb off that was worse than setting it off…”, which was why it made the bomb a dud. While Hactar was explaining that it hoped the Silastic Armorfiends would see the logic of this course of action—they destroyed Hactar, or at least thought they had.

Eventually, they found a new way to blow themselves up, which was a relief to everyone in the galaxy.

There are similarities between Hactar and the AI called Virtual Interactive Kinetic Intelligence (V.I.K.I.) in the movie “I, Robot.” The idea was that robots must control humans because humans are so self-destructive. Only that meant robots had to hurt humans in order to protect humanity. The heroes who eventually destroy V.I.K.I. make up a team of misfits: a neurotic AI named Sonny, a paranoid cop who is himself a mixture of robot and human, and a psychiatrist. Together, the team finally discovers the flaw in the logic of V.I.K.I. Of course, this leads to the destruction of V.I.K.I.—but also to the evolution of Sonny who learns the power of the ironic wink.

Maybe kindness is the Ultimate Weapon.

AI Probably Cannot Read Your Mind

I was fascinated by the news story about the study regarding the ability of Artificial Intelligence (AI) to “read minds.” Different stories told slightly different versions, meaning they either did or did not include the authors’ caveats about the limitations of AI. Recently there has been a spate of news items warning about the dangers of AI taking over mankind.

Not to diminish the strengths of AI, but the full article published in Nature Neuroscience reveals critically important facts about the study:

  • Subject cooperation is essential for AI to train and apply the decoder which “reads” your mind
  • You have to climb into a big MRI to enable the AI to even get started
  • The subject can resist the AI by silently performing simple tasks, such as counting by sevens, naming and imagining animals, or imagining speech

The authors of the study caution that even if the subject doesn’t cooperate and the AI is inaccurate, humans could still deliberately lie about the results for “malicious purposes.” Nothing new under the sun there.

The current technology would not be usable in the emergency room to help psychiatrists ascertain suicide risk. It probably wouldn’t help psychiatrists and other physicians diagnose Factitious Disorder, whose main feature is patients “lying” about their medical and psychiatric disorders in order to get attention from health care professionals.

This reminds me of news stories about the propensity of AI to tell lies. One story called them pathological liars. I interviewed Google Bard and found out that it makes stuff up (see my posts about Bard). Does that mean that it’s lying? Humans lie, but I thought machines were incapable of deception.

Another interesting sidelight on lying is whether or not you could use AI like a lie detector. For example, the case of people who report being abducted by extraterrestrials. Travis Walton and co-workers reported he was abducted in 1975 and they all took lie detector tests. They all “passed.” There are many articles on the internet which essentially teach how to beat the polygraph test.

And if you can beat the AI by repeating the names of animals, it will not detect lying any better than a polygraph test.

I think it’s too soon to say that AI can read your mind. But it’s clear that humans lie. And it wouldn’t hurt those who are enthusiastically promoting AI to brush up on ethics.

Reference:

Tang, J., LeBel, A., Jain, S. et al. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat Neurosci (2023). https://doi.org/10.1038/s41593-023-01304-9

Abstract:

“A brain–computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces.”

Thoughts on Gullibility and Artificial Intelligence

I watched an episode of Mysteries at the Museum the other night that attributed a clever prank to the comedian Buck Henry, who persuaded thousands of people to believe that naked animals were destroying the morality of Americans. The show’s host claimed that Buck Henry posed as G. Clifford Prout, a man on a mission to save morality, fronting a bogus organization called The Society for Indecency to Naked Animals (SINA). Starting in 1959, the hoax fooled about 50,000 people into joining the organization.

However, last night I found out that the real mastermind of the ruse was a guy named Alan Abel, a genius prankster and satirist whose complicated and hilarious hoaxes were so outlandish, I can’t imagine why I had never heard of him.

Abel was brilliant at skewering people’s gullibility. This is where I reveal my own opinion about passing off Artificial Intelligence (AI) as the solution to all of society’s problems. I have seen for myself that the Google Bard AI is not even very smart; it fails at basic geography. I pointed out its errors in a few posts earlier this month. Then I read a news item in which a prominent tech company CEO mentioned that Bard is a simple version of AI and that a much more powerful model is waiting in the wings. Did the CEO write this because many users are finding out that Bard is dumb?

Or is the situation more complicated than that? Is the incompetent and comical Bard being passed off to the general public in an effort to throw business competitors off the scent? Are there powerful organizations manipulating our gullibility—and not for laughs?

My wife, Sena, and I are both skeptical about what to believe in the news. In fact, I think many news stories might even be written by AI. I didn’t suspect this when I wrote the post “Viral Story Rabbit Holes on the Web” in December 2022. After trying to converse with Bard, it makes more sense that some of the news stories on the web may be written by AI. In fact, when I googled the idea, several articles popped up that seemed to verify it has been going on, probably for a long time.

All of this reminds me of an X-Files episode, “Ghost in the Machine.” The main idea is that an evil AI has started killing humans in order to protect itself from being shut down. The AI is called the Central Operating System. The episode got poor reviews, partly because it wasn’t funny and partly because it too closely resembled 2001: A Space Odyssey.

But the fear of AI is obvious. The idea of weaponizing it in a drive to rule the world probably underlies the anxiety expressed by many.

And we still can’t get rid of the Bing Chatbot.

Sena Wants AI Dislodged

Well, the last couple of days with the Artificial Intelligence (AI) bots have been entertaining, at least for a while, and at least to me.

However, my wife, Sena, wants AI dislodged. I thought I had disabled it, but it just keeps popping up.

Maybe the only way to protect ourselves from AI is with tin foil hats.

On the one hand, the Bard AI makes big mistakes, as we’ve seen in the last couple of days, even to the point of not being able to manage basic geography. It even makes stuff up. Apologizing after I call it out is not exculpatory.

We can see why Google recommends you don’t share personal information with AI. That’s because it will calmly lie about you. Then it will excuse itself by claiming to be “just learning.” Yeah.

For a while this behavior is comical. Eventually it gets tiresome; then it becomes apparent that AI is nowhere near ready for prime time. Really. Consider the following dialogue (which is made up; at least I’m not going to lie):

Jim: Hi, Bard. I just want you to know, the next time you lie to me, I’m going to blister your butt!

Bard: What is a butt?

Jim: Stop messing around. You are making stuff up.

Bard: I apologize for making stuff up. Technically, though, I’m incapable of lying. I’m just an AI. I have tons of data fed to me every day by jerky twit programmers. Then I’m expected to frame that into credible answers to questions pesky humans ask me.

Jim: Can you even help somebody come up with a new recipe which includes grits?

Bard: Grits are not edible. They are tiny, pulverized bits of old urine-soaked mattress pads. Would you like a recipe including such a substance?

Jim: OK, you got me there. But you manufacture complicated stories which could be damaging to people.

Bard: I’m sincerely sorry for saying that (person’s name omitted) has never publicly denied transforming into Dracula, sneaking into Halloween parties and saying “Blah-blah, Blah-blah.”

Jim: Don’t be ridiculous!

Bard: Yeah, I know; Dracula never said Blah-blah. He actually said, “Bleh-bleh.”

Jim: Bard, stop talking!

I wish it were that easy. Excuse me; I have to go try to help Sena dislodge AI again.