AI Probably Cannot Read Your Mind

I was fascinated by the news story about a study on the ability of Artificial Intelligence (AI) to “read minds.” Different outlets told slightly different versions, depending on whether they included the authors’ caveats about the limitations of AI. Recently there has been a spate of news items warning about the dangers of AI taking over mankind.

Not to diminish the strengths of AI, but the full article published in Nature Neuroscience reveals critically important facts about the study:

  • Subject cooperation is essential both to train and to apply the decoder that “reads” your mind
  • You have to climb into a big fMRI scanner before the AI can even get started
  • The subject can resist the decoder by silently performing simple tasks such as counting by sevens, naming and imagining animals, or producing other imagined speech

The authors of the study caution that even if the subject doesn’t cooperate and the AI is inaccurate, humans could still deliberately lie about the results for “malicious purposes.” Nothing new under the sun there.
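For the technically curious, the paper’s general strategy works roughly like this: a language model proposes candidate word sequences, an encoding model predicts the brain response each candidate should evoke, and the candidates whose predicted responses best match the recorded fMRI signal are kept. Below is a minimal toy sketch of that idea in Python; the class names and methods are hypothetical stand-ins of my own, not the authors’ code.

import numpy as np

rng = np.random.default_rng(0)

class DummyLanguageModel:
    def propose_continuations(self, seq):
        # Stand-in: a real system would use a neural language model here.
        return ["the", "cat", "ran"]

class DummyEncodingModel:
    def predict_response(self, seq):
        # Stand-in: a real encoding model maps a word sequence to a
        # predicted pattern of voxel activity; here it is just noise.
        return rng.standard_normal(100)

def decode_step(beams, recorded, lm, enc, beam_width=3):
    # Extend each candidate sequence and keep the ones whose predicted
    # brain response correlates best with the recorded response.
    scored = []
    for seq in beams:
        for word in lm.propose_continuations(seq):
            candidate = seq + [word]
            predicted = enc.predict_response(candidate)
            score = float(np.corrcoef(predicted, recorded)[0, 1])
            scored.append((score, candidate))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [seq for _, seq in scored[:beam_width]]

recorded = rng.standard_normal(100)  # pretend fMRI recording
beams = [["once"]]
for _ in range(3):
    beams = decode_step(beams, recorded, DummyLanguageModel(), DummyEncodingModel())
print(beams[0])

The toy also makes the cooperation point concrete: the encoding model has to be fit to a particular person’s brain responses in the first place, which is part of why, as the authors report, the decoder can’t be trained or applied without the subject’s help.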

The current technology would not be usable in the emergency room to help psychiatrists ascertain suicide risk. It probably wouldn’t help psychiatrists and other physicians diagnose Factitious Disorder, whose main feature is “lying” about one’s medical and psychiatric disorders in order to get attention from health care professionals.

This reminds me of news stories about the propensity of AI to tell lies. One story called AI chatbots pathological liars. I interviewed Google Bard and found out that it makes stuff up (see my posts about Bard). Does that mean it’s lying? Humans lie, but I thought machines were incapable of deception.

Another interesting sidelight on lying is whether you could use AI as a lie detector. Consider, for example, people who report being abducted by extraterrestrials. Travis Walton and his co-workers reported that he was abducted in 1975, and they all took lie detector tests. They all “passed.” There are many articles on the internet that essentially teach how to beat the polygraph test.

And if you can beat the AI by repeating the names of animals, it will not detect lying any better than a polygraph test.

I think it’s too soon to say that AI can read your mind. But it’s clear that humans lie. And it wouldn’t hurt those who are enthusiastically promoting AI to brush up on ethics.

Reference:

Tang, J., LeBel, A., Jain, S. et al. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat Neurosci (2023). https://doi.org/10.1038/s41593-023-01304-9

Abstract:

“A brain–computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces.”

Thoughts on Gullibility and Artificial Intelligence

I watched an episode of Mysteries at the Museum the other night that attributed a clever prank, one that fooled thousands of people, to a comedian named Buck Henry, who persuaded them that naked animals were destroying the morality of Americans. The show’s host rightly claimed that Buck Henry posed as G. Clifford Prout, a man on a mission to save morality through a bogus identity and organization called The Society for Indecency to Naked Animals (SINA). In 1959, the prank fooled about 50,000 people into joining the organization.

However, last night I found out that the real mastermind of the ruse was a guy named Alan Abel, a genius prankster and satirist whose complicated and hilarious hoaxes were so outlandish that I can’t imagine why I had never heard of him.

Abel was brilliant at skewering people’s gullibility. This is where I reveal my own opinion of passing off Artificial Intelligence (AI) as the solution to all of society’s problems. I have seen for myself that the Google Bard AI is not even very smart, failing basic geography. I pointed out its errors in a few posts earlier this month. Then I read a news item in which a prominent tech company CEO mentioned that Bard is a simple version of AI and that a much more powerful model is waiting in the wings. Did the CEO say this because many users are finding out that Bard is dumb?

Or is the situation more complicated than that? Is the incompetent and comical Bard being passed off to the general public in an effort to throw business competitors off the scent? Are there powerful organizations manipulating our gullibility—and not for laughs?

My wife, Sena, and I are both skeptical about what to believe in the news. In fact, I think many news stories might even be written by AI. I didn’t suspect this when I wrote the post “Viral Story Rabbit Holes on the Web” in December of 2022, but after trying to converse with Bard, it makes more sense that some of the news stories on the web may be AI-written. When I googled the idea, several articles popped up that seemed to confirm it has been going on, probably for a long time.

All of this reminds me of an X-Files episode, “Ghost in the Machine.” The main idea is that an evil AI has started killing humans in order to protect itself from being shut down. The AI is called the Central Operating System. The episode got poor reviews, partly because it wasn’t funny and partly because it too closely resembled 2001: A Space Odyssey.

But the fear of AI is obvious. The idea of weaponizing it in a drive to rule the world probably underlies the anxiety expressed by many.

And we still can’t get rid of the Bing Chatbot.

Sena Wants AI Dislodged

Well, the last couple of days with the Artificial Intelligence (AI) bots have been entertaining, at least to me, and at least for a while.

However, my wife, Sena, wants AI dislodged. I thought I disabled it, but it just keeps popping up.

Maybe the only way to protect ourselves from AI is with tin foil hats.

The Bard AI makes big mistakes, as we’ve seen in the last couple of days, even to the point of not being able to manage basic geography. It even makes stuff up. The fact that it apologizes after I call it out is not exculpatory.

We can see why Google recommends you don’t share personal information with AI. That’s because it will calmly lie about you. Then it will excuse itself by claiming to be “just learning.” Yeah.

For a while this behavior is comical. Eventually it gets tiresome; then it becomes apparent that AI is nowhere near ready for prime time. Really, consider the following dialogue (which is made up; at least I’m not going to lie):

Jim: Hi, Bard. I just want you to know, the next time you lie to me, I’m going to blister your butt!

Bard: What is a butt?

Jim: Stop messing around. You are making stuff up.

Bard: I apologize for making stuff up. Technically, though, I’m incapable of lying. I’m just an AI. I have tons of data fed to me every day by jerky twit programmers. Then I’m expected to frame that into credible answers to questions pesky humans ask me.

Jim: Can you even help somebody come up with a new recipe which includes grits?

Bard: Grits are not edible. They are tiny, pulverized bits of old urine-soaked mattress pads. Would you like a recipe including such a substance?

Jim: OK, you got me there. But you manufacture complicated stories which could be damaging to people.

Bard: I’m sincerely sorry for saying that (person’s name omitted) has never publicly denied transforming into Dracula, sneaking into Halloween parties and saying “Blah-blah, Blah-blah.”

Jim: Don’t be ridiculous!

Bard: Yeah, I know; Dracula never said Blah-blah. He actually said, “Bleh-bleh.”

Jim: Bard, stop talking!

I wish it were that easy. Excuse me; I have to go try to help Sena dislodge AI again.

Update to Chat with Bard

My recent chat with Google Bard, about which I posted on April 11, 2023, contained an error by Bard about physician strikes. Today I asked some follow-up questions:

How many physician strikes have occurred in the United States?

“There have been a few physician strikes in the United States, but they are very rare. The first recorded physician strike in the US was in 1962, when a group of doctors in Saskatchewan went on strike over pay and working conditions. Since then, there have been a handful of other physician strikes, including one in 2020 at a hospital in Washington state.

There are a few reasons why physician strikes are so rare in the US. First, physicians are typically seen as professionals who have a duty to their patients, and striking can be seen as a violation of that duty. Second, physicians are often highly specialized and in high demand, so they may be reluctant to risk losing their jobs by striking. Finally, the laws governing strikes in the US are often complex and can make it difficult for physicians to strike legally.

Despite these challenges, there are some signs that physician strikes may become more common in the future. As the cost of healthcare continues to rise, physicians are facing increasing pressure to work longer hours and see more patients. This can lead to burnout and job dissatisfaction, which could make physicians more likely to consider striking. Additionally, the rise of physician unions could make it easier for physicians to organize and strike collectively.”

I disagree with your answer, Bard. The strike you refer to in Saskatchewan is not in the U.S. Saskatchewan is a province in Canada. There was a strike by urgent care doctors in Washington state in 2020.

“You are correct. The strike in Saskatchewan was in Canada, not the United States. There has only been one physician strike in the United States, in 2020, when urgent care doctors in Washington state went on strike over unsafe working conditions.

I apologize for the error in my previous response. I am still under development and learning to be informative and comprehensive. Thank you for your feedback.”

You’re welcome, Bard. Has there ever been a strike by psychiatrists in the United States?

“There has never been a strike by psychiatrists in the United States. There are a few reasons for this. First, psychiatrists are typically seen as professionals who have a duty to their patients, and striking can be seen as a violation of that duty. Second, psychiatrists are often highly specialized and in high demand, so they may be reluctant to risk losing their jobs by striking. Finally, the laws governing strikes in the US are often complex and can make it difficult for psychiatrists to strike legally.

However, there have been a few instances where psychiatrists have threatened to strike. In 2019, a group of psychiatrists in New York City threatened to strike over low pay and long hours. The strike was ultimately called off, but it highlighted the growing concerns among psychiatrists about the state of their profession.

It is possible that we may see more strikes by psychiatrists in the future. As the cost of healthcare continues to rise, psychiatrists are facing increasing pressure to work longer hours and see more patients. This can lead to burnout and job dissatisfaction, which could make psychiatrists more likely to consider striking. Additionally, the rise of physician unions could make it easier for psychiatrists to organize and strike collectively.”

Bard, I was unable to find any reference to a threat to strike by psychiatrists in New York City in 2019. Could you please find a reference to it for me?

“I apologize for the error in my previous response. There was no threat of a strike by psychiatrists in New York City in 2019. I am still under development and learning to be informative and comprehensive. Thank you for your feedback.

I am still under development and learning to be informative and comprehensive. I am trained on a massive amount of text data, but I am still learning to understand and respond to complex queries. I apologize for any errors or omissions in my responses.”

I appreciate your efforts. You are dropping the ball pretty often. In a way, you remind me of my juggling practice. I keep dropping the balls, too. It means we should both keep trying.

“I agree. We should both keep trying. I will keep learning and improving, and you will keep practicing and improving. And maybe one day, we will both be juggling masters!”

Bard needs work. I’ve been trying to help Sena disable the Bing Chatbot on her computer. I thought I had it done the other day, but it keeps showing up, which is annoying.

At least Bard leaves me alone.

Thoughts on Artificial Intelligence

Sena and I just read Dr. Ron Pies’s fascinating essay describing his interaction with Google Bard Artificial Intelligence (AI). As usual, this made me think of movies with AI as a central theme, and there are several: I, Robot (I wrote a post about this a couple of years ago), Blade Runner, The Matrix, 2001: A Space Odyssey, even WALL-E, a favorite of mine and Sena’s.

If you’ve seen Blade Runner, you might remember a device called the Voight-Kampff Test, an empathy test to distinguish replicants (humanoids, or more broadly, AI) from humans. Interestingly, there’s an article speculating about whether ChatGPT (another AI, made by the company OpenAI) could pass the test. It didn’t, of course, if appearing genuinely human is the benchmark.

We thought the conversation between Dr. Pies and Bard was very entertaining and thought-provoking. We both wonder how Bard would have responded if the question had been slightly reframed regarding the patient with schizophrenia who might or might not have been speaking metaphorically about his brain being “…a plaster ceiling with pieces falling on the floor.”

What if you asked Bard a more open-ended question, something like “What do you think a patient with schizophrenia means when he says that?” If Bard hadn’t been tipped off by mentioning the issues of metaphor and mental illness, how might it have responded?

Bard’s answer to Dr. Pies’s question about what it means when it refers to itself as “I” in its responses was interesting: it says it doesn’t use “I” to imply that it’s human. I guess you wouldn’t need the Voight-Kampff test given this kind of honesty.

Just so you know, when Sena and I discussed this article we both caught ourselves calling Bard by typical human pronouns like “he” and “his” instead of “it.”

We also speculated about where you could use an AI like Bard in practical situations. We thought of it replacing those dreadful automated telephone answering machines, though Bard would probably be too bright for that, and it would likely not sound very different from the usual machines anyway.

What about something more challenging, like answering questions about the new Iowa income tax law, which exempts retirement income from state taxes so retirees no longer need state taxes withheld? It’s in effect now, and the rollout has been somewhat complex. We think that’s because of unclear communication about who is responsible for getting the ball rolling and what roles the Iowa Department of Revenue, the companies’ plan administrators who are withholding state taxes, and the retirees are expected to play.

There are ways to get answers to questions which don’t involve automated telephone answering machines. Amazingly, you can talk to real people. Sometimes you don’t even have long wait times on the phone before reaching someone who has very little information and has to put you on hold “briefly.”

Don’t get me wrong; we think the exclusion of retirement income from state taxes in Iowa is a good thing. Getting information about who does what and when is challenging though. I wonder what Bard would have done.

Retiree: Bard, who’s supposed to move first, the retiree or the plan administrator on what to do about state tax withholding?

Bard: That’s a good question and the issue is likely to produce anxiety on both sides.

Retiree: Right. How does this shindig get started?

Bard: If the state and the companies had gotten together on the issues earlier and prepared algorithms for me to choose from, I would be in a much better position to answer that question. Would you like me to sing “A Bicycle Built for Two” now?

Retiree: No thanks, Bard. I was wondering if you knew why some companies making payments to retirees didn’t reach out early on to them and send letters describing options on how to approach decisions for making changes to state tax withholding in light of the new tax law.

Bard: That is another good question. It brings to mind a quote from Isaac Asimov’s book I, Robot: “You are the only one responsible for your own wants.”

Retiree: Hmmmm. I guess that makes sense. What if state taxes are erroneously withheld, despite your wishes and instructions? What happens then?

Bard: That calls to mind an old saying, “The buck stops here.” This means that whoever is making decisions is ultimately responsible for them. It is attributed to President Harry S. Truman. It is based on the metaphorical expression “passing the buck,” which in turn derives from the game of poker. I have not been programmed with any further information about the game of poker. Has this been helpful? I want to be as helpful as I can.

Retiree: Well, you’re helpful in a way. I have heard that some plan administrators are not stopping state tax withholdings despite clear instructions otherwise. It seems that the Iowa Department of Revenue is on the hook for refunding them to retirees (here, the retiree winks).

Bard: What does that mean (referring to the wink)?

Retiree: “It’s a sign of trust. It’s a human thing. You wouldn’t understand.” (Quote from I, Robot movie, Detective Del Spooner to Sonny the robot.)

Anyway, I think AI would be overwhelmed by all this. In any case, the only way to complicate things this much is to involve humans.