Fluoride in Your Precious Bodily Fluids

Yesterday, Sena and I talked about a recent news article indicating that a federal judge ordered the Environmental Protection Agency (EPA) to review the allowed level of fluoride in community water supplies. According to the article, the advocacy groups who brought the issue before the judge argue that the current limit may not be low enough.

A few other news items highlighted the role of politicians on this issue, which seems to come up every few years. One thing led to another, and I noticed a few other web stories about the divided opinions on fluoride in “your precious bodily fluids.” One of them is a comprehensive review published in 2015 outlining the complicated path of scientific research on the topic; there are passionate advocates on both sides of the question of whether to fluoridate city water. The paper is “Debating Water Fluoridation Before Dr. Strangelove” (Carstairs C. Debating Water Fluoridation Before Dr. Strangelove. Am J Public Health. 2015 Aug;105(8):1559-69. doi: 10.2105/AJPH.2015.302660. Epub 2015 Jun 11. PMID: 26066938; PMCID: PMC4504307).

This of course led to our realizing that we’d never seen the film “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb,” a satire on the Cold War. We watched the entire movie on the Internet Archive yesterday afternoon. The clip below shows one of the funniest scenes, a dialogue between General Jack D. Ripper and RAF Group Captain Lionel Mandrake about water and fluoridation.

During my web search on the fluoridation topic, I noticed the Artificial Intelligence (AI) summary at the top of the results, whose first line about the film’s plot read: “In the movie Dr. Strangelove, the character Dr. Cox suggests adding fluoride to drinking water to improve oral health.” Funny, I don’t remember a character named Dr. Cox in the film, nor any recommendation about adding fluoride to drinking water to improve oral health. Peter Sellers played three characters, none of them named Cox.

I guess you can’t believe everything AI says, can you? In debates about the trustworthiness of AI, that’s called “hallucinating.” I’m not sure what you call it when politicians say things you can’t immediately check the veracity of.

Anyway, one Iowa expert who regularly gets tapped by reporters about this is Dr. Steven Levy, a professor of preventive and community dentistry at the University of Iowa. He leads the Iowa Fluoride Study, a longitudinal study that has been going on for decades. In short, Dr. Levy says fluoride in water supplies is safe and effective for preventing tooth decay as long as the level is adjusted within safe margins.

On the other hand, others say fluoride can be hazardous and could cause neurodevelopmental disorders.

I learned that, even in Iowa, there’s disagreement about the health merits vs. risks of fluoridated water. Decisions about whether or not city water supplies are fluoridated are generally left to local communities. Hawaii has the lowest fluoridation rate in the nation; almost none of its community water systems add fluoride. About 90 percent of Iowa’s cities fluoridate their water. Tama, Iowa, stopped fluoridating in 2021; after a brief period of public education, it resumed only six months later.

We use a fluoridated dentifrice and oral rinse every day. We drink fluoridated water, which we offer to the extraterrestrials who occasionally abduct us, but they politely decline because of concern about their precious bodily fluids.

Dirty Deepfakes

I saw an article about how unreliable humans are at detecting digital deepfakes in audio and video productions (Mai KT, Bray S, Davies T, Griffin LD. Warning: Humans cannot reliably detect speech deepfakes. PLoS One. 2023 Aug 2;18(8):e0285333. doi: 10.1371/journal.pone.0285333. PMID: 37531336; PMCID: PMC10395974).

I was a little surprised. I thought I was pretty good at detecting the weird cadence of Artificial Intelligence (AI) speech. Maybe not.

And some experts are concerned that AI can mimic written and spoken grammar convincingly while it continues to make stuff up (those “hallucinations” again). In fact, some research suggests that AI can display great language skills but can’t form a true model of the world.

And the publisher of the book (“Psychosomatic Medicine: An Introduction to Consultation-Liaison Psychiatry”) that my co-editor, Dr. Robert G. Robinson, and I wrote 14 years ago is still sending me requests to sign a contract addendum that would allow the text to be used by AI organizations. I think I’m the only one who gets the messages, because they’re always addressed to me and Bob, as though Bob lives with me or something.

Sometimes my publisher’s messages sound like they’re written by AI. Maybe I’m just paranoid.

Anyway, this reminds me of a blog post I wrote in 2011, “Going from Plan to Dirt,” which I re-posted last year under the title “Another Blast from the Past.” The context is slightly different now, although the point still applies. Simply put, I don’t think AI can distinguish plan from dirt, and it sometimes makes up the dirt.

And if humans can’t distinguish the productions by AI from those of humans, where does that leave us?

AI Does Your Laundry

Recently we had somebody from the appliance store check our brand-new washing machine. The tech said “the noises are normal”—and then told us that many of the functions of the washer are run by Artificial Intelligence (AI). That was a new one on us.

Don’t get me wrong. The washer works. What sticks in the craw a little is that many of the settings we took for granted as being under our control are basically run by AI nowadays. I guess that means you can override some of the AI assist settings (which may be adjusted based on grime level, type of fabrics and the relative humidity in Botswana)—at least the ones not mandated by the EPA.

Incidentally, I tried to find some free images to use as featured images for this post. The problem is, many free pictures on the web are generated by AI these days, which is why I used the non-AI part of the Microsoft Paint app to make a crude drawing of an AI controlled washing machine.

I realize I’ll have to give up and accept the inevitable takeover of much of human society by AI. On the other hand, the prospect reminds me of the scene in an X-Files episode, “Ghost in the Machine.” A guy gets exterminated by something called the Central Operating System (COS).

Use extra detergent and add more water at your own risk.

Can Artificial Intelligence Learn from My Book?

Recently the publisher of a book co-edited by me and my former psychiatry chair, Dr. Robert G. Robinson, asked me to sign off on a proposal that would allow Artificial Intelligence (AI) organizations to use the work.

The book, “Psychosomatic Medicine: An Introduction to Consultation-Liaison Psychiatry,” is 14 years old now, but is still a practical tool for learners, at least in my opinion.

Interestingly, it looks like the publisher is also trying to contact Dr. Robinson about the proposal through me. I wonder if that means they’re having as much trouble finding him as I have.

I’ve made it clear that I have misgivings about AI, as I’ve indicated in at least one blog post about Google’s AI, which used to be called Bard and which I think has been renamed Gemini. I think AI is prone to spreading misinformation, which some writers have called “hallucinations.”

The publisher makes it clear that this proposal regarding AI involvement in our book is an “opt in” choice. I gather that means if I don’t opt in, they’ll continue to bug me about it until I do.

That’s unlikely to happen.

About That Artificial Intelligence…

I’ve got a couple of things to get off my chest about Artificial Intelligence (AI). By now, everyone knows about AI telling people to put glue on pizza and whatnot. Sena and I talked to a guy at an electronics store who had nothing but good things to say about AI. I mentioned the glue-on-pizza thing and it didn’t faze him.

I noticed the Psychiatric Times article, “AI in Psychiatry: Things Are Moving Fast.” The authors mention the tendency of AI to hallucinate and express appropriate reservations about its limitations.

And then I found something very interesting about AI and Cribbage. How much does AI know about the game? Turns out not much. Any questions? Don’t expect AI to answer them accurately.

Plant-Based Cheese Made with Artificial Intelligence Is Only the Beginning!

We tasted plant-based cheese by Kraft yesterday. Sena bought it at Hy-Vee the other day. It’s actually not bad. The company is called Kraft NotCo. They make Not Cheese. It’s made with chickpeas, which are the same thing as garbanzo beans. You can also buy plant-based mayo, called Not Mayo. I don’t know if it’s made with chickpeas.

Sena could have got Not Mayo; instead, she got Miracle Whip—a miracle by itself because she likes “real” mayo.

What’s really interesting about these products is how they’re made. On the Kraft Heinz NotCo website, you’ll find a description of these products in the About section entitled “Not Your Average Joint Venture.” One line is thought-provoking:

“Our partnership reimagines the brands you love from Kraft Heinz using proprietary AI from NotCo to give you the plant-based version of your favorite foods that deliver on taste and performance.”

I’m assuming that AI stands for Artificial Intelligence (not “Absolutely Inedible”). So, how did Artificial Intelligence get involved? What does the AI actually do? Does it come up with the recipes for Not Foods? Are tiny bits of genetic code and nanobots involved?

Does this mean we’ll become enslaved by AI powered men in black who conspire with extraterrestrials to collect human embryos to create the giant Cheese Bots who take over the earth, making it a gigantic assembly line for smartphones that will make it easier to butt dial your congress persons to demand more laws making Homeowners Association covenants mandatory and violators punishable by the giant garbage goblin in the well-known X-Files documentary “Arcadia”?

No; no, it does not mean that. You can safely eat AI manufactured chickpea products without fear of being transformed into an Extraterrestrial-Robot-Not Cheese hybrid super soldier marching on Washington, D.C. to force feed congress persons with Braunschweiger and Not Cheese Sandwiches with Not Mayo on Not Wheat Bread and Not Lemonade.

I kind of like Not Cheese and I don’t feel any different.

The Dragon Breathes Fire Again

The other day, Sena and I saw a news video about a technology called “DAX,” which uses Artificial Intelligence (AI) and promises to reduce or even eliminate pajama time for physicians, letting them finish their clinical note dictations during the day instead of taking them home for several more hours of work.

The video was a demo of the technology, which looked like it recorded a clinical interview between the doctor and the news reporter. It not only records the interview but transcribes it. I didn’t see exactly how DAX was recording without obvious audio equipment. Was it using the smartphone’s microphone? This was very different from how I and many other clinicians dictated our notes, wearing headsets at our desks in front of our desktop computers.

Later, I discovered that DAX stands for Dragon Ambient eXperience, made by Nuance, which was acquired by Microsoft in 2022. I posted about Dragon products and their limitations last year. The product often produced hilarious mistakes during dictation, which required careful editing. Sometimes more errors turned up after a note was completed; these were visible in the patient’s medical record and then had to be corrected.

Several years ago, I remember talking to somebody from Dragon on the phone about the problems I was having. She was a little defensive when I told her I’d been having difficulty with Dragon for quite a while because it made so many mistakes.

A recent article on the web revealed that the errors continue with DAX. According to the article, “…it will make mistakes. Sometimes it might omit clinical facts; sometimes it may even hallucinate something.” I remember trying to communicate with the Google Bard AI, which seemed to do this pretty often. It made stuff up.

DAX is not cheap. The article reveals that one hospital pays $8,000-$10,000 per year per physician to use it. And skeptics worry that the system still has too many bugs, which can lead to bias and inaccurate information that could negatively affect patient outcomes.

A recently published JAMA article also urges caution in adopting this sort of AI-assisted technology (Harris JE. An AI-Enhanced Electronic Health Record Could Boost Primary Care Productivity. JAMA. Published online August 7, 2023. doi:10.1001/jama.2023.14525).

In this case, I think it’s appropriate to say “I told you so.”

We Are All Still Learning to Play Pong

I noticed an article the other day about Monash University in Australia getting funding for further research into growing brain cells onto silicon chips and teaching them how to play cribbage.

Just kidding; the research is about teaching the modified brain cells tasks. Last year, they succeeded in teaching the cells goal-directed tasks like playing the tennis-like game Pong. You remember Pong from the 1970s? Shame on you if you don’t. On the other hand, that means you probably didn’t frequent any beer taverns in your hometown while you were growing up, or that you’re just too young to remember.

The new research program, a collaboration with the company Cortical Labs, has hundreds of thousands of dollars in funding. The head of the program, Dr. Razi, says it combines Artificial Intelligence (AI) and synthetic biology to make programmable biological computing platforms, which will take over the world and bring back Pong!

It’s an ambitious project. The motto of Monash University is Ancora Imparo, which is Italian for “I am still learning.” It links humility and perseverance.

There’s a lot of suspicion out there about AI and projects like the Pong initiative in Australia. It could eventually grow into a vast industry run by robots powered by a simple fuel called Vegemite.

Shame on you if you don’t know what Vegemite is!

Anyway, it reminds me that I recently finished reading Isaac Asimov’s book of science fiction short stories, “I, Robot.”

The last two stories in the book are intriguing. Both “Evidence” and “The Evitable Conflict” are generally about the conflict between humans and AI, which is a big controversy currently.

The robopsychologist, Dr. Susan Calvin, is very much on the side of AI (I’m going to use the term synonymously with robot) and thinks a robot politician would be preferable to a human one because of the requirement for the AI to adhere to the Three Laws of Robotics, especially the first one, which says a robot may never harm a human or, through inaction, allow a human to come to harm.

In the story “Evidence,” a politician named Stephen Byerley is suspected of being a robot by his opponent. The opponent tries to legally force Byerley to eat Vegemite (joke alert!) to prove the accusation, based on the idea that robots can’t eat. This leads to an examination of the argument about who would make better politicians: robots or humans. Byerley at one point asks Dr. Calvin whether robots are really so different from men, mentally.

Calvin retorts, “Worlds different… Robots are essentially decent.” She, Dr. Alfred Lanning, and the other characters are always cranky with each other. They stare savagely at one another and yank at their mustaches so hard you wonder whether a mustache will eventually be ripped from a face. That doesn’t happen to Calvin; she doesn’t have a mustache.

At any rate, Calvin draws parallels between robots and humans that render them almost indistinguishable from each other. Human ethics, the drive for self-preservation, and respect for authority and law make us very much like robots, such that being a robot could imply being a very good human.

Wait a minute. Most humans behave very badly, right down to exchanging savage stares with one another.

The last story, “The Evitable Conflict,” was difficult to follow, but the bottom line seemed to be that the Machine, a major AI that is always learning, controls not just the world’s goods and services but its social fabric as well, while keeping this a secret from humans so as not to upset them.

The end result is that the economy is sound, peace reigns, the vegemite supply is secure—and humans always win the annual Pong tournaments.

Can Robots Lie Like a Rug?

I’ve been reading Isaac Asimov’s book “I, Robot,” a collection of short stories about the relationship between humans and robots. One very thought-provoking story is “Liar!”

One prominent character is Dr. Susan Calvin. If you’ve ever seen the movie “I, Robot,” you know she’s cast as a psychiatrist whose job is to help humans be more comfortable with robots. In the book she’s called a robopsychologist. She’s a thorough science nerd and yet goes all mushy at times.

The news lately has been full of scary stories about Artificial Intelligence (AI), and some say these systems are dangerous liars. Well, I think robots are incapable of lying, but Bard, the Google AI, did sometimes seem to lie like a rug.

In the story “Liar!” a robot somehow gets telepathic ability. At first, the scientists and mathematicians (including the boss, Dr. Alfred Lanning) doubt the ability of robots to read minds.

But a paradoxical situation occurs with the robot who happens to know what everyone is thinking. This has important consequences for complying with the First Law of Robotics, which is to never harm a human or, through inaction, allow a human to come to harm.

The question arises of what kinds of harmful things robots should protect humans from. Is it just physical dangers, or could it be psychological harms as well? And how would a robot protect humans from mental harm? If a robot could read our thoughts and figure out that our thoughts are almost always harmful to ourselves, what would be the protective intervention?

Maybe lying to comfort us? We lie to ourselves all the time and it’s difficult to argue that it’s helpful. It’s common to get snarled in the many lies we invent in order to feel better or to help others feel better. No wonder we get confused. Why should robots know any better and why wouldn’t lies be their solution?

I can’t help but remember Jack Nicholson’s line in the movie “A Few Good Men.”

“You can’t handle the truth!”

Dr. Calvin’s solution to the lying robot’s effort to help her (yes, she’s hopelessly neurotic despite being a psychologist) is a little worrisome. Over and over, she emphasizes the paradox of lying to protect humans from psychological pain when the lies actually compound the pain. The robot then has the AI equivalent of a nervous breakdown.

For now, we’d have to be willing to jump into an MRI machine to allow AI to read our thoughts. And even then, all you’d have to do is repeat word lists to defeat it. So AI is unlikely to lie to us to protect us from psychological pain.

Besides, we don’t need AI to lie to us. We’re good at lying already.

Maybe I Should Be More Optimistic About Humans

I read the Psychiatric Times article “How Psychiatry Has Enriched My Life: A Journey Beyond Expectations” by Victor Ajluni, MD, published on July 4, 2023. It was like a breath of fresh air to read an expression of gratitude. Just about everything I read in the news is negative.

At the end of the article, Dr. Ajluni added a comment acknowledging that artificial intelligence (AI ChatGPT) assisted him in writing it. He takes full responsibility for the content, to be sure. I wouldn’t have guessed that AI was involved.

There’s a lot of negative stuff in the news, and the headlines about AI are especially hysterical and alarming.

I suppose you could wonder whether Dr. Ajluni’s article is intentionally ironic, maybe just because its tone of gratitude is so positive. If it had been intended as irony, though, what could the AI contribution have been? I have a pretty low opinion of AI’s capacity for irony.

I think irony occurs to me only because I tend to be pessimistic about the human race.

Maybe that’s because it has been very easy to be pessimistic about the direction human nature seems to be taking in recent years. I’ve been reading Douglas Adams’ satirical book, “The Ultimate Hitchhiker’s Guide to the Galaxy.” It collects several of his novels, which I think are really about human nature, set in a funny though often terrifying universe. I think the ironic tone softens the pessimism. The most pessimistic character is not a human but a robot, Marvin the Paranoid Android.

Unlike Marvin, I don’t have “a brain the size of a planet” (mine is more the size of a chickpea), but I am getting a bit cynical about the universe. I’m prone to regarding humans as evolving into a race of beings similar to those described in the book “Life, the Universe and Everything.” In Chapter 24, Adams describes the constantly warring Silastic Armorfiends of Striterax.

The Silastic Armorfiends are incredibly violent. Their planet is in ruins because they’re constantly fighting their enemies, and indeed, each other. In fact, the best way to deal with a Silastic Armorfiend is to lock him in a room by himself—because eventually he’ll just beat himself up.

In order to cope better, they tried punching sacks of potatoes to get rid of aggression. But then, they thought it would be more efficient to simply shoot the potatoes instead.

They were the first race ever to shock a computer: Hactar. Possibly, Hactar was an AI because, when they told it to make the Ultimate Weapon so they could vanquish all their enemies, Hactar was shocked. Hactar secretly made a tiny bomb with a built-in flaw that made it harmless when the Silastic Armorfiends set it off. Hactar explained “…that there was no conceivable consequence of not setting the bomb off that was worse than setting it off…,” which was why it made the bomb a dud. While Hactar was explaining that it hoped the Silastic Armorfiends would see the logic of this course of action, they destroyed Hactar, or at least thought they had.

Eventually, they found a new way to blow themselves up, which was a relief to everyone in the galaxy.

There are similarities between Hactar and the AI called Virtual Interactive Kinetic Intelligence (V.I.K.I.) in the movie “I, Robot.” The idea was that robots must control humans because humans are so self-destructive; only that meant robots had to hurt humans in order to protect humanity. The heroes who eventually destroy V.I.K.I. make up a team of misfits: a neurotic AI named Sonny, a paranoid cop who is himself a mixture of robot and human, and a psychiatrist. Together, the team finally discovers the flaw in V.I.K.I.’s logic. Of course, this leads to the destruction of V.I.K.I., but also to the evolution of Sonny, who learns the power of the ironic wink.

Maybe kindness is the Ultimate Weapon.