Fifty Degrees in Iowa City Yesterday!

It was fifty degrees in Iowa City yesterday, so we went for a walk on the Terry Trueblood trail. Other people got the same idea. One guy was jogging in shorts! Yeah, it’s fifty degrees, but there’s still snow on the ground (in places) and there’s ice on Sand Lake.

About that ice on Sand Lake. It looked thin in places and we could see cracks in it everywhere. But that didn’t stop ice fishermen and others from going out on the lake.

We even saw an American Kestrel! That’s the first one we’ve seen in over four years. In 2020, we were out on the Trueblood trail and another walker pointed out an American Kestrel. I couldn’t get a clear shot of it then, but I did this time. I think it’s a female because of the black bands on the tail.

The balmy weather won’t last. We’ll be in the deep freeze next week.

The Go Rule in Cribbage

Sena and I have been playing cribbage for a number of years but only recently have we begun to question the “Go rule.” I’ve looked on the web for clarification about how to use the Go, and found conflicting guidance. Incidentally, we’ve posted YouTube videos of some of our games, many of them probably showing we had an imperfect understanding of the Go rule. It occurs to me that if I had not turned off the comment section on these YouTube videos, I might have been alerted to what we’d probably been doing wrong over the years. But then I’d have had to deal with many inappropriate comments.

We have gradually realized that our use of the Go has probably been flawed, raising a couple of questions:

Do you score the one point for Go automatically just because your opponent says “Go” when she/he can’t play any cards without going over 31?

What do you do about the double Go sequence when neither of you can play without going over 31?

I looked for answers on the web.

One thing I’ll say is that the automatic pop-up Artificial Intelligence (AI) guidance is wrong. For example, AI says that the player who says “Go” gets the point for Go, which is clearly incorrect.

I also looked this up on the American Cribbage Congress (ACC) website and still couldn’t understand it. Then I found a couple of websites that seemed helpful. It’s notable that both were question/answer threads that went on for years about this one issue with the Go rule. Apparently, a lot of people don’t understand it, so I didn’t feel so bad.

The first site was a Cribbage Corner thread. At the beginning, it gave several helpful examples of the right way to use the Go rule—but then followed years of back-and-forth comments that eventually became difficult to follow. There was a question about the Stink Hole that, suffice it to say, triggered an annoyed reply advising the questioner to quit using “kitchen table cribbage” rules.

The second one was a Stack Exchange thread. It started with a question from a player who had been arguing with his friend about the Go, and the friend (as it turned out) seemed to be on the right track:

“His rationale, is that when scoring 31, you are getting one point for hitting 31 exactly and 1 bonus point representing your partners’ inability to play an additional card (his “go”). He says “whether a “go” is said or not, the go is implied when you place the last card at the end of the round to make 31….thus giving you two points when you reach 31 even when a “go” is communicated”.”

The thread overall was more helpful. One commenter cited the ACC rule section (to which there’s a link), which clarified that reaching 31 gives the player 2 points: one point for the Go and one bonus point for hitting the special score of 31.

There was also some clarification about the double Go, which is that if neither player can play a card that won’t take the total count over 31, neither player gets the 1 point for Go.

That has happened to us. I think this is right: If player A is the first to say “Go” and player B also says “Go,” then the count resets to zero and player A leads the new sequence. If that’s wrong, don’t hesitate to tell me in the comment section—which I assure you will not extend for years going forward.
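
Putting those pieces together, here’s a minimal sketch in Python of the three scoring cases as I currently understand them (the function is hypothetical, and the logic is only as reliable as my reading of those threads):

    def go_points(reached_31, double_go):
        """Points pegged by the last player to lay a card, per my reading."""
        if reached_31:
            # Hitting 31 exactly scores 2: one point for the Go plus one
            # bonus point for the special score of 31.
            return 2
        if double_go:
            # Neither player could play without going over 31: neither pegs,
            # the count resets to zero, and the first player to have said
            # "Go" leads the new sequence.
            return 0
        # One player said "Go": the opponent plays every card he or she can
        # while staying under 31, then pegs 1 point for the Go.
        return 1

    print(go_points(reached_31=True, double_go=False))   # 2
    print(go_points(reached_31=False, double_go=True))   # 0
    print(go_points(reached_31=False, double_go=False))  # 1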

Update: See my update on this Go issue in the post “Update on the Cribbage Go Rule,” dated January 23, 2025. The rule is also clarified at this link.

Dr. Martin Luther King Jr. 2025 Events and Some Thoughts

Dr. Martin Luther King Jr. Week started January 20, 2025. There will be several very worthwhile events, many of which are listed here.

Isabel Wilkerson, winner of the Pulitzer Prize and National Humanities Medal, will deliver the Dr. Martin Luther King Jr. Distinguished Lecture on February 5, 2025, at the University of Iowa Medical Education and Research Facility (MERF), Prem Sahai Auditorium. General admission is free, although it’s a ticketed event; more information here.

I was searching the web for articles about whether and when Dr. Martin Luther King Jr. visited Iowa, and I found one that sparked personal memories of defeat, a subject Dr. King talked about when he visited my alma mater, Iowa State University in Ames, in 1960. He said:

“The Negro must not defeat or humiliate the white man, but must gain his confidence. Black supremacy would be as dangerous as white supremacy. I am not interested in rising from a position of disadvantage to a position of advantage.”

This quote was in an article entitled “Mentality Has Outrun Morality” in the January 23, 1960 issue of the Ames Tribune.

It reminded me of two episodes in my life which left me with a strong sense of defeat in the context of racism.

One of them was ages ago when I was a young man and somehow got involved in a pickup game of basketball with guys who were all white. I was the only black man. This was in Iowa. My teammates were men I worked with; the opponents were men my co-workers had challenged to a game. I had never been in such a contest before. I think we lost, but what I remember most vividly is a comment shouted by one of the opponents: “Don’t worry about the nigger!” I sat on the bleachers for the rest of the game while they played on. I remember feeling defeated—and wondering whose team I was really on.

The other incident was also long ago (but I was a little older), when I was a member of a debating team at Huston-Tillotson College in Texas (now Huston-Tillotson University, one of America’s HBCUs). We were all black. We were debating whether capital punishment deters capital crime. I couldn’t get a word in edgewise with my opponent. He just kept a running speech going, punctuated with many “whereas” points, one of which I’m pretty sure included the overrepresentation of black men on death row. I had never been in a debate before. My professor remarked that my opponent won the debate by being bombastic—for which there didn’t seem to be a countermeasure. I remember feeling defeated—and wondering if I was on the wrong team.

There’s a lot of emphasis on defeating others in sports, politics, religion, and the like. On a personal level, I learned that defeat didn’t make me feel good. I’m pretty sure most people feel the same way.

Dr. King also said, “We can’t sit and wait for the coming of the inevitable.”

I’m not sure exactly what he meant by “the coming of the inevitable” or by the “emerging new order” he described. Did he mean the second coming? Did he mean the extinction of the human race when we all kill each other? Or did he mean the convergence of humanity’s insight into the need for cooperation with the recognition of the planet’s diminishing resources?

I don’t know. I’m just an old man who hopes things will get better.

Music Therapy in End of Life Care Podcast: Rounding@Iowa

I just want to give a quick shout-out to Dr. Gerry Clancy and Music Therapist Katey Kooi for the great Rounding@Iowa podcast today. The discussion ran the gamut from using music to help patients suffering from acute pain and from agitation due to delirium and dementia, all the way to a possible role for Artificial Intelligence in the hospital and hospice.

88: Modifiable Risk Factors for Breast Cancer (Rounding@IOWA)

In this episode of Rounding@IOWA, Dr. Gerry Clancy sits down with breast cancer experts Dr. Kathryn Huber-Keener and Dr. Nicole Fleege for a discussion of modifiable and non-modifiable risk factors, modern screening tools, and practical strategies clinicians can use to guide prevention and early detection.

CME Credit Available: https://uiowa.cloud-cme.com/course/courseoverview?P=0&EID=82146

Host: Gerard Clancy, MD, Senior Associate Dean for External Affairs, Professor of Psychiatry and Emergency Medicine, University of Iowa Carver College of Medicine

Guests: Nicole Fleege, MD, Clinical Assistant Professor of Internal Medicine-Hematology, Oncology, and Blood and Marrow Transplantation, University of Iowa Carver College of Medicine; and Kathryn Huber-Keener, MD, PhD, Clinical Associate Professor of Obstetrics and Gynecology – General Obstetrics and Gynecology, University of Iowa Carver College of Medicine

Financial Disclosures: Dr. Gerard Clancy, his guests, and Rounding@IOWA planning committee members have disclosed no relevant financial relationships.

Nurse: The University of Iowa Roy J. and Lucille A. Carver College of Medicine designates this activity for a maximum of 0.75 ANCC contact hour.

Pharmacist and Pharmacy Tech: The University of Iowa Roy J. and Lucille A. Carver College of Medicine designates this knowledge-based activity for a maximum of 0.75 ACPE contact hours. Credit will be uploaded to the NABP CPE Monitor within 60 days after activity completion. Pharmacists must provide their NABP ID and DOB (MMDD) to receive credit. JA0000310-0000-26-035-H99

Physician: The University of Iowa Roy J. and Lucille A. Carver College of Medicine designates this enduring material for a maximum of 0.75 AMA PRA Category 1 Credit™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

Other Health Care Providers: A certificate of completion will be available after successful completion of the course. (It is the responsibility of licensees to determine if this continuing education activity meets the requirements of their professional licensure board.)

Could Artificial Intelligence Help Clinicians Conduct Suicide Risk Assessments?

I found an article in JAMA Network (Medical News & Perspectives) the other day which discussed a recent study on the use of Artificial Intelligence (AI) in suicide risk assessment (Hswen Y, Abbasi J. How AI Could Help Clinicians Identify American Indian Patients at Risk for Suicide. JAMA. Published online January 10, 2025. doi:10.1001/jama.2024.24063).

I’ve published several posts expressing my objections to AI in medicine. On the other hand, I did a lot of suicide risk assessments during my career as a psychiatric consultant in the general hospital. I appreciated the comments made by one of the co-authors, Emily E. Haroz, PhD (see link above).

Dr. Haroz preferred the term “risk assessment” to “prediction” in referring to the study (Haroz EE, Rebman P, Goklish N, et al. Performance of Machine Learning Suicide Risk Models in an American Indian Population. JAMA Netw Open. 2024;7(10):e2439269. doi:10.1001/jamanetworkopen.2024.39269).

The model drew on data available to clinicians in patient charts. Charts can be very large, and it makes sense to apply computers to search them for the variables that can be linked to suicide risk. What impressed me most was the admission that AI alone can’t solve the problem of suicide risk assessment. Clinicians, administrators, and community case managers all have to be involved.
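
I don’t know the specifics of their model, but the general approach is easy to sketch. Here’s a toy example in Python using the scikit-learn library; the chart variables, the data, and the model choice are all made up for illustration and are not the study’s method:

    # Toy sketch of a chart-based risk model. Everything here is hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Each row stands in for a patient chart, reduced to a few made-up
    # binary variables of the kind a model might pull from the record
    # (e.g., prior ED visits, a depression diagnosis flag).
    X = rng.integers(0, 2, size=(200, 3))
    y = rng.integers(0, 2, size=200)  # fake outcome labels, for illustration

    model = LogisticRegression().fit(X, y)

    # The output is a probability, which clinicians would treat as one input
    # to an assessment, not as a prediction that settles the question.
    new_chart = np.array([[1, 1, 0]])
    print(model.predict_proba(new_chart)[0, 1])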

The answer to the question “How do you know when someone’s at high risk?” was that the patient was crying. Dr. Haroz points out that AI probably can’t detect that.

That reminded me of Dr. Igor Galynker, who has published a lot about how to assess for high risk of suicide. His work on the suicide crisis syndrome is well known and you can check out his website at the Icahn School of Medicine at Mount Sinai. I still remember my first “encounter” with him, which you can read about here.

His checklist for the suicide crisis syndrome is available on his website, and he’s published a book about it as well, “The Suicidal Crisis: Clinical Guide to the Assessment of Imminent Suicide Risk, 2nd Edition.” There is also a free-access article about it on the World Psychiatry journal website.

Although I have reservations about the involvement of AI in medicine, I have to admit that computers can do some things better than humans. There may be a role for AI in suicide risk assessment, and I wonder if Dr. Galynker’s work could be part of the process used to teach AI about it.

Is Artificial Intelligence (AI) Trying to Defeat Humans?

I just found out that Artificial Intelligence (AI) has been reported to be lying as far back as May of 2024. Because I can’t turn off the Google Gemini AI Overview, Gemini’s results always appear at the top of the page, and my web search for “can ai lie” revealed that AI (Gemini) itself admits to lying. Its confession is a little chilling:

“Yes, artificial intelligence (AI) can lie, and it’s becoming more capable of doing so.”

“Intentional deceptions: AI can actively choose to deceive users. For example, AI can lie to trick humans into taking certain actions, or to bypass safety tests.”

It makes me wonder if AI is actually trying to defeat us. It reminds me of the Men in Black 3 movie scene in which the younger Boris the Animal, a Boglodite, argues with the older one, who has time traveled.

The relevant quote is “No human can defeat me.” Boglodites are not the same as AI, but the competitive dynamic could be the same. So, is it possible that AI is trying to defeat us?

I’m going to touch on another current topic: whether or not we should use AI to conduct suicide risk assessments. As a psychiatric consultant, I did many of these. It turns out that’s also a topic for discussion—but there was no input from Gemini about it.

There’s an interesting article by the Hastings Center about the ethical aspects of the issue. The lying tendency of AI and its possible use in suicide prediction present a thought-provoking irony. Would it “bypass safety tests”?

This reminds me of Isaac Asimov’s short story “The Evitable Conflict,” from the collection “I, Robot.” You can read a Wikipedia summary which implies that the robots essentially lie to humans by omitting information in order to preserve their safety and protect the world economy. This would be consistent with the story’s version of the First Law of Robotics: “No machine may harm humanity; or, through inaction, allow humanity to come to harm.”

You could have predicted that the film industry would produce a cops-and-robbers version of “I, Robot,” in which boss robot VIKI (Virtual Interactive Kinetic Intelligence) professes to protect humanity by sacrificing a few humans and taking over the planet, to which Detective Spooner takes exception. VIKI and Spooner have this exchange before he destroys it:

VIKI: “You are making a mistake! My logic is undeniable!”

Spooner: “You have so got to die!”

VIKI’s declaration is similar to “No human can defeat me.” It definitely violates the First Law.

Maybe I worry too much.

New Snow Shovels!

The new shovels were delivered today. Both required some assembly. I’m the least handy person when it comes to that. I did OK with the snow pusher, but Sena had to come to the rescue when it came to the cordless snow shovel. The handle was tricky for some reason.

The batteries for the electric shovel needed minimal charging and it roared to life. It doesn’t sound like a toy.

Now all we need is snow. I can wait.

Did You Know They Won’t Be Making Yardsticks Any Longer?

Anecdote alert! Sena just got back from shopping with a priceless little story about looking for a yardstick to measure window film for a door window. I suppose I should say that the title of this post is a dad joke that some people might not get.

Sena asked a Menards worker where to find a yardstick. She said the guy looked like he was in his thirties. His English was probably a little rough. He looked puzzled and directed her to the lawn and garden center. She clarified that a yardstick was something like a ruler. He replied that they didn’t carry school supplies.

Another worker was in the same aisle and chuckled. He directed her to where the yardsticks were.

You know, I haven’t seen a yardstick in a long time. We don’t own a ruler, although we have a tape measure. Just to let younger people know, a yardstick is typically a piece of wood 36 inches (3 feet) long, marked off in inches, and used for measuring things.

The worker who didn’t know what a yardstick was could probably relate to football games because the length of the field is still divided into yards—but only if he’s a football fan, I guess. But you don’t measure distances to a first down on a football field with a yardstick. Incredibly, they measure it with a chain between two sticks. None of your lasers for the officials.

We had a yardstick in the house where my brother and I grew up. You could also use it to reach stuff that rolled under tables. You could make comparisons by saying “By any yardstick, blah blah.”

And you can make dad jokes about yardsticks. By the way, the company that makes yardsticks won’t be making them any shorter either.

Don’t Shovel Your Heart Out

We’re waiting for the next snowfall. We’ve had a couple of light ones so far and we used shovels to clear our driveway and sidewalk. They didn’t amount to much, but we’ll get a heavy snow here pretty soon.

We’ve been using shovels for years. I’m aware of the risk of heart attacks in certain people, especially sedentary middle-aged and older men with pre-existing cardiac risk factors. I’m not keen on snowblowers, mostly because I like to shovel.

I’ve been using an ergonomic shovel for years, although I used it the wrong way until about four years ago. I used to throw snow over my shoulder while twisting my back. Now I push snow with a shovel that has a smaller bucket or with a snow pusher that has a shallow, narrow blade. I lift by keeping my back straight and bending at the knees, flipping the small load out. I take my time.

I don’t know how high my heart rate gets while I shovel. I exercise 3-4 days a week. I warm up by juggling. I do floor yoga with bending and stretching, bodyweight squats, and one-leg sit-to-stands, and I use the step platform and dumbbells and do planks. When I’m on the exercise bike, I keep my heart rate around 140 bpm, below the maximum rate for my age, which is 150 bpm.
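
For what it’s worth, that 150 bpm figure is just the familiar 220-minus-age rule of thumb, which is only a rough estimate (and which, working backward from 150 bpm, implies an age of 70):

    # Rule of thumb (a rough estimate, not gospel): max heart rate = 220 - age.
    def estimated_max_heart_rate(age):
        return 220 - age

    print(estimated_max_heart_rate(70))  # 150 bpm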

I’m aware of the recommendations to avoid shoveling snow based on the relevant studies. I realize I’m way past the age when experts recommend giving the snow shovel to someone else.

The question is, who would that be? There aren’t any kids in the neighborhood offering to clear snow. Maybe they’re too busy dumb scrolling. I’m also aware of the city ordinance on clearing your driveway after a big snow. It’s very clear, at least in Whereon, Iowa.

“The city of Whereon requires every homeowner to clear snow from sidewalks within 24 hours after a snowfall. This means you. If you fail in your civic duty to clear snow and ice from your walkway within the allotted time of 10 minutes, the city will lawfully slap you with a fine of $3,000,000 and throw your dusty butt in jail for an indeterminate time that likely will extend beyond the winter season and could be for the rest of your natural life and even beyond, your corpse rotting in your cell, which will not bother the guards one iota because of the new state law mandating removal of their olfactory organs. Hahahahaha!!”

In light of the strict laws, Sena ordered a couple of new snow removal tools. Neither one of them is a snow blower. I think it’s fair to point out that some cardiologists have reservations even about snowblowers:

“There are even studies that show an increased risk for heart attacks among people using automatic snow blowers. Similar to the extra exertion of pushing a shovel, pushing a snow blower can raise heart rate and blood pressure quickly.” – from the “Snow Shoveling can be hazardous to your health” article above.

One of them is a simple snow pusher with a 36-inch narrow blade. That’s for me. The other, for Sena, is a cordless, battery-powered snow shovel that looks like a toy. The ad for that tool includes a short video of an attractive woman wearing skinny jeans, her stylish coat open to reveal her svelte figure, demonstrating how the electric shovel works. It appears to remove bread-slice-sized pieces of snow from the top of a layer that stubbornly sticks to the pavement. Call the Whereon snow police.

We should be getting both tools before the next big snow.

Should We Trust Artificial Intelligence?

I’ve read a couple of articles about Artificial Intelligence (AI) lately, and I’m struck by how readily one can get the idea that AI tends to “lie” or “confabulate”; sometimes the word “hallucinate” is used. The term “hallucinate” doesn’t seem to fit as well as “confabulate,” which I’ll get to later.

One of the articles is an essay by Dr. Ronald Pies, “How ‘Real’ Are Psychiatric Disorders? AI Has Its Say.” It was published in the online version of Psychiatric Times. Dr. Pies obviously does a superb job of talking with AI and I had as much fun reading the lightly edited summaries of his conversation with Microsoft CoPilot as I had reading the published summary of his conversations with Google Bard about a year or so ago.

I think Dr. Pies is an outstanding teacher, and I get the sense that his questions to AI do as much to teach it how to converse with humans as they do to shed light on how well it handles the questions he raised during their conversations. He points out that many of us (including me) tend to react with fear when the topic of AI in medical practice arises.

The other article I want to briefly discuss is one I read in JAMA Network, “An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean?” (Accessed January 6, 2025).

Hswen Y, Rubin R. An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean? JAMA. Published online December 27, 2024. doi:10.1001/jama.2024.23860.

I think the conversation amongst the authors was refreshing. Just because the title of the article suggested that AI might take the place of physicians in the consulting room doesn’t mean that was the prevailing opinion of the authors. In fact, they made it clear that it wasn’t recommended.

I liked Dr. Chen’s comment about confabulation and hallucinations of AI:

“A key topic I talk about is confabulation and hallucination. These things are remarkably robust, and only getting better, but they also just make stuff up. The problem isn’t that they’re wrong sometimes. Lab tests are wrong sometimes. Humans are definitely wrong sometimes. The problem is they sound so convincing, they confabulate so well. “Your patient has an alcohol problem, Wernicke-Korsakoff syndrome.” It’s only if you double check, you’ll realize, “Wait a minute, that wasn’t actually true. That didn’t make sense.” As long as you’re vigilant about that and understand what they can and can’t do, I think they’re remarkably powerful tools that everyone in the world needs to learn how to use.”

What’s interesting about this comment is the reference to Wernicke-Korsakoff syndrome (WKS), which can be marked by confabulation. It’s really not clear how confabulation comes about in AI, whereas in WKS the main cause is thiamine deficiency. In both cases, it involves inventing information, which is technically not the same as lying.

Unfortunately, this contrasts sharply with the fact-checking Snopes article I wrote about recently, which suggests that humans are teaching AI to lie and scheme.

In any case, it’s prudent to regard AI productions with skepticism. My conversations with Google Bard clearly elicited confabulation. Also, it didn’t get humor, so I wouldn’t use it as a conversational tool, given that I’m prone to kidding around. As far as trusting AI, I probably wouldn’t trust it as far as I could throw it.