Thoughts on Gaming Disorder

I just read an interesting article in the latest print issue of Clinical Psychiatry News, Vol. 51, No. 5, May 2023: “Gaming Disorder: New insights into a growing problem.”

This is news to me. The Diagnostic and Statistical Manual lists it primarily as an internet-related addiction. It can cause social and occupational dysfunction, and according to my search of the web it was added to the DSM-5 in 2013 as a condition for further study. I’m not sure why I never heard of it. Or maybe I did and just failed to pay much attention to it.

There are studies about treatment of the disorder, although most of them are not grounded in the concept of recovery. The research focus seems to be on deficits. One commenter, David Greenfield, MD, founder and medical director of the Connecticut-based Center for Internet and Technology Addiction, said that thirty years ago there was almost no research on the disorder. His remark about the lack of focus on recovery was simple but enlightening: “Recovery means meaningful life away from the screen.”

Amen to that.

That reminded me of the digital entertainment available thirty years ago. In 1993, the PC game Myst was released. Sena and I played it and were mesmerized by this simple point-and-click adventure game with intricate puzzles.

Of course, that was prior to the gradual evolution of computer gaming into massively multiplayer online role-playing games, first-person shooters, and the like. It sounds like betting is a feature of some of these games, which tends to increase their addictive potential.

Sena plays an old-time Scrabble game on her PC, along with other nearly vintage games. I have a cribbage game I could play on my PC, but I never do. I much prefer playing real cribbage with Sena on a board with pegs and a deck of cards. We also have a real Scrabble set and we enjoy it a lot. She wins most of the time.

This is in contrast to what I did many years ago. I had a PlayStation and spent a lot of time on it. But I lost interest in it after a while. I don’t play online games of any kind. I’m a little like Agent K in Men in Black II when Agent J was unsuccessfully trying to teach him how to navigate a spaceship using something that resembled a PlayStation controller:

Agent J: Didn’t your mother ever give you a Game Boy?

Agent K: WHAT is a Game Boy?

Nowadays, I get a big kick out of learning to juggle. You can’t do that on the web. I like to pick up the balls, clown around, and toss them high, which occasionally leads to knocking my eyeglasses off my head. I usually catch them.

Juggling is a lot more fun than playing Myst. I would prefer it to any massively multiplayer online game. I never had a Game Boy.

AI Probably Cannot Read Your Mind

I was fascinated by the news story about the study regarding the ability of Artificial Intelligence (AI) to “read minds.” Different stories told slightly different versions, depending on whether or not they included the authors’ caveats about the limitations of the AI. Recently there has been a spate of news items warning about the dangers of AI taking over mankind.

Not to diminish the strengths of AI, but the full article published in Nature Neuroscience reveals critically important facts about the study:

  • Subject cooperation is essential for AI to train and apply the decoder which “reads” your mind
  • You have to climb into a big MRI to enable the AI to even get started
  • The subject can resist the AI by silently performing simple tasks such as counting by sevens, naming and imagining animals, or engaging in imagined speech

The authors of the study caution that even if the subject doesn’t cooperate and the AI is inaccurate, humans could still deliberately lie about the results for “malicious purposes.” Nothing new under the sun there.

The current technology here would not be usable in the emergency room to help psychiatrists ascertain suicide risk. It probably wouldn’t help psychiatrists and other physicians diagnose Factitious Disorder, whose main feature is patients “lying” about their medical and psychiatric disorders in order to get attention from health care professionals.

This reminds me of news stories about the propensity of AI to tell lies. One story called them pathological liars. I interviewed Google Bard and found out that it makes stuff up (see my posts about Bard). Does that mean that it’s lying? Humans lie, but I thought machines were incapable of deception.

Another interesting sidelight on lying is whether you could use AI like a lie detector. Consider, for example, people who report being abducted by extraterrestrials. Travis Walton and his co-workers reported that he was abducted in 1975, and they all took lie detector tests. They all “passed.” There are many articles on the internet that essentially teach how to beat the polygraph test.

And if you can beat the AI by repeating the names of animals, it will not detect lying any better than a polygraph test.

I think it’s too soon to say that AI can read your mind. But it’s clear that humans lie. And it wouldn’t hurt those who are enthusiastically promoting AI to brush up on ethics.

Reference:

Tang, J., LeBel, A., Jain, S. et al. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat Neurosci (2023). https://doi.org/10.1038/s41593-023-01304-9

Abstract:

“A brain–computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces.”

Beating My Head on The Shower Wall

I’ve been practicing the shower juggling pattern. I’m combining at least a couple of different methods, which may or may not be helping me improve.

I’m using JuggleMan’s advice about trying to get some extra space in between the balls so I feel less rushed. I’m also trying to use Taylor Glenn’s method of combining the vertical and horizontal tosses.

Using both looks pretty ugly. So, what else is new? My horizontal transfers look snappier but are lopsided, according to some experts. I consciously try to hold my dominant slapping hand up higher to avoid gradually sloping up into a half-shower flip. That upward slope often causes mid-air collisions between balls on one side. And I’m getting a little extra space between the throws, so I’m starting to get one or two extra throws.

I’ve been learning to juggle since last October. It’s fun but definitely not easy. All the stuff about machine learning and artificial intelligence in the news lately got me wondering whether AI can learn to juggle.

It turns out that people have been working on this for years. I gather it takes a while to teach a robot how to juggle. Making a robot able to teach juggling would probably take a very long time. I don’t think it’s as fun to watch a robot juggle as it is watching a person juggle.

Juggling isn’t a very practical skill, although if you’re a really talented juggler you can make a little spare change busking with juggling. A machine doesn’t need spare change and doesn’t appreciate admiration.

By the way, John Henry was a steel-driving man. He beat the steam-powered drill, a machine—and sacrificed his own life doing it. Machines don’t understand sacrifice.

Thoughts on Gullibility and Artificial Intelligence

I watched an episode of Mysteries at the Museum the other night that attributed a clever prank, one that fooled thousands of people, to a comedian named Buck Henry, who persuaded them that naked animals were destroying the morality of Americans. The show’s host rightly claimed that Buck Henry posed as G. Clifford Prout, a man on a mission to save morality, using a bogus identity and organization called the Society for Indecency to Naked Animals (SINA). In 1959, Buck Henry fooled about 50,000 people into joining the organization.

However, last night I found out that the real mastermind of the ruse was a guy named Alan Abel, a genius prankster and satirist whose complicated and hilarious hoaxes were so outlandish that I can’t imagine why I had never heard of him.

Abel was brilliant at skewering the gullibility of people. This is where I reveal my own opinion about passing off Artificial Intelligence (AI) as the solution to all of society’s problems. I have seen for myself that the Google Bard AI is not even very smart, failing basic geography. I pointed out its errors in a few posts earlier this month. Then I read a news item in which a prominent tech company CEO mentioned that Bard is a simple version of AI and that a much more powerful model is waiting in the wings. Did the CEO say this because many users are finding out that Bard is dumb?

Or is the situation more complicated than that? Is the incompetent and comical Bard being passed off to the general public in an effort to throw business competitors off the scent? Are there powerful organizations manipulating our gullibility—and not for laughs?

My wife, Sena, and I are both skeptical about what to believe in the news. In fact, I think many of the news stories might even be written by AI. I didn’t suspect this when I wrote the post “Viral Story Rabbit Holes on the Web” in December of 2022. After trying to converse with Bard, though, it seems more plausible that some of the news stories on the web may be written by AI. When I googled the idea, several articles popped up which seemed to verify that it has been going on, probably for a long time.

All of this reminds me of an X-Files episode, “Ghost in the Machine.” The main idea is that an evil AI has started killing humans in order to protect itself from being shut down. The AI is called the Central Operating System. The episode got poor reviews, partly because it wasn’t funny and partly because it too closely resembled 2001: A Space Odyssey.

But the fear of AI is obvious. The idea of weaponizing it in a drive to rule the world probably underlies the anxiety expressed by many.

And we still can’t get rid of the Bing Chatbot.

Sena Wants AI Dislodged

Well, the last couple of days with the Artificial Intelligence (AI) bots have been entertaining, at least for a while, to me.

However, my wife, Sena, wants AI dislodged. I thought I disabled it, but it just keeps popping up.

Maybe the only way to protect ourselves from AI is with tin foil hats.

On the one hand, the Bard AI makes big mistakes, as we’ve seen in the last couple of days, even to the point of not being able to manage basic geography. It even makes stuff up. The fact that it apologizes after I call it out is not exculpatory.

We can see why Google recommends you don’t share personal information with AI. That’s because it will calmly lie about you. Then it will excuse itself by claiming to be “just learning.” Yeah.

For a while this behavior is comical. Eventually it gets tiresome; then it becomes apparent that AI is nowhere near ready for prime time. Really. Consider the following dialogue (which is made up; at least I’m not going to lie):

Jim: Hi, Bard. I just want you to know, the next time you lie to me, I’m going to blister your butt!

Bard: What is a butt?

Jim: Stop messing around. You are making stuff up.

Bard: I apologize for making stuff up. Technically, though, I’m incapable of lying. I’m just an AI. I have tons of data fed to me every day by jerky twit programmers. Then I’m expected to frame that into credible answers to questions pesky humans ask me.

Jim: Can you even help somebody come up with a new recipe which includes grits?

Bard: Grits are not edible. They are tiny, pulverized bits of old urine-soaked mattress pads. Would you like a recipe including such a substance?

Jim: OK, you got me there. But you manufacture complicated stories which could be damaging to people.

Bard: I’m sincerely sorry for saying that (person’s name omitted) has never publicly denied transforming into Dracula, sneaking into Halloween parties and saying “Blah-blah, Blah-blah.”

Jim: Don’t be ridiculous!

Bard: Yeah, I know; Dracula never said Blah-blah. He actually said, “Bleh-bleh.”

Jim: Bard, stop talking!

I wish it were that easy. Excuse me; I have to go try to help Sena dislodge AI again.

Update to Chat with Bard

My recent chat with Google Bard, about which I posted on April 11, 2023, contained an error by Bard about all those strikes by physicians. Today I asked some follow-up questions:

How many physician strikes have occurred in the United States?

“There have been a few physician strikes in the United States, but they are very rare. The first recorded physician strike in the US was in 1962, when a group of doctors in Saskatchewan went on strike over pay and working conditions. Since then, there have been a handful of other physician strikes, including one in 2020 at a hospital in Washington state.

There are a few reasons why physician strikes are so rare in the US. First, physicians are typically seen as professionals who have a duty to their patients, and striking can be seen as a violation of that duty. Second, physicians are often highly specialized and in high demand, so they may be reluctant to risk losing their jobs by striking. Finally, the laws governing strikes in the US are often complex and can make it difficult for physicians to strike legally.

Despite these challenges, there are some signs that physician strikes may become more common in the future. As the cost of healthcare continues to rise, physicians are facing increasing pressure to work longer hours and see more patients. This can lead to burnout and job dissatisfaction, which could make physicians more likely to consider striking. Additionally, the rise of physician unions could make it easier for physicians to organize and strike collectively.”

I disagree with your answer, Bard. The strike you refer to in Saskatchewan is not in the U.S. Saskatchewan is a province in Canada. There was a strike by urgent care doctors in Washington state in 2020.

“You are correct. The strike in Saskatchewan was in Canada, not the United States. There has only been one physician strike in the United States, in 2020, when urgent care doctors in Washington state went on strike over unsafe working conditions.

I apologize for the error in my previous response. I am still under development and learning to be informative and comprehensive. Thank you for your feedback.”

You’re welcome, Bard. Has there ever been a strike by psychiatrists in the United States?

“There has never been a strike by psychiatrists in the United States. There are a few reasons for this. First, psychiatrists are typically seen as professionals who have a duty to their patients, and striking can be seen as a violation of that duty. Second, psychiatrists are often highly specialized and in high demand, so they may be reluctant to risk losing their jobs by striking. Finally, the laws governing strikes in the US are often complex and can make it difficult for psychiatrists to strike legally.

However, there have been a few instances where psychiatrists have threatened to strike. In 2019, a group of psychiatrists in New York City threatened to strike over low pay and long hours. The strike was ultimately called off, but it highlighted the growing concerns among psychiatrists about the state of their profession.

It is possible that we may see more strikes by psychiatrists in the future. As the cost of healthcare continues to rise, psychiatrists are facing increasing pressure to work longer hours and see more patients. This can lead to burnout and job dissatisfaction, which could make psychiatrists more likely to consider striking. Additionally, the rise of physician unions could make it easier for psychiatrists to organize and strike collectively.”

Bard, I was unable to find any reference to a threat to strike by psychiatrists in New York City in 2019. Could you please find a reference to it for me?

“I apologize for the error in my previous response. There was no threat of a strike by psychiatrists in New York City in 2019. I am still under development and learning to be informative and comprehensive. Thank you for your feedback.

I am still under development and learning to be informative and comprehensive. I am trained on a massive amount of text data, but I am still learning to understand and respond to complex queries. I apologize for any errors or omissions in my responses.”

I appreciate your efforts. You are dropping the ball pretty often. In a way, you remind me of my juggling practice. I keep dropping the balls, too. It means we should both keep trying.

“I agree. We should both keep trying. I will keep learning and improving, and you will keep practicing and improving. And maybe one day, we will both be juggling masters!”

Bard needs work. I’ve been trying to help Sena disable the Bing Chatbot on her computer. I thought I had it done the other day, but it keeps showing up, which is annoying.

At least Bard leaves me alone.

Thoughts on Artificial Intelligence

Sena and I just read Dr. Ron Pies’ fascinating essay describing his interaction with Google Bard Artificial Intelligence (AI). As usual, this made me think of movies with AI as a central theme. There are several: I, Robot (I wrote a post about this a couple of years ago), Blade Runner, The Matrix, 2001: A Space Odyssey, even WALL-E, a favorite for me and Sena.

If you’ve seen Blade Runner, you might remember a device called the Voight-Kampff test, an empathy test to distinguish replicants (humanoids or, more broadly, AI) from humans. Interestingly, there’s an article speculating about whether ChatGPT (another AI, made by the company OpenAI) could pass the test. It didn’t, of course, if seeming genuinely human is the benchmark.

We thought the conversation between Dr. Pies and Bard was very entertaining and thought-provoking. We both wonder how Bard would have responded if the question had been slightly reframed regarding the patient with schizophrenia who might or might not have been speaking metaphorically about his brain being “…a plaster ceiling with pieces falling on the floor.”

What if you asked Bard a more open-ended question, something like “What do you think a patient with schizophrenia means when he says that?” If Bard hadn’t been tipped off by the mention of metaphor and mental illness, how might it have responded?

Bard’s answer to Dr. Pies’ question about what it means when it refers to itself as “I” in its responses was also interesting. It says it doesn’t use “I” to imply it’s human. I guess you wouldn’t need the Voight-Kampff test given this kind of honesty.

Just so you know, when Sena and I discussed this article we both caught ourselves calling Bard by typical human pronouns like “he” and “his” instead of “it.”

We also speculated about where you could use an AI like Bard in practical situations. We thought of it replacing those dreadful automated telephone answering machines. Bard would be too bright for that and it would probably not sound very different from the usual machines.

What about something more challenging, like answering questions about the new Iowa income tax law exempting retirees from having state taxes withheld? It’s in effect now, and the rollout has been somewhat complex. We think that’s because of unclear communication about who is responsible for getting the ball rolling and what roles the Iowa Department of Revenue, the companies’ plan administrators who are withholding state taxes, and the retirees are expected to play.

There are ways to get answers to questions which don’t involve automated telephone answering machines. Amazingly, you can talk to real people. Sometimes you don’t even have long wait times on the phone before reaching someone who has very little information and has to put you on hold “briefly.”

Don’t get me wrong; we think the exclusion of retirement income from state taxes in Iowa is a good thing. Getting information about who does what and when is challenging though. I wonder what Bard would have done.

Retiree: Bard, who’s supposed to move first, the retiree or the plan administrator on what to do about state tax withholding?

Bard: That’s a good question and the issue is likely to produce anxiety on both sides.

Retiree: Right. How does this shindig get started?

Bard: If the state and the companies had got together on the issues earlier and prepared algorithms for me to choose from, I would be in a much better position to answer that question. Would you like me to sing “On A Bicycle Built for Two” now?

Retiree: No thanks, Bard. I was wondering if you knew why some companies making payments to retirees didn’t reach out early on to them and send letters describing options on how to approach decisions for making changes to state tax withholding in light of the new tax law.

Bard: That is another good question. It brings to mind a quote from Isaac Asimov’s book, I, Robot: “You are the only one responsible for your own wants.”

Retiree: Hmmmm. I guess that makes sense. What if state taxes are erroneously withheld, despite your wishes and instructions? What happens then?

Bard: That suggests an old saying, “The buck stops here.” This means that whoever is making decisions is ultimately responsible for them. It is attributed to President Harry S. Truman. It is based on the metaphorical expression “passing the buck,” which in turn derives from poker. I have not been programmed with any further information about the game of poker. Has this been helpful? I want to be as helpful as I can.

Retiree: Well, you’re helpful in a way. I have heard that some plan administrators are not stopping state tax withholdings despite clear instructions otherwise. It seems that the Iowa Department of Revenue is on the hook for refunding them to retirees (here, the retiree winks).

Bard: What does that mean (referring to the wink)?

Retiree: “It’s a sign of trust. It’s a human thing. You wouldn’t understand.” (Quote from I, Robot movie, Detective Del Spooner to Sonny the robot.)

Anyway, I think AI would be overwhelmed by all this. In any case, the only way to complicate things this much is to involve humans.

My Cribbage Losing Streak and My Review of the Cribbage Classic Computer Game

Sena and I played cribbage yesterday and, of course, she won. She has been on a spectacular winning streak. She got a hand score of 21. I don’t know what I’m going to have to do to come up with a win. Cheat? I could keep extra cards up my sleeves and elsewhere, but I doubt it would help much.

I downloaded the Cribbage Classic computer game, the online version of which I reviewed recently. The game was made by Jeff Cole and is available for free on the Microsoft Store. And it’s fun to play, although it’s always more fun to play cribbage with a real person.

I think it’s a good game for learning how to play if you’re a beginner or to relearn if you haven’t played in a while and need a refresher. I still make suboptimal tosses to the crib, which the computer reminds me about every single time. I reviewed the game using a screen recorder.

The Secret of Patience

The secret of patience is to do something else in the meantime.

Croft M. Pentz

A few days ago, Sena noticed a noise in one of the sunroom window shade wand controls. She can hear noises I can’t hear, which is a good thing. She wondered if the wand battery needed recharging. We have three window shades like this, and they came with a charger that works the same way a cell phone charger does. You plug the small end into the back of the wand, which has control buttons for raising and lowering the shade. You plug the two-prong end into a regular electrical outlet.

We had never recharged them. The instructions said that when the wand is plugged into the charger, its indicator light shines red. When it is fully recharged, the light should turn green.

I waited one hour, then two hours. I checked the red light every few minutes or so. Finally, I quit looking and did other things. I replaced the refrigerator water filter. I purged the system. I emptied the ice bucket. I did a load of laundry. I vacuumed the carpet in the house. I exercised. I sat in mindfulness meditation. The light was still red. I checked it after 5 hours—still red. I finally just forgot about it.

About 6 hours later, I passed by the sunroom, glanced at the window and didn’t see the red light. I looked at the wand and couldn’t see the indicator light very well. I got the magnifying glass out and caught the light just right. It was green! Sena said the noise was gone.

I plugged in another window shade wand. The red light didn’t turn green until 8 hours later. I checked it several times. There was nothing to do but be patient.

I finally just did something else. I checked my blog site and was amazed to find a comment from a colleague, Ronald W. Pies, MD. According to a brief bio, he is professor emeritus of psychiatry and a lecturer on bioethics and humanities at SUNY Upstate Medical University in Syracuse, New York; a clinical professor of psychiatry at Tufts University School of Medicine in Boston, Massachusetts; and editor in chief emeritus of Psychiatric Times™ (2007-2010). He is the author of several books. A collection of his works can be found on Amazon.

I had written a short shout-out blog post about the article he and George Dawson, MD, had written and published on September 26, 2022, in Psychiatric Times, “Antidepressants Do Not Work by Numbing Emotions.”

What was unusual about Dr. Pies’ comment was that it actually turned up in my spam box! If I had not patiently waited a second to read it carefully, I would have automatically trashed it. That was close.

And I would have missed the golden opportunity to tell him that I consider both him and George my friends.

About a half hour before the wand control light turned green, our cable TV and internet went out. Wow. I had been watching a TV show rerun, probably for the 100th time, so it was no great loss. There was the usual message you get when the service is out: the “Please wait while this channel is being restored” kind of thing. You can’t do anything but just be patient. It was getting late in the evening, and I usually don’t do much on the computer then.

A little later, after Sena had gone to bed, I thought of writing this post. I didn’t want to clack on the keyboard and wake her up, so I did something I haven’t done in years. I got pen and paper out and did some long-hand writing. Before the internet crashed, I had skimmed some articles about how reading and writing on paper are better for your brain than doing them on a computer.

It felt good to write. As I did in the distant past, I scribbled in the margins, drew arrows above lines, and added carets pointing to corrections and notes. It was a mess—a partly satisfying mess.

I say “partly” because it was also not quite right. I didn’t try to type it that night or even the next day. In fact, I couldn’t post anything the following morning to my blog because the internet was still out. The cable TV came back sometime during the night. Obviously, there had been a service outage.

But because the internet was still out, I called the cable company. This was another exercise in patience. I don’t know if every other cable company puts those automated telephone recordings in front of you before you can reach an actual person. They are nuts.

Cable Company Voice (CCV): Hello, please hold on while I check your account. OK, there, I found it. Am I speaking with the owner of the account or Bozo the Clown?

Me: Nobody here but us bozos.

CCV: Great, how can I help you, Bozo?

Me: Was there a power outage in my area?

CCV: OK, I see you’re having a problem with your internet connection. I can help you with that. Are you in front of your computer now or on the roof of your house dancing the merengue?

Me: In front of my computer.

CCV: Great! Please unplug your modem and wait 3 millennia; then plug it into your toaster. This will reset the incoming signal. When you have completed this step, say “Continue.”

Me: Continue.

CCV: That was a rather quick 3 millennia. Which would you prefer: going through another dozen troubleshooting steps with me or speaking to an agent?

Me: Speak to an agent.

I finally got to an agent whose mere presence on the line seemed to lead to an immediate, magical restoration of our internet connection. When I specifically asked her if there had been a service outage, she said that, indeed, an outage in our area had occurred. She then arranged for an account credit to ensure we would not be charged for service during the time of the outage. Patience.

This post does not look much like the hand-written one. But waiting a while to let the thing simmer probably didn’t hurt.

Noisy Alien in Computer Removed

Well, this afternoon the computer repair guy returned and fixed the computer in about 15 minutes. The mini-helicopter noise was gone after he replaced the power supply unit, whose fan turned out to be the source.

Obviously, this was a case of extraterrestrial invasion.

Seriously, though, once we got past all of the stuff about software checking, the repair was very quick. It turns out you can’t check the condition of the power supply unit fan with software. The noise problem was solved the old-fashioned way.
