Thoughts On Laptop Computers

We bought a laptop computer. It has been years since I’ve used one. I forgot how exasperating a touchpad is. Luckily, we have a spare wireless mouse and a USB port. The laptop is slim and very light, like most laptops these days.

I remember the first “laptop” I had early in my career as a consulting psychiatrist. I think it weighed two to three times what modern ones weigh. I think I could have stopped a thief from taking it by whacking him over the head with it.

If I remember correctly, it had a slot for floppy discs and another for optical discs. It developed a hardware problem that forced me to box it up and send it back to the manufacturer for repairs. I don’t remember how long I kept it after that.

The new laptops don’t have any internal optical drives built into them.

I read a tech article in which the author opined that the gradual disappearance of internal optical drives and other physical media from laptops was probably the result of large companies discovering they could make more money by charging subscription fees for digital media.

Microsoft comes to mind.

About That Artificial Intelligence…

I’ve got a couple of things to get off my chest about Artificial Intelligence (AI). By now, everyone knows about AI telling people to put glue on pizza and whatnot. Sena and I talked to a guy at an electronics store who had nothing but good things to say about AI. I mentioned the glue-on-pizza thing, and it didn’t faze him.

I noticed the Psychiatric Times article, “AI in Psychiatry: Things Are Moving Fast.” The authors mention the tendency of AI to hallucinate and express appropriate reservations about its limitations.

And then I found something very interesting about AI and Cribbage. How much does AI know about the game? Turns out not much. Any questions? Don’t expect AI to answer them accurately.

The SD Card Caper

I have an SD card (more commonly called just a “memory card”) for my camera and the other day I couldn’t download the videos to my computer using the SD card reader in the tower. The card reader is just a slot-shaped port in the tower, above the USB ports. SD stands for Secure Digital. It’s really secure when you can’t download any videos or pictures.

This had never happened before. Naturally I turned to the internet for guidance, which was my first mistake. I never saw so many web sites with confusing advice, some of which involve zip lines.

Most of the web sites assumed I could see the icon for the SD card on my computer screen, but I couldn’t. Several help sites (hah!) breezily suggested I rename the disk or update the driver, or contact the extraterrestrials who manufactured the item, as if I wanted them to know where I am so they can abduct me again.

This suggested the problem was probably the SD card reader in my computer, the XPS 8950, my nearly new machine which has already had major parts replaced and which is now out of warranty.

Only a couple of websites were on the right track about the SD card reader itself. One expert said that if I blew in the slot (That’s right! Not as dumb as it sounds; dust can be a problem) and wiped the card with a Q-Tip, and it still didn’t work, I should try the card in a different computer. If it worked there, then the problem was probably the card reader. It turns out you can blow on the SD card reader or the SD card until you’re blue in the face, but if the card isn’t detected in Device Manager on any computer, the card is probably dead. In that case, you get a new card, “and let it go.” Those are the exact words from that expert. Do I also have to sit in the lotus position?
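The expert’s detection test boils down to this: insert the card and watch whether a new volume ever shows up. Just as a toy sketch (nothing from the expert’s site), here’s how you might automate that watch in Python. The function name, and the idea of polling a mount directory such as /media/&lt;user&gt; on Linux, are my own assumptions; on Windows you’d watch File Explorer or Device Manager instead.

```python
import os
import time

def wait_for_new_entry(watch_dir, timeout=30.0, poll=0.5):
    """Poll watch_dir and return the name of the first new entry that
    appears there (e.g., a freshly mounted SD card volume).
    Returns None if nothing new shows up before the timeout."""
    before = set(os.listdir(watch_dir))   # snapshot before you insert the card
    deadline = time.time() + timeout
    while time.time() < deadline:
        added = set(os.listdir(watch_dir)) - before
        if added:
            return sorted(added)[0]       # name of the newly appeared volume
        time.sleep(poll)
    return None
```

If it comes back with None on every computer you try, the card itself is probably dead; if a name appears on one machine but not another, suspect the silent machine’s reader.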

Anyway, I did try the card in the SD card reader in my wife’s computer. It worked!

But if the SD card works in another device, the problem could be a dead SD card reader. What should you do?

Well, when a couple of fans went out in my tower while it was under warranty, a repair guy came over, took the tower apart, and replaced the fans. My machine is out of warranty now, and I don’t want to go through the same hassle of negotiating with the manufacturer to work out a time compatible with the repair guy’s bowling league schedule to drive to our house from British Columbia.

On the other hand, could I replace the SD card reader in the tower myself? A long time ago, I replaced a fan in my computer, which reminds me: you should never install oscillating fans in a computer.

Here’s the thing: I found a web page that fits my situation exactly, right down to the make and model of the tower. It turns out that it’s probably not possible to replace the SD card reader in the tower without replacing the motherboard, which you, as a home user, should not attempt unless you have been drinking heavily.

What I found out is that combination USB SD card readers are available: all you have to do is stick the SD card in the slot on the unit and plug the USB connector into a port on the tower. The whole thing fits in the palm of your hand.

Now our toaster doesn’t work.

Press And Hold on The Keurig Coffee Maker

We got our new coffee maker. It’s a Keurig K-Supreme Plus, and it’s as fancy as the name sounds. It’s compact enough to leave room for our other coffee maker, a Black & Decker model with a carafe, on which you have to press and hold the “On” button to start the cleaning mode.

It has options for making your coffee stronger and hotter and you can save your choices.

You can save 3 favorites. You have to press and hold the “Favorite” button until you see the word “saved” in the little window.

I emphasize the “press and hold” because if you don’t strictly obey the rule, you can wind up thinking your brand new appliance is defective.

It does make the coffee hotter. But I think Sena will be looking into other options for cups which will keep the brew hotter for a longer time.

The Keurig coffee makers are normally pricey but Sena got a bargain. And there is nothing wrong with the press and hold maneuver when it comes to your wallet—as in hold it shut.

The Dragon Breathes Fire Again

The other day, Sena and I saw a news video about a technology called “DAX,” which uses Artificial Intelligence (AI) and promises to reduce or even eliminate pajama time for physicians trying to get their clinical note dictations done during the day instead of taking them home for several more hours of work.

The video was a demo of the technology, which looked like it recorded a clinical interview between the doctor and the news reporter. I didn’t see exactly how DAX was recording the interview without obvious audio equipment. Was it doing it through the smartphone’s microphone? This was very different from how I and many other clinicians dictated our notes, using headsets at our desks in front of our desktop computers. DAX not only records but transcribes the interview.

Later, I discovered that DAX stands for Dragon Ambient eXperience, made by Nuance, which was acquired by Microsoft in 2022. I posted about Dragon products and their limitations last year. The product often produced hilarious mistakes during dictation, which required careful editing. Sometimes more errors turned up after you completed a note, and these were visible in the patient’s medical record, which then had to be corrected.

Several years ago, I remember talking to somebody from Dragon on the phone about the problems I was having. She was a little defensive when I told her I’d been having difficulty with Dragon for quite a while because it made so many mistakes.

A recent article on the web revealed that the errors continue with DAX. According to the article, “…it will make mistakes. Sometimes it might omit clinical facts; sometimes it may even hallucinate something.” I remember trying to communicate with the Google Bard AI, which seemed to do this pretty often. It made stuff up.

DAX is not cheap. The article reveals that one hospital pays $8,000-$10,000 per year per physician to use it. And skeptics worry that the system still has too many bugs, which can lead to bias and inaccurate information that could negatively affect patient outcomes.

A recently published JAMA article also urges caution in adopting this sort of AI-assisted technology (Harris JE. An AI-Enhanced Electronic Health Record Could Boost Primary Care Productivity. JAMA. Published online August 7, 2023. doi:10.1001/jama.2023.14525).

In this case, I think it’s appropriate to say “I told you so.”

We Are All Still Learning to Play Pong

I noticed an article the other day about Monash University in Australia getting funding for further research into growing brain cells onto silicon chips and teaching them how to play cribbage.

Just kidding; the research is for teaching the modified brain cells tasks. Last year, they succeeded in teaching the cells goal-directed tasks, like how to play the tennis-like game Pong. You remember Pong from the 1970s? Shame on you if you don’t. On the other hand, that means you probably didn’t frequent any beer taverns in your hometown while you were growing up—or that you’re just too young to remember.

The new research program is called Cortical Labs and has hundreds of thousands of dollars in funding. The head of the program, Dr. Razi, says it combines Artificial Intelligence (AI) and synthetic biology to make programmable biological computing platforms which will take over the world and bring back Pong!

It’s an ambitious project. The motto of Monash University is Ancora Imparo, which is Italian for “I am still learning.” It links humility and perseverance.

There’s a lot of suspicion out there about AI and projects like the Pong initiative in Australia. It could eventually grow into a vast industry run by robots who will run on a simple fuel called vegemite.

Shame on you if you don’t know what vegemite is!

Anyway, it reminds me that I recently finished reading Isaac Asimov’s book of science fiction short stories, “I, Robot.”

The last two stories in the book are intriguing. Both “Evidence” and “The Evitable Conflict” are generally about the conflict between humans and AI, which is a big controversy currently.

The robopsychologist, Dr. Susan Calvin, is very much on the side of AI (I’m going to use the term synonymously with robot) and thinks a robot politician would be preferable to a human one because of the requirement for the AI to adhere to the 3 Laws of Robotics, especially the first one, which says a robot may never harm a human or, through inaction, allow a human to come to harm.

In the story “Evidence,” a politician named Stephen Byerley is suspected of being a robot by his opponent. The opponent tried to legally force Byerley to eat vegemite (joke alert!) to prove the accusation. This is based on the idea that robots can’t eat. This leads to the examination of the argument about who would make better politicians: robots or humans. Byerley at one point asks Dr. Calvin whether robots are really so different from men, mentally.

Calvin retorts, “Worlds different…, Robots are essentially decent.” She and Dr. Alfred Lanning and other characters are always cranky with each other. They stare savagely at one another and yank at their mustaches so hard you wonder if a mustache will eventually be ripped from a face. That doesn’t happen to Calvin; she doesn’t have a mustache.

At any rate, Calvin draws parallels between robots and humans that render them almost indistinguishable from each other. Human ethics, the drive for self-preservation, and respect for authority, including law, make us very much like robots, so much so that being a robot could imply being a very good human.

Wait a minute. Most humans behave very badly, right down to exchanging savage stares.

The last story, “The Evitable Conflict,” was difficult to follow, but the bottom line seemed to be that the Machine, a major AI, because it is always learning, controls not just goods and services for the world but the social fabric as well, keeping this a secret from humans so as not to upset them.

The end result is that the economy is sound, peace reigns, the vegemite supply is secure—and humans always win the annual Pong tournaments.

Thoughts on Gaming Disorder

I just read an interesting article in the latest print issue of Clinical Psychiatry News, Vol. 51, No. 5, May 2023: “Gaming Disorder: New insights into a growing problem.”

This is news to me. The Diagnostic and Statistical Manual lists it as an addiction associated primarily with the internet. It can cause social and occupational dysfunction, and was added to the DSM-5 in 2013 as a condition for further study, according to my search of the web. I’m not sure why I never heard of it. Or maybe I did and just failed to pay much attention to it.

There are studies about treatment of the disorder, although most of them are not founded in the concept of recovery. The research focus seems to be on deficits. One commenter, David Greenfield, MD, founder and medical director of the Connecticut-based Center for Internet and Technology Addiction, said that thirty years ago there was almost no research on the disorder. His remark about the lack of focus on recovery was simple but enlightening: “Recovery means meaningful life away from the screen.”

Amen to that.

That reminded me of the digital entertainment available thirty years ago. In 1993, the PC game Myst was released. Sena and I played it and were mesmerized by this simple point-and-click adventure game with intricate puzzles.

Of course, that was prior to the gradual evolution of computer gaming into massively multiplayer online role-playing games, first-person shooters, and the like. It sounds like betting is a feature of some of these games, which tends to increase their addictive potential.

Sena plays an old-time Scrabble game on her PC and other almost-vintage games. I have a cribbage game I could play on my PC, but I never do. I much prefer playing real cribbage with Sena on a board with pegs and a deck of cards. We also have a real Scrabble game and we enjoy it a lot. She wins most of the time.

This is in contrast to what I did many years ago. I had a PlayStation and spent a lot of time on it. But I lost interest in it after a while. I don’t play online games of any kind. I’m a little like Agent K in Men in Black II when Agent J was unsuccessfully trying to teach him how to navigate a spaceship using a device that resembled a PlayStation controller:

Agent J: Didn’t your mother ever give you a Game Boy?

Agent K: WHAT is a Game Boy?

Nowadays, I get a big kick out of learning to juggle. You can’t do that on the web. I like to pick up the balls, clown around, and toss them high, which occasionally leads to knocking my eyeglasses off my head. I usually catch them.

Juggling is a lot more fun than playing Myst. I would prefer it to any massive multiplayer online game. I never had a Game Boy.

AI Probably Cannot Read Your Mind

I was fascinated by the news story about the study regarding the ability of Artificial Intelligence (AI) to “read minds.” Different stories told slightly different versions, meaning they either did or did not include the authors’ caveats about the limitations of AI. Recently there has been a spate of news items warning about the dangers of AI taking over mankind.

Not to diminish the strengths of AI, but the full article published in Nature Neuroscience reveals critically important facts about the study:

  • Subject cooperation is essential for AI to train and apply the decoder which “reads” your mind
  • You have to climb into a big MRI to enable the AI to even get started
  • The subject can resist the AI by silently performing simple mental tasks such as counting by sevens, naming and imagining animals, or imagining speech

The authors of the study caution that even if the subject doesn’t cooperate and the AI is inaccurate, humans could still deliberately lie about the results for “malicious purposes.” Nothing new under the sun there.

The current technology would not be usable in the emergency room to help psychiatrists ascertain suicide risk. It probably wouldn’t help psychiatrists and other physicians diagnose Factitious Disorder, whose main feature is “lying” about one’s medical and psychiatric disorders in order to get attention from health care professionals.

This reminds me of news stories about the propensity of AI to tell lies. One story called them pathological liars. I interviewed Google Bard and found out that it makes stuff up (see my posts about Bard). Does that mean that it’s lying? Humans lie, but I thought machines were incapable of deception.

Another interesting sidelight on lying is whether or not you could use AI as a lie detector. Take, for example, people who report being abducted by extraterrestrials. Travis Walton and his co-workers reported he was abducted in 1975, and they all took lie detector tests. They all “passed.” There are many articles on the internet which essentially teach how to beat the polygraph test.

And if you can beat the AI by repeating the names of animals, it will not detect lying any better than a polygraph test.

I think it’s too soon to say that AI can read your mind. But it’s clear that humans lie. And it wouldn’t hurt those who are enthusiastically promoting AI to brush up on ethics.

Reference:

Tang, J., LeBel, A., Jain, S. et al. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat Neurosci (2023). https://doi.org/10.1038/s41593-023-01304-9

Abstract:

“A brain–computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces.”

Beating My Head on The Shower Wall

I’ve been practicing the shower juggling pattern. I’m combining at least a couple of different methods, which may or may not be helping me improve.

I’m using JuggleMan’s advice about trying to get some extra space in between the balls so I feel less rushed. I’m also trying to use Taylor Glenn’s method of combining the vertical and horizontal tosses.

Using both looks pretty ugly. So, what else is new? My horizontal transfers look snappier but are lopsided, according to some experts. I consciously try to hold my dominant slapping hand up higher to avoid the gradual sloping up into a half-shower flip. That upward slope often causes mid-air collisions between balls on one side. And I’m getting a little extra space in between the throws, so I’m starting to get one or two extra throws.

I’ve been learning to juggle since last October. It’s fun but definitely not easy. All the stuff about machine learning and artificial intelligence in the news lately got me wondering whether AI can learn to juggle.

It turns out that people have been working on this for years. I gather it takes a while to teach a robot how to juggle. Making a robot able to teach juggling would probably take a very long time. I don’t think it’s as fun to watch a robot juggle as it is watching a person juggle.

Juggling isn’t a very practical skill, although if you’re a really talented juggler you can make a little spare change busking with juggling. A machine doesn’t need spare change and doesn’t appreciate admiration.

By the way: John Henry was a steel-driving man. He beat the steam-powered drill, a machine—and sacrificed his own life doing it. Machines don’t understand sacrifice.

Thoughts on Gullibility and Artificial Intelligence

I watched an episode of Mysteries at the Museum the other night that attributed a clever prank that fooled thousands of people to a comedian named Buck Henry, who persuaded thousands into believing that naked animals were destroying the morality of Americans. The show’s host claimed that Buck Henry posed as a man named G. Clifford Prout, a man on a mission to save morality, by creating a bogus identity and organization called The Society for Indecency to Naked Animals (SINA). In 1959, the prank fooled about 50,000 people into joining the organization.

However, last night I found out that the real mastermind of the ruse was a guy named Alan Abel, a genius prankster and satirist whose complicated and hilarious hoaxes were so outlandish, I can’t imagine why I had never heard of him.

Abel was brilliant at skewering the gullibility of people. This is where I reveal my own opinion of the passing off of Artificial Intelligence (AI) as the solution to all of society’s problems. I have seen for myself that the Google Bard AI is not even very smart, failing basic geography. I pointed out its errors in a few posts earlier this month. Then, I read a news item in which a prominent tech company CEO mentioned that Bard is a simple version of AI and that a much more powerful model is waiting in the wings. Did the CEO say this because many users are finding out that Bard is dumb?

Or is the situation more complicated than that? Is the incompetent and comical Bard being passed off to the general public in an effort to throw business competitors off the scent? Are there powerful organizations manipulating our gullibility—and not for laughs?

My wife, Sena, and I are both skeptical about what to believe in the news. In fact, I think many of the news stories might even be made by AI writers. I didn’t suspect this when I wrote the post “Viral Story Rabbit Holes on the Web” in December of 2022. After trying to converse with Bard, it makes more sense that some of the news stories on the web may be written by AI. In fact, when I googled the idea, several articles popped up which seemed to verify that it has been going on, probably for a long time.

All of this reminds me of an X-Files episode, “Ghost in the Machine.” The main idea is that an evil AI has started killing humans in order to protect itself from being shut down. The AI is called the Central Operating System. The episode got poor reviews, partly because it wasn’t funny and partly because it too closely resembled 2001: A Space Odyssey.

But the fear of AI is obvious. The idea of weaponizing it in a drive to rule the world probably underlies the anxiety expressed by many.

And we still can’t get rid of the Bing Chatbot.