I saw an article about humans' unreliable ability to detect speech deepfakes (Mai KT, Bray S, Davies T, Griffin LD. Warning: Humans cannot reliably detect speech deepfakes. PLoS One. 2023;18(8):e0285333. doi:10.1371/journal.pone.0285333).
I was a little surprised. I thought I was pretty good at picking out the weird cadence of Artificial Intelligence (AI) speech patterns. Maybe not.
And some experts are concerned that, while AI can mimic written and spoken language, it continues to make stuff up (called "hallucinations"). In fact, some research suggests that AI can display great language skills yet fail to form a true model of the world.
And the publisher of the book my co-editor, Dr. Robert G. Robinson, and I wrote 14 years ago, "Psychosomatic Medicine: An Introduction to Consultation-Liaison Psychiatry," is still sending me requests to sign a contract addendum that would allow the text to be used by AI organizations. I think I'm the only one who gets the messages, because they're always addressed to me and Bob, as though Bob lives with me or something.
Sometimes my publisher’s messages sound like they’re written by AI. Maybe I’m just paranoid.
Anyway, this reminds me of a blog post I wrote in 2011, "Going from Plan to Dirt," which I re-posted last year under the title "Another Blast from the Past." The current version is slightly different, but it still applies. Simply put, I don't think AI can distinguish plan from dirt, and it sometimes makes up dirt.
And if humans can't distinguish AI's productions from our own, where does that leave us?
