What do we make of a deepfake “interview” with a text-based AI trained on the words of a real person?
This caught my attention today. Facebook’s Mark Zuckerberg won’t give The Guardian an interview, so they:
built a Zuckerbot … trained on three years of interviews, speeches, blogposts and testimony from the Zuckerberg Files – more than 200,000 words of Zuckerverbiage in all. Guardian and Observer journalists then provided the questions, and [the company that made the bot] used the Zuckerbot to produce the answers.
https://www.theguardian.com/technology/2019/dec/22/zuckerbot-mark-zuckerberg-facebook-botnik
My first thought was “Oh, that’s clever and fun” (and the Zuckerbot’s answers are mostly obvious nonsense), but my second thought quickly overrode it: “Hang on, this is a deepfake, but with words. Is this ethical?”
There has been plenty of robust debate about the ethics of deepfakes; people can do some astonishing things with images and video now (I don’t have links to hand).
But text as a medium kinda slips under the radar.
I very rarely share photos of my kids on social media because I want to protect their privacy. But I have talked about them a lot, and this is ALSO an invasion of their privacy. Somehow it feels less so, but it could be MORE so.
Writing about their character and behaviour could be more damaging to them in future than just a picture of them walking in the woods or riding their bike for the first time.
So, words are important. And I wonder if fake words are harder to spot than fake images.
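How hard is it to put fake words in someone’s mouth? Not very. Here’s a minimal sketch of the idea, assuming a simple Markov chain trained on a stand-in corpus (The Guardian doesn’t detail Botnik’s actual method, which is certainly more sophisticated than this toy):

```python
import random
from collections import defaultdict

# Stand-in corpus: in the Guardian's case this was ~200,000 words of
# public interviews, speeches, blog posts and testimony.
corpus = (
    "We are connecting people. We are building community. "
    "We are giving people the power to build community. "
    "We are working hard to connect the world."
)

def build_model(text, order=2):
    """Map each run of `order` words to the words seen following it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=25):
    """Random-walk the chain: pick a start, then repeatedly sample a follower."""
    key = random.choice(list(model.keys()))
    output = list(key)
    for _ in range(length):
        followers = model.get(key)
        if not followers:
            break
        output.append(random.choice(followers))
        key = tuple(output[-len(key):])
    return " ".join(output)

print(generate(build_model(corpus)))
```

Even this toy stitches together sentences the source never actually said, in roughly their voice. Scale up the corpus and the model, and spotting the fakery gets much harder.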
I recently heard about the wondrous new AI that produced a convincing news story about unicorns (OpenAI’s GPT-2): listen to the 99 Percent Invisible podcast episode “The ELIZA Effect” from about 33:10. Listen to all of it, really, because it’s excellent, but the relevant part starts at 33:10. They discuss and critique how quickly AI can churn out fake words, fake articles, and even fake interviews!
Perhaps they already are?
Perhaps THIS is being written by AI, and not a human?
How would you tell?