Recently I've been approached by different news organisations to comment on deepfaked images and videos. In most cases they already know whether the thing is a fake or not, but they want to know how you can tell. It's been a pretty fascinating thing to be tasked with, honestly, and some of the examples completely caught me by surprise (warning: Daily Mail link). Many of us see faked images on a daily basis now, but there's not a lot of writing about how fakes are detected beyond one-liner folk knowledge on places like Twitter. So I thought I'd write a bit about how I approach the problem.
I regularly see very smart people share TikTok content that is clearly staged or fake and believe it's real - the Internet is and always has been full of lies, and that's also where a lot of its charm and playfulness comes from! I'm only saying all this because I don't want you to despair that this is the end of truth - you will adjust, you will learn to spot new things, and you'll also learn not to trust certain sources that you maybe did trust before. Ultimately that might be a good thing, because pre-genAI we probably fell for a lot of lies without even realising it, so a bit of a wake-up call might help us in the long run.
This is a key takeaway, for me. I sometimes find myself browsing social media and taking innocent videos or whatnot at face value, until something obviously wrong jumps out, or a comment casts doubt, at which point I blink and think "Oh yeah, that's fake/staged. Like, obviously. How did I initially just accept that?"
It's a bit exhausting to always have to be on guard against bad information, but it's easy to see how not doing so can leave us with ideas that are built on untruths. When the James Somerton story broke, it gave me a bit of an epistemological crisis - I remember reading things on Twitter that, in retrospect, were (incorrect) claims he had made and his followers repeated. I had no idea what the source was, but enough people in the same cliques as me were saying it that it seemed reasonable to me! We want to trust the people around us, but they're also vulnerable to misinformation. Eventually, a falsehood stops being something you learned somewhere and becomes something that you "know," and that everyone around you "knows."
I realize this is a bit disconnected from the original point, but it got me thinking about the broader challenge of sorting through everything we're exposed to and deciding what we actually believe to be true. It's impossible to verify everything we hear from first principles, so we take shortcuts - we defer to people and institutions we believe to be trustworthy, we more readily accept things that conform to our worldview, and we skim over things that don't seem important. On some level it's probably impossible to ever truly get away from that, but always keeping in mind that those shortcuts are just that is a good start.
(Also, the quoted post is excellent and I recommend reading it if you're at all interested in learning a bit about generative AI and how to critically examine media to spot it.)
