What’s the truth? (ft. Sora 2)

Video evidence is inadmissible now. It used to be proof that something happened, or didn’t. Well, no more. We need a new way of proving what’s true and what’s not. What will that look like? Not sure yet.

The arrival of LLMs marked the start of artificial content. Back in 2022, when ChatGPT launched, it felt like a niche thing for tech nerds to play with. Some of us (myself included) already had an OpenAI account and were using the GPT-3 API before ChatGPT even existed. Either way, it took a year for people to understand what this meant, and another year for them to figure out how to use it to actually get work done. One thing always stood out –

People adapt unknowingly

We’ve accepted that we don’t know whether what we read is human- or AI-written. We’ve accepted it to the point where we don’t even question who wrote the latest Economist article, because to a degree, it doesn’t matter. Someone thought of the content and somehow got it down on paper. We’ve accepted that if an entire newsroom article announcing GPT-5 is written by an LLM, it doesn’t really matter. The beginning was hard. When ChatGPT launched, people were freaked out about exactly this – and now we accept it.

People have been saying this for a while, but with Sora 2, I actually believe it. We can’t distinguish a generated video from a real one. Seriously. Ads will be created artificially, and so will cartoons and movies. And yes, if I told my grandmother that the movie she’s watching doesn’t feature a single real actor, just compute in a giant datacenter, she’d wonder. But would she be upset? I don’t know. At least for ads on Instagram or YouTube, people just don’t care who made them. Neither do they care about YouTube videos in general (some people simply prefer that form of getting information over reading an article). And if you’re thinking, “Well, I care,” then give it some time (sorry). Remember how much we used to care whether a text was human-written?

And if the President’s latest announcement isn’t actually him in front of the camera but just generated? Does it matter? Not really. People were commenting and screaming online about the rumor that Trump’s Charlie Kirk video might be AI (I’m not commenting on whether it is or not), but if it were done this way three times, people would grow accustomed to it. For my part, I don’t care. And it should worry me how little I care.

But?

What matters, though, is that we don’t know who created a video. While it might not matter whether it’s the fake President or the real President in front of the camera, it absolutely matters whether that video was generated by the government or by a third party. A few years ago, this could have started wars. The fact that it can’t anymore, because we know there’s a chance it’s artificial, already says a lot.

Right after watching the Sora 2 announcement livestream, I went on Instagram and found myself doomscrolling (yeah, I know). I came across a video of Musk talking about starting a company with like-minded people. I follow Musk closely and hadn’t seen that quote before. Half a year ago, I would have known it was real and thought, “Huh, interesting take.” But now? After watching the Sora 2 announcement, I just don’t know if it’s real. Seriously, it could be, but it could also not be. And generally I’m trained to distinguish. But after watching the announcement and then the video, I genuinely didn’t know. I was so perplexed I commented, “How do I know if this is real?” and people didn’t know how to answer. Obviously. Neither do I.

Navigating an internet where we can’t tell if something is legit or fake (not fake as in AI-generated, but as in the content being straight-up fabricated) feels horrible. I couldn’t be less interested in watching a Musk interview if I know that anybody could have created (faked) it. This might one day seem obvious, the way it did with text, but right now, it’s not.

Final thoughts

This leaves me with four thoughts:

  1. We need a way to prove what’s real. Not what’s fake, but what’s real. Invisibly watermarking AI-generated images approaches this from the wrong direction. Even if every major company agreed to mark their media, there would still be underground unmarked content. We need a way to prove reality, not surreality. And we will figure this out. Mark my words.
  2. There’s a reason OpenAI launched Sora as a social media-themed app, almost like a meme generator. They want people to see AI images and videos as fun, playful, harmless tools, not the transformative things they actually are. Understandably.
  3. All of this artificial stuff will push humanness forward. Genuine interactions, discussions, and meetups with fellow people have never been more important. I wrote about this a long time ago, and my stance hasn’t changed a bit.
  4. I simply love technology.
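One direction “proving what’s real” could take is cryptographic provenance at the point of capture, which is roughly where standards like C2PA are headed: the recording device signs the footage, and anyone can later check that what they received is what was captured. A minimal, simplified sketch, using a plain hash as a stand-in for a real device signature (a real system would sign with a private key held by the camera):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Hash the raw footage at capture time. This is a simplified
    stand-in for a camera signing the file with a private key."""
    return hashlib.sha256(data).hexdigest()

# The device publishes this fingerprint alongside the video.
original = b"raw sensor frames from the camera"
published = fingerprint(original)

# A viewer re-hashes the file they received and compares.
received = b"raw sensor frames from the camera"
assert fingerprint(received) == published  # provenance intact

tampered = b"generated frames swapped in"
assert fingerprint(tampered) != published  # provenance broken
```

The key inversion is the same one the list item argues for: instead of trying to mark everything fake, the chain of custody attests to what is real, and anything without a valid attestation simply carries no proof.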

Oh yeah, and if you haven’t seen it, here’s a quick impression: