When Deepfakes Go Mainstream: OpenAI’s Sora App Becomes a Scammer Playground
I was scrolling through my feed the other night when I stumbled upon a short clip of a friend speaking fluent Japanese at an airport.
The only problem? My friend doesn’t know a single word of Japanese.
That’s when I realized it wasn’t him at all — it was AI. More specifically, it looked suspiciously like something made with Sora, the new video app that’s been stirring up a storm.
According to a recent report, Sora is already becoming a dream tool for scammers. The app can generate eerily realistic videos and, more worryingly, strip out the watermark that normally identifies content as AI-generated.
Experts are warning that it’s opening the door to deepfake scams, misinformation, and impersonation on a level we’ve never seen before.
And honestly, watching how fast these tools are evolving, it’s hard not to feel a bit uneasy.
What’s wild is how Sora’s “cameo” feature lets people upload their faces to appear in AI videos.
It sounds fun — until you realize someone could technically use your likeness in a fake news clip or a compromising scene before you even find out.
Users have reportedly already seen themselves doing or saying things they never did, leaving them confused, angry, and in some cases publicly embarrassed.
While OpenAI insists it’s working to add new safeguards, like letting users control how their digital doubles appear, the so-called “guardrails” seem to be slipping.
Some have already spotted violent and racist imagery created through the app, suggesting that filters aren’t catching everything they should.
Critics say this isn’t about one company — it’s about the larger problem of how fast we’re normalizing synthetic media.
Still, there are hints of progress. OpenAI has reportedly been testing tighter settings, giving people better control over how their AI selves are used.
In some cases, users can even block their likeness from appearing in political or explicit content, thanks to Sora's newly added identity controls. It's a step forward, sure, but whether it's enough to stop misuse remains anyone's guess.
The bigger question here is what happens when the line between reality and fiction completely blurs.
As one tech columnist put it in a piece about how Sora is making it nearly impossible to tell what’s real anymore, this isn’t just a creative revolution — it’s a credibility crisis.
Imagine a future where every video could be questioned, every confession could be dismissed as “AI,” and every scam looks legit enough to fool your own mother.
In my view, we’re in the middle of a digital trust collapse. The answer isn’t to ban these tools — it’s to outsmart them.
We need stronger detection tech, transparency laws that actually stick, and a bit of old-fashioned skepticism every time we hit play.
Because whether it’s Sora, or the next flashy AI app that comes after it, we’re going to need sharper eyes — and thicker skin — to tell what’s real in a world that’s learning how to fake everything.
Original Creator: Mark Borg
Original Link: https://ai2people.com/when-deepfakes-go-mainstream-openais-sora-app-becomes-a-scammer-playground/
Originally Posted: Wed, 08 Oct 2025 11:25:38 +0000
What do you think? I'd like to hear your opinion. Leave a comment.