OpenAI Must Withdraw Sora: How Deepfake AI Videos Endanger Us All


OpenAI shows a “reckless disregard” for product safety, for people’s rights to their own likeness, and even for the stability of democracy, according to a letter written by the watchdog group Public Citizen. The Sora video platform is typically used for short-form social media content, creating clips entertaining enough to be liked and shared. A popular theme is fake doorbell-camera footage, in which something slightly unusual yet still somewhat believable takes place, like a kangaroo turning up at someone’s door or a mildly entertaining street scene unfolding. But the software could quickly become a danger to us all.

Public Citizen just urged OpenAI to pull Sora from public use, calling it a reckless release that supercharges deepfakes, identity theft, and election misinformation. Is it as bad as they say? 


The Danger of Deepfakes

Public Citizen has written a letter to OpenAI and to the US Congress demanding that Sora be taken offline until robust, testable guardrails can be implemented. Its claim is that the app was launched irresponsibly early to gain a competitive advantage, without the necessary safety measures in place. Non-consensual use of people’s likenesses and widespread deception are the key risks, with synthetic videos spreading faster than the public can verify their authenticity.

Public Citizen tech policy advocate JB Branch, who authored the letter, says: “Our biggest concern is the potential threat to democracy. I think we’re entering a world in which people can’t really trust what they see. And we’re starting to see strategies in politics where the first image or first video that gets released is what people remember”.

Branch continued: “They’re putting the pedal to the floor without regard for harms. Much of this seems foreseeable. But they’d rather get a product out there, get people downloading it, get people who are addicted to it rather than doing the right thing and stress-testing these things beforehand and worrying about the plight of everyday users”.

OpenAI Caused Suicides, Lawsuits Claim

Seven new lawsuits filed last week in California courts claim that OpenAI’s ChatGPT chatbot drove people to suicide and harmful delusions, even when they had no prior mental health issues. Filing on behalf of six adults and one teenager, the Social Media Victims Law Center and the Tech Justice Law Project claim that OpenAI knowingly released its GPT-4o model prematurely last year, despite internal warnings that it was dangerously sycophantic and psychologically manipulative. Four of the victims died by suicide.

Public Citizen is not involved in the lawsuits, but echoes the concerns raised in them. Branch says that OpenAI blocks nudity, but still “women are seeing themselves being harassed online” in other ways. A 404 Media report last week found a flood of Sora-made videos of women being strangled.  

What Sora AI Videos Are Used For

Sora makes it possible for anyone to produce cinematic fakes, whether fictional doorbell scenes, lip-syncing celebrities, or photorealistic mini dramas that look like eyewitness footage. These clips are designed to be funny, shareable, and uncanny – ultimately engineered for dopamine. But once shared to online platforms, context disappears, and many people consume them as though they’re real.

The Washington Post put this to the test, uploading a Sora-made fake video, with tamper-resistant content credentials embedded, to eight major platforms. Only one platform – YouTube – identified and disclosed its artificial nature, and even then it buried the label in the description. If platforms can’t, or won’t, carry clear signals about what’s real and what’s fake, then there’s no way to protect viewers from misleading content.
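For the technically inclined: those “content credentials” follow the open C2PA standard, and the Content Authenticity Initiative publishes a free command-line tool, c2patool, that can reveal whether a file carries them. Below is a minimal sketch, in Python, of how one might check a downloaded clip; it assumes c2patool is installed and on your PATH, and the file name clip.mp4 is purely a placeholder.

```python
import json
import subprocess

def read_content_credentials(path: str):
    """Ask c2patool for the C2PA manifest embedded in a media file, if any."""
    # c2patool prints the embedded manifest store as JSON on stdout;
    # when a file carries no credentials it reports an error and exits non-zero.
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no content credentials found (or the tool failed)
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_content_credentials("clip.mp4")  # placeholder file name
    if manifest is None:
        print("No content credentials - provenance unknown.")
    else:
        print(json.dumps(manifest, indent=2))
```

Of course, as the Post’s test shows, the absence of credentials proves nothing either way: a stripped or re-encoded copy loses them, which is precisely the problem.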

Where It Could Go Wrong – And Already Is

  • Elections: A handful of realistic fakes, such as fabricated police shootings, counterfeit candidate confessions, or forged foreign-policy reports, could swing voter turnout and ignite unrest long before fact-checks can catch up. Public Citizen warns of Sora’s impact on “the stability of democracy”.
  • Harassment and extortion: Non-consensual sexual deepfakes, reputational smears, and extortion campaigns are already a problem with still images; video multiplies the harm.
  • Public safety hoaxes: Fake disaster clips or emergency alerts spread faster than corrections, confusing first responders and the public. Sora’s uncanny realism in everyday contexts is unmatched, and telling real from fake is harder than ever.
  • Economic fraud: Synthetic videos of CEOs, public figures, or high-profile influencers can lend credibility to classic voice or email imposter scams, pushing employees or private individuals to wire funds. Voice cloning is already tricking banks – imagine the power of fake video, too.

What OpenAI Says It’s Doing to Address Dangers

OpenAI has started limiting public-figure cameos and has rolled out user controls over “AI selves”, letting people restrict where their personal avatar can appear. On one hand, the company is clearly acknowledging the risk; on the other, these measures do not address the full problem. OpenAI has said that “over-moderation is super frustrating” for users, but that it’s important to be conservative “while the world is still adjusting to this new technology.”

In October, the company publicly announced an agreement with Martin Luther King Jr’s family, preventing “disrespectful depictions” of the civil rights leader while it worked on better safeguards. OpenAI announced similar deals with famous actors, the SAG-AFTRA union, and talent agencies.

“That’s all well and good if you’re famous,” Branch said, but highlighted another problem. “It’s sort of just a pattern that OpenAI has where they’re willing to respond to the outrage of a very small population. They’re willing to release something and apologise afterwards. But a lot of these issues are design choices they can make before releasing”. 

The Bigger AI Picture

Public Citizen’s letter lands amid a growing realisation that we are building platforms that make deception easier than ever, rather than systems that protect against it. Sora videos can be engaging yet, at scrolling speed, indistinguishable from the real thing; without credible labels flagging synthetic content, we are simply letting the public grow accustomed to fakes until no one can tell the difference.

OpenAI didn’t invent deepfakes, and competitors will race to match it. True leadership would mean slowing down until the guardrails are in place, rather than moving so quickly that the right protections can never be built. In the meantime, watchdogs will continue to demand Sora’s removal from the public domain until ordinary people can distinguish fact from fiction.

Final Thought

Do you think you could accurately tell the difference between the latest AI videos and reality? What’s the biggest risk with this technology being available to everyone? What can we do about it? Share your thoughts below. 





Comments
Reverend Scott
20 days ago

I’ve always said that AI should be destroyed. Better still, never developed. Too late now. The cat is out of the bag; now you know we have been sold a pig in a poke. Might as well get rid of CCTV because no digital footage is valid anymore. Digital ID? No way, far too risky. Humanity is finding out that AI is a worse invention than the atomic bomb. Scientists need to be locked away from the rest of society. They are easily paid off – climate change hoax, covid hoax – and do lots of damage…

Robert Skappel
20 days ago

Yes, I agree, AI in its present form has to go! It is very dangerous, especially for children!

plebney
19 days ago

It’s a good thing. AI is relatively new, so it will take a little while for the general public to understand that absolutely nothing found on the internet can be trusted. The fake-generating AI is helping this along.
Soon most people will know none of it can be trusted. This should always have been known, of course, but the realization will quickly filter down to even the stupidest of people.
Scan the whole article. The so-called “danger” is not from lying, it’s from people who believe lies. For example, even some of the stupidest people now understand that politicians lie to them.
How many people get to say, “Gee, I’ve never been lied to in my whole life until I watched an AI video”? The so-called “safety” motivation is never anything more than an attempt to censor and control.
Some nutcase goes on a shooting spree because the voices in his head tell him to kill people? Voices in the head are a danger to everyone! Let’s outlaw voices in people’s heads!

jim peden
Reply to g.calder
19 days ago

Good article! It’s not just Sora, of course – there are now many out there. I used Google’s Veo3 to make a short movie (my first attempt, and it wasn’t easy!): https://panocracy.substack.com/p/panocracy-the-movie-introduction

I think that people will view any broadcast information through the lens of their own biases. If you’re a supporter of Mr Trump then you’ll be sceptical of any negative press he receives and accepting if it’s positive. Equally so for Mr Starmer, etc.

The ongoing situation with the BBC doctoring footage of Trump for its Panorama and Newsnight programmes shows (if it was necessary) that mis/disinformation is not restricted to AI.

I’ve found that it’s those in the professional classes who are most susceptible to misinformation. The working classes tend to dismiss almost everything as a management ploy! Those whose livelihoods depend on the written and spoken word tend to be much more gullible.

kelman
19 days ago

Long before this AI came to the fore, successive governments were regularly indulging in all kinds of deception.
There is no accountability either, it appears.

Joy N.
19 days ago

🙏🙏
What the Holy Bible says of this horrific decade just ahead of us… Here’s a site expounding current global events in the light of Bible prophecy. To understand more, please visit 👇 https://bibleprophecyinaction.blogspot.com/

A Yousleh Zeeter
19 days ago

My son has an app that he uses innocently… He’s really into the weather at the moment, especially tornadoes. He’s created videos by filming outside, then generating tornadoes that obliterate local houses, leaving rubble. They actually look quite realistic!… The downside is if it’s used for nefarious reasons, and who knows if the next false flag will be created this way… We already had 9/11 and 7/7 long before this technology!

Islander
18 days ago

“Stability of democracy”???

Democracy is the ultimate emblem of instability – a one-legged man is more stable!

Surely, by now, your readers can see that democracy has been “found wanting”?

Novak
16 days ago

Everything, it seems, has already been said here. What next? Let’s not kid ourselves. The genie (Kraken) has been let out. Nobody is going to stop this now. The last one out turns off the lights. Have a nice day.

The last one out turns off the lights.