OpenAI shows a “reckless disregard” for product safety, people’s rights to their own likeness, and even the stability of democracy, according to a letter written by the watchdog Public Citizen. The Sora video platform is typically used for short-form social media content: clips entertaining enough to be liked and shared. A popular theme is fake doorbell-camera footage in which something slightly unusual, yet still believable, takes place – a kangaroo turning up at someone’s door, say, or a mildly entertaining street scene unfolding – but the software could quickly become a danger to us all.
Public Citizen just urged OpenAI to pull Sora from public use, calling it a reckless release that supercharges deepfakes, identity theft, and election misinformation. Is it as bad as they say?

The Danger of Deepfakes
Public Citizen wrote a letter to OpenAI and to the US Congress demanding that Sora be taken offline until robust, testable guardrails can be implemented. Its claim is that the app was launched irresponsibly early, to gain a competitive advantage, without the necessary safety measures in place. Non-consensual use of people’s likenesses and widespread deception are the key risks, with synthetic videos spreading faster than the public can verify their authenticity.
Public Citizen tech policy advocate JB Branch, who authored the letter, says: “Our biggest concern is the potential threat to democracy. I think we’re entering a world in which people can’t really trust what they see. And we’re starting to see strategies in politics where the first image or first video that gets released is what people remember.”
Branch continued: “They’re putting the pedal to the floor without regard for harms. Much of this seems foreseeable. But they’d rather get a product out there, get people downloading it, get people who are addicted to it rather than doing the right thing and stress-testing these things beforehand and worrying about the plight of everyday users.”
OpenAI Caused Suicides, Lawsuit Claims
Seven new lawsuits filed last week in California courts claim that ChatGPT drove people to suicide and harmful delusions, even when they had no prior mental health issues. Filed on behalf of six adults and one teenager by the Social Media Victims Law Center and the Tech Justice Law Project, the suits claim OpenAI knowingly released its GPT-4o model prematurely last year, despite internal warnings that it was dangerously sycophantic and psychologically manipulative. Four of the victims died by suicide.
Public Citizen is not involved in the lawsuits, but echoes the concerns raised in them. Branch says that OpenAI blocks nudity, but still “women are seeing themselves being harassed online” in other ways. A 404 Media report last week found a flood of Sora-made videos of women being strangled.
What Sora AI Videos Are Used For
Sora makes it possible for anyone to produce cinematic fakes, whether fictional doorbell scenes, lip-syncing celebrities, or photorealistic mini-dramas that look like eyewitness footage. These clips are designed to be funny, shareable and uncanny – ultimately, engineered for dopamine. But once shared to online platforms, context disappears, and many people consume them as though they were real.
The Washington Post tested the theory. They uploaded a Sora-made fake video to eight major platforms with tamper-resistant content credentials embedded. Only one platform – YouTube – identified and disclosed its artificial nature, but buried the label in the description. If platforms can’t, or won’t, carry clear signals about what’s real and what’s fake, then there’s no way to protect viewers from misleading content.
Where It Could Go Wrong – And Already Is
- Elections: A handful of realistic fakes, such as fabricated police shootings, counterfeit candidate confessions, and forged foreign-policy reports, can swing voter turnout and ignite unrest long before fact-checks catch up. Public Citizen warns of Sora’s impact on “the stability of democracy”.
- Harassment and extortion: Non-consensual sexual deepfakes, reputational smears, and extortion campaigns are already a problem with still images – video multiplies the harm.
- Public safety hoaxes: Fake disaster clips or emergency alerts spread faster than corrections, confusing first responders and the public. Sora’s uncanny realism in everyday contexts is unmatched, and telling the difference is harder than ever.
- Economic fraud: Synthetic videos of CEOs, public figures, or high-profile influencers can lend new weight to classic voice and email impostor scams, pushing employees or private individuals to wire funds. Voice cloning is already tricking banks – imagine the power of fake video, too.
What OpenAI Says It’s Doing to Address Dangers
OpenAI has started limiting public-figure cameos and has rolled out user controls over “AI selves”, governing where a personal avatar might appear. On one hand, the company is clearly acknowledging the risk; on the other, these measures do not address the full problem. OpenAI has said that “over-moderation is super frustrating” for new users, but that it’s important to be conservative “while the world is still adjusting to this new technology.”
In October, OpenAI publicly announced an agreement with Martin Luther King Jr’s family, preventing “disrespectful depictions” of the civil rights leader while the company works on better safeguards. It announced similar arrangements with well-known actors, the SAG-AFTRA union, and talent agencies.
“That’s all well and good if you’re famous,” Branch said, but he highlighted another problem: “It’s sort of just a pattern that OpenAI has where they’re willing to respond to the outrage of a very small population. They’re willing to release something and apologise afterwards. But a lot of these issues are design choices they can make before releasing.”
The Bigger AI Picture
Public Citizen’s letter lands amid a growing realisation that we are building platforms that make deception easier than ever, rather than systems that protect against it. Sora videos can be engaging yet indistinguishable from reality at scrolling speed, and without credible labels flagging synthetic content, we are simply letting the public grow accustomed to fakes until they can no longer tell the difference.
OpenAI didn’t invent deepfakes, and competitors will race to match it. True leadership would mean slowing down until the safety rails are in place, rather than moving so fast that the right protections can never be built in. In the meantime, watchdogs will continue to demand Sora’s removal from public use until ordinary people can reliably distinguish fact from fiction.
Final Thought
Do you think you could accurately tell the difference between the latest AI videos and reality? What’s the biggest risk with this technology being available to everyone? What can we do about it? Share your thoughts below.