OpenAI Launches Sora: AI-Generated Video App Reshapes Social Media Reality

OpenAI has released its Sora app, a TikTok-style platform populated entirely with AI-generated short-form videos created from simple text prompts. The app, which follows Meta’s similar release by just days, enables users to produce highly realistic 10-second videos of virtually anything they can imagine, prompting researchers to warn that “we might be in the era where seeing is not believing.”

Context and Background

The app, powered by OpenAI's Sora 2 video model, mirrors the vertical video format of established platforms like TikTok, featuring mood-based video selection and granular privacy controls over facial likeness usage. Users can permit their faces to be used by everyone, by a limited circle, or only by themselves, whilst retaining the ability to remove videos containing their likeness at any time. OpenAI has implemented watermarks and metadata to identify AI-generated content, alongside guardrails prohibiting deceptive content and impersonation.

However, early testing by NPR revealed significant moderation gaps. The platform successfully generated videos supporting conspiracy theories, including fabricated footage of President Nixon declaring the moon landing fake, and depicted violence including drone attacks on infrastructure. The app also produced content featuring copyrighted characters from major entertainment brands, with OpenAI stating it will work with rights holders to block characters upon request.

Looking Forward

The proliferation of AI-generated content tools raises fundamental questions about collective reality and trust in digital media. Whilst researchers note that earlier deepfake concerns haven't materialised into widespread societal decay, the combination of video, audio, and image generation capabilities presents unprecedented challenges. Solomon Messing from New York University's Center for Social Media and Politics emphasised the technology's potential for misuse, particularly in creating realistic videos of individuals saying things they never said.

Industry observers warn against nihilistic acceptance that authenticity no longer matters online. As Henry Ajder of Latent Space Advisory noted, society must resist the pull of believing "we can't tell what's real anymore, and therefore it doesn't matter anymore." OpenAI also faces copyright litigation from The New York Times over the use of its articles in training the models behind ChatGPT, adding legal complexity to the company's latest release.
