don't believe your eyes

Photorealistic, AI-Generated Videos Are Here, And We Should All Be Worried

With technology like OpenAI's newly announced text-to-video tool, deception has never been easier.

US artificial intelligence start-up OpenAI has just revealed a new tool that instantly generates convincing, photorealistic videos based on text prompts.

The technology, named Sora (Japanese for "sky"), is the third generative AI tool to be released by the company, after the DALL-E image generator in 2021 and the ChatGPT chatbot in 2022.

Sora, OpenAI says, is capable of generating videos of "up to 60 seconds featuring highly detailed scenes, complex camera motion and multiple characters with vibrant emotions" from text descriptions alone.



In a thread posted to X, OpenAI said they were taking "important safety steps" before making Sora available, including letting "red teamers" (experts working in misinformation, hateful content and bias) test the tool first to see how it could be misused. Even so, there's plenty of reason to worry about what a world with technology like Sora will look like.

The possibilities of this highly sophisticated tool are pretty much endless, and that's precisely the problem. As X user and augmented reality engineer Denis Rossiev pointed out, the ability to instantly create video from text will only fuel the spread of harms like deepfake porn, wartime misinformation and fake news.

"Videos today are already hard to distinguish from reality, and soon it'll be outright impossible, even with algorithms or other neural networks," Rossiev wrote.

Oren Etzioni, founder of TrueMedia, an organization dedicated to fighting AI-based disinformation in political campaigns, says disinformation spread via social media is the "Achilles heel of democracy in the 21st century."

"OpenAI, which is locked into an epic battle for supremacy with Google and others, is rapidly releasing powerful tools like Sora," Etzioni tells Digg. "I'm concerned that our democracy will suffer injuries in this battle."

And Etzioni isn't alone in his concerns: just this week, major tech groups including Microsoft, Meta, Google, X and OpenAI all pledged to take action to prevent deceptive, AI-generated content from misleading voters and disrupting the upcoming presidential election.



The technology is also unwelcome news for those working in entertainment. While film and TV writers have long feared having their jobs replaced by AI — it was a key issue in last year's Hollywood strikes — the futures of actors, artists and other creatives are now also looking more uncertain than ever.



Excessive scaremongering is, of course, unhelpful, but there's already ample evidence that we should be wary of AI's potential for harm. Sure, OpenAI are taking "safety steps" early on, but the creators of most generative tools likely take similar measures, and, as we've seen countless times, there's little a company can do to stop people from using them for nefarious ends.

OpenAI might own the technology, but there's only so much control they have over what people put in and get out of it.



[Image credit: OpenAI/Sam Altman]


