OpenAI's latest AI videos prove why it was smart to limit access to Sora ahead of the US election

A new video shared by OpenAI showcases the company's Sora model. (Image credit: Niceaunties)

What you need to know

  • OpenAI shared two videos that showcase its Sora model.
  • Sora can follow text prompts to create realistic videos.
  • While the videos showcase the potential of Sora, they include several awkward moments and artifacts that look unrealistic.
  • OpenAI limited access to Sora to a shortlist of creators in order to study potential risks of the technology.

OpenAI introduced its Sora model earlier this year. The AI tool generates mostly realistic videos from text prompts. While content created with Sora still has some obvious issues, many aspects of generated videos appear convincing. To showcase advances in its technology, OpenAI shared two "Sora Showcase" videos on YouTube.

Both videos are made by professional creators with the intent of showing the potential of Sora. One video was created by Singaporean artist Niceaunties, while the other is from British Korean artist David Sheldrick.

Niceaunties · Sora Showcase - YouTube

Art is subjective, and I'm not an artist, so discussing the deeper meaning behind the content would provide little value. Instead, I'll focus on the realism of the video. Many still images from the video by Niceaunties are impressive, especially at first glance. Non-human objects, such as eggs, clocks, and cooking utensils, look lifelike in several shots.

As is often the case, humans and motion are more challenging to generate. Some of the unrealistic people in the video are likely a stylistic choice, but I don't think all of the awkward clips can be chalked up to creative decisions.

David Sheldrick · Sora Showcase - YouTube

The video by Sheldrick follows a similar trend, in that still images or clips with little movement look much more realistic than extended videos of people in motion. Arms and hands look particularly unrealistic at several points in the video. Again, I doubt all the artifacts are creative decisions.

Sora is still relatively new, so expecting perfection would be unfair. The fact that the clips have this much motion shows that creators are willing to push the limits of AI. With more training for AI models and more work from creators to refine prompts, I think it's very likely videos like this will appear lifelike in the near future.

Limiting AI access

Generative AI, while impressive, also raises concerns for many. As the technology advances, it may replace jobs. There are also environmental concerns surrounding AI due to how much water and electricity it consumes. But even in a hypothetical world where all of those concerns were addressed to satisfaction, AI would still raise ethical questions.

Even when running in ideal circumstances, AI in its current form is prone to hallucinations and inaccurate responses. Google's AI said that Google was dead, recommended eating rocks, and advocated committing suicide when depressed. That was back in May, when the technology was fairly new, yet it was already in the hands of general consumers. My main points are that AI output needs to be put in context and that some limits are essential for the long-term health of AI.

With the US election making headlines, I'm grateful that models like Sora are only in the hands of certain users and that we're a few years away from anyone being able to make convincing fake videos with a few mouse clicks. I do think that day is on the horizon, but I don't think it's quite here yet.

Politics often bring out the worst in people. Misinformation already goes viral on social media, even when the false claims seem clearly inaccurate. It could create a nightmare scenario when the charged emotions of politics combine with the virality of social media and the ability to create convincing fake videos in an instant.

We've already seen AI used to spread misinformation, and bad actors will be able to do more damage if given more tools and left unchecked. Microsoft has plans to protect people from AI during the US election, but laws and protections often lag behind criminals and bad actors.

OpenAI was wise to limit access to Sora. More guardrails need to be in place before anyone and everyone can use an AI model to create video content. I know there are other models available, but OpenAI deserves some credit for its decision.

Sean Endicott
News Writer and apps editor

Sean Endicott is a tech journalist at Windows Central, specializing in Windows, Microsoft software, AI, and PCs. He's covered major launches, from Windows 10 and 11 to the rise of AI tools like ChatGPT. Sean's journey began with the Lumia 740, leading to strong ties with app developers. Outside writing, he coaches American football, utilizing Microsoft services to manage his team. He studied broadcast journalism at Nottingham Trent University and is active on X @SeanEndicott_ and Threads @sean_endicott_. 
