OpenAI finally rolls out generative AI text-to-video generator Sora to public

OpenAI's latest AI model Sora is capable of churning out 20 seconds of motion pictures in full HD (1080p) quality with just a text prompt.
Rohit KVN

OpenAI launches new Sora Turbo video generator.

Credit: OpenAI

In February this year, OpenAI offered a sneak peek at its state-of-the-art video generator, Sora. Now, the generative Artificial Intelligence-powered video tool is finally available to the public.


OpenAI's latest AI model, Sora, can generate video clips from just a text prompt.

The first beta version of Sora had some issues. Sample videos contained anomalies, such as shadows moving out of sync with the people in a scene, though they were subtle enough that very few viewers would have spotted them.

Now, OpenAI is offering an advanced Sora 'Turbo' model, which is said to be more precise and capable of generating photorealistic videos.

It supports videos of up to 1080p resolution and close to 20 seconds in length, in widescreen, vertical, or square aspect ratios.

Users can also bring in their own videos to extend, remix, and blend with Sora's synthetic footage, or generate entirely new content from text.

Sora is available to ChatGPT Plus ($20, approx. Rs 1,950, per month) account owners at no additional cost. They can generate up to 50 videos at 480p resolution, or fewer videos at 720p, each month.

Sora AI video generator's User Interface.

Credit: OpenAI

Those with the recently launched ChatGPT Pro plan ($200, approx. Rs 16,937, per month) are eligible for 10x more usage, higher resolutions, and longer durations compared to the Plus plan.

OpenAI said it is working to introduce more plans across a wide range of price bands in early 2025 to accommodate different types of users.

Get the latest news on new launches, gadget reviews, apps, cybersecurity, and more on personal technology only on DH Tech.

(Published 10 December 2024, 12:56 IST)