Sora, OpenAI’s video generator, will launch today — at least to some users.
YouTuber Marques Brownlee revealed the news in a video posted to his channel on Monday morning. Brownlee got early access to Sora and shared his initial impressions in a 15-minute review.
Sora lives at Sora.com. The home page features a collection of recently created Sora videos curated by OpenAI, Brownlee said. (The site was not yet live as of press time.) Notably, the tool is not built into ChatGPT, OpenAI’s AI-powered chatbot platform; it appears to be its own separate product for now.
Videos on the Sora home page can be bookmarked for later viewing in the Saved tab, organized into folders, and clicked on to reveal the text prompts used to create them. According to Brownlee, Sora can generate videos from uploaded photos as well as text prompts, and can also edit existing videos.
Using the Remix feature, users can describe the changes they want to see in a video and Sora will attempt to incorporate them into a newly generated clip. Remix has a “strength” setting that lets users decide how radically Sora should change the target video, with higher values producing clips that depart further from the original.
Sora can produce footage at up to 1080p resolution, but the higher the resolution, the longer videos take to generate, Brownlee says. 1080p footage takes 8 times longer than 480p, the fastest option, while 720p footage takes 4 times longer.
Brownlee said the average 1080p video took “a few minutes” to produce in his testing. “This is also, at the moment, when almost no one else is using it,” he said. “I kind of wonder how long it will take when this is open for anyone to use.”
In addition to creating one-time clips, Sora has a “Storyboard” feature that allows users to group prompts together to create a scene, Brownlee says. This is intended to help with consistency, which is a notorious weakness of video generators.
But how does Sora perform? Well, Brownlee says, it suffers from the same drawbacks as other tools: objects pass in front of or behind one another in ways that don’t make physical sense, or disappear and reappear for no reason.
Legs are another major source of Sora’s problems, Brownlee says. Any time a person or two-legged animal has to walk for more than a moment in a clip, Sora tends to mix up which leg is in front, so the legs “swap” back and forth in a way that is anatomically impossible.

Brownlee says Sora has a number of built-in safeguards that prevent creators from generating footage that shows people under 18, contains violence or “explicit themes,” or uses material copyrighted by another party. Sora also won’t create videos from photos containing people, recognizable characters, or logos, and it watermarks every video, albeit with a visible watermark that can easily be cropped out, Brownlee says.
So, what’s the point of Sora? Brownlee found it useful for things like stylized title slides, animations, recaps, and stop-motion shots. But he wouldn’t recommend it for anything meant to look realistic.
“It’s impressive that it’s an AI-generated video, but you can tell pretty quickly that it’s an AI-generated video,” he said of the majority of Sora’s clips. “Things are getting really wonky.”