We’ve recently learned that the rather viral “Air Head” short (made by ShyKids with OpenAI’s Sora video generator) wasn’t purely AI-generated. The final version still required conventional editing to compensate for some of the generative AI’s shortcomings. This revelation, a small crack in the image of the all-powerful AI, generated quite a stir among tech-savvy crowds as well as filmmakers. But does the shortcoming demonstrated here really change the trajectory of generative AI’s progress?
When we were young and naive, we thought artificial intelligence would replace human labor in all the hard, physically demanding, or extremely boring jobs. No longer will we mine coal, lift heavy loads, drive across fields to harvest wheat, or do the dishes. We can focus on reading, writing poems, and being creative with our newfound recreational time. Then came generative AI with the opposite vision – you keep doing the dishes, and we’ll take over creativity. I’m not sure we’ve signed up for this. I’m not sure we were asked to sign.
Sora and other recent AI-based text-to-video generators have been transforming our industry for a while now. The ability to produce ever-improving footage without any dedicated gear (and, in some cases – with no cinematic knowledge) is both exciting and terrifying. But recently, some caveats emerged.
“Air Head” by ShyKids was one of the first clips created by independent creators using early access to OpenAI’s Sora. Though the creators are independent, the terms under which the video was made haven’t been disclosed. The video’s headline says it was made “with Sora,” and most viewers seemingly believed it wasn’t edited or manipulated with any other tools.
Not long after “Air Head” premiered (and went quite viral), additional details began to emerge. In this extended interview, ShyKids’ post-production specialist, Patrick Cederberg, dives deep into the creative process behind the clip. If you’re interested in the ins and outs of generative AI and the workflow surrounding it, I truly recommend reading the entire thing, but the BTS video sums it up nicely.
So yes. Traditional post-production practices, techniques, and effects were used on the AI-generated video. Some perceive this as a victory for hard-core, traditional editors and post-production professionals. The machine can’t replace true human creativity! And there’s truth there, but does it even matter?
As with most generative AI companies, the promises are quite optimistic: you type the prompts, and we’ll do the rest. While this might be a nice sales pitch, it will rarely work as advertised, at least for now. The caveat revealed here doesn’t reflect well on that campaign, but it matters little to the creative visual industry. While it would be nice (or horrific) to be able to type a line and get a feature film, we shouldn’t underestimate the current state of generative AI. Sora and other video generators are at their alpha stage (or even pre-alpha) and can already produce convincing content. One may be able to spot inconsistencies and outright oddities from time to time, but mostly when specifically looking for them.
CineD and other websites are filled with reviews of game-changing cameras. Revolutionary hybrid cameras changed filmmaking forever. The shift towards large-sensor cine cameras also reshaped the field, and even tiny action cameras made an impact. Don’t get me started on smartphones. Yet none of us expects these cameras to produce finished films without proper editing, grading, or VFX. So why do we expect that kind of full circle from Sora and other video generators?
Don’t let this minor, mostly promotional failure distract you from what’s going on. Even with some necessary post-production, even with frustrating inconsistencies, AI video generators may save your next video. Think about what they can do. The way Adobe Firefly is changing Premiere Pro with subtle generative details may prove extremely influential. Think about the amount of time (and budget) a generated shot can save you, whether depicting vast landscapes, slow-motion explosions, or simply removing a pesky, unwanted detail.
I honestly don’t know if we’ll ever get to a point where AI can do it all. And if we do, it will still be a long time before that product becomes interesting, funny, surprising, or emotional. I do, however, think that current tools already offer revolutionary features and capabilities that may help some of us while, unfortunately, disadvantaging others.
Are you excited about recent AI progress? Frightened? Enraged? Let us know in the comments.
Omri Keren Lapidot started out long ago, hauling massive S-VHS cameras as a young local news assistant. Maybe it was the weight that pushed him towards photography; we'll never know. In recent years he has become a content creator, teacher, visual literacy promoter, and above all, a father of (fantastic) four girls. Based in Amsterdam.