RunwayML Introduces Act-One: A New Way to Generate Expressive Character Performances Using Simple Video Inputs
Runway has announced a new feature called Act-One. One big reason Hollywood movies are so expensive is motion capture, animation, and CGI; a huge chunk of any film's budget these days goes toward post-production. What Hollywood and most people don't yet realize is that a massive budget is no longer necessary to create compelling movies. AI video generators have progressed rapidly since OpenAI's big Sora announcement. Sora, however, is not publicly available as of right now, and Runway is currently leading the AI video generation space.
In recent months, Runway has announced AI video-generation models and features that were once considered impossible without expensive equipment, genuinely democratizing Hollywood-level movie production for everyday creators like you and me. Runway's new Act-One is proof of that. Act-One is a new way to generate expressive character performances using simple video inputs: you can create compelling animations using video and voice performances as the only inputs.
How Runway’s Act-One is Different:
Traditionally, creating an animated movie required motion capture, multiple footage references, manual face rigging, and other complex techniques.
- With Runway’s Act-One, you no longer need any extra equipment; everything is driven directly and only by an actor’s performance.
- You can also apply this feature to different reference images. The new model preserves realistic facial expressions and accurately translates performances into characters, even for characters with different proportions than the actor in the source video.
- Act-One relies primarily on the actor’s performance rather than anything else, so it can produce high-quality outputs even from different camera angles.
- Creators can build lifelike characters that deliver genuine emotion and expression, strengthening the connection with viewers.
- What was once considered impossible is now achievable with Runway’s Act-One using only a consumer-grade camera. You can create multi-turn, expressive dialogue scenes in which a single actor reads and performs multiple characters from a script.
Conclusion:
This new Act-One feature by Runway looks strong. No other AI video generator on the market can do anything remotely similar. Act-One is not yet available to the general public but will hopefully launch soon for consumer use. The film industry will change as soon as this feature is commercially available. I saw someone on X (formerly Twitter) say, “In a couple of years, we are going to have 6-year-olds making movies mostly indistinguishable from Hollywood.” That may not be far from the truth. Let’s hope we can use this new AI video generation feature soon.
Check out the details here. All credit for this research goes to the researchers of this project.