Runway Adds Expressive AI Feature
Runway has added a new way to generate expressive character performances using simple driving video inputs.
Runway just launched Act-One, a Gen-3 Alpha feature that lets creators map human facial expressions onto AI-generated characters using only a driving video and a reference image, with no specialised equipment needed!
Key points:
Captures nuanced expressions, including micro-expressions and eye movements, even when the driving video is shot on a smartphone.
Allows creators to transfer performances across multiple AI characters with different styles and angles.
Integrates with Runway’s Gen-3 Alpha model for creating narrative scenes.
Follows Runway's partnership with Lionsgate to build custom AI models using the studio's film catalog.
Why it matters for marketers: Runway is democratising high-quality character animation, removing the need for expensive equipment or animation expertise. Even marketers with limited budgets can now create a much wider variety of professional, expressive content.
Sign up for our next ‘AI Live’ event
Thursday, Nov 7, 2024
Each week we host a live Generative AI podcast on LinkedIn, where we discuss the latest AI news and answer your burning questions! SIGN UP HERE.