Runway, the startup that co-created Stable Diffusion, has broken new ground in generative AI for video with its latest release, Gen-1. The new model can create new videos from existing ones, guided by either a text prompt or a reference image. This isn’t Runway’s first foray into AI-powered video editing. The startup has been developing video-focused software since 2018, and its work has reached both films such as Everything Everywhere All at Once and everyday creators on YouTube and TikTok.
So what exactly can Gen-1 do? According to Runway’s paper, Gen-1 is a latent diffusion model trained on large datasets of images and videos. This training allows the model to learn the underlying patterns and relationships between different frames and movements in a video, making it possible to generate video sequences that are realistic, coherent, and consistent with the input footage.
One of the key advantages of Gen-1 is its ability to handle large video datasets, which makes it more versatile and flexible than generative AI systems that are limited to specific video types. This opens up a wide range of potential applications, from film and television production to video gaming and advertising. Runway’s track record in film and television gives the startup a further edge in the generative AI video race.
Unlike Google’s Phenaki and Meta’s Make-A-Video, Gen-1 can produce longer videos because it transforms existing footage rather than generating clips from scratch. According to Runway CEO and co-founder Cristóbal Valenzuela, who provided a statement to MIT Technology Review, the creative community is at the heart of the project: “This is one of the first models to be developed really closely with a community of video makers.”
Those interested in making videos with Gen-1 must request access, as it has so far been made available only to a select group of users. If it follows the pattern of other recent generative AI tools, that exclusivity will end once the startup is confident Gen-1 is ready for public use. Valenzuela, for his part, has ambitious goals for the technology: “We’re really close to having full feature films being generated,…We’re close to a place where most of the content you’ll see online will be generated.”
If achieved, that would be a remarkable feat and a major step forward for generative AI. The quest for a fully AI-made film is already underway: as reported in the summer of 2022, one filmmaker is on a mission to create a fully AI-generated feature-length film called Salt.