The new tool lets you transfer the style of any image to any video, turn mockups into animated renders, isolate and modify parts of your videos, and more.
The developers of Runway, a web-based, machine-learning-powered video editor, have unveiled Gen-1, a new video-to-video generative AI that lets users generate new videos from existing ones using text prompts and reference images. According to the team, Gen-1 applies the composition and style of an image or a text prompt to the structure of a source video, synthesizing realistic and temporally consistent new footage as a result.
The AI launches with five modes:
- Mode 1 – Stylization: Transfer the style of any image or prompt to every frame of your video.
- Mode 2 – Storyboard: Turn mockups into fully stylized and animated renders.
- Mode 3 – Mask: Isolate subjects in your video and modify them with simple text prompts.
- Mode 4 – Render: Turn untextured renders into realistic outputs by applying an input image or prompt.
- Mode 5 – Customization: Unleash the full power of Gen-1 by customizing the model for even higher fidelity results.