Cristóbal Valenzuela, the CEO and Co-Founder of Runway, told us how the company is organized and spoke about Runway's text-to-video, text-to-3D texture, and AI Magic Tools models.
Hi everyone, thank you for having me! My name is Cristóbal Valenzuela, and I am the CEO and Co-Founder of Runway. I’m originally from Chile, but now I live and work in NYC. I have a mixed background in art, design, economics, and computer science. Before Runway, I worked as a researcher at NYU and contributed to open-source software projects like ml5.js. For the last four years, I’ve been focused on building the future of creativity at Runway.
When I was at NYU, I was always surrounded by artists, and so I naturally started building tools for them. Since its origins, Runway has built tools that facilitate the exploration of creative AI by making AI intuitive and easier to use. The vision of Runway has always been to move creativity forward by finding new expressive tools and augmenting workflows in radically new ways.
We have spent a lot of time building tools in different mediums: image, video, and sound. With Runway, not only can you use AI to streamline editing workflows, but you can use AI to generate the content itself. Runway now serves the entire content production process, from generation all the way through editing and post-production. We save people time (and, therefore, money), but more importantly, we are democratizing content creation and storytelling at large, opening the door for a whole new generation of creators to use the software in a more intuitive, collaborative, and powerful way.
We are a full-stack AI research company made up of 34 incredible individuals across engineering, product, design, marketing, and research. We have a world-class team, and that’s because we emphasize and embrace different and unconventional backgrounds in our hiring process. In order to invent something completely new, you need as many perspectives as possible.
It’s important that everyone on our team feels like they have a seat at the table and can voice ideas, so we have a fairly horizontal team structure with a singular team focus of building amazing products; it takes all of us to build what we’re building.
We like to think everyone at Runway is a top talent, but one team that differentiates us as a company is our applied research team. We are not only building new products and tools at Runway using emerging technologies, but we’re also powering the research and advancing those emerging technologies at their core. We have led the applied research efforts behind Stable Diffusion and continue to help push the research field forward with new techniques and models for content generation.
Runway's Text-to-Video Model
We can’t wait to share text-to-video with everyone. We firmly believe this will be the most important user interface of the next decade, because to date, the world of content creation and editing has been locked behind incredibly technical and complex tools — tools that take a lot of time or a lot of money (or both) to learn. Even if you know how to use these tools, they still take a lot of time to use. Creative tools should be available to everyone, which we are trying to help unlock with text-to-video, using natural language to power content creation. Anyone with an idea will be able to create.
The amount of interest in text-to-video has been incredible and humbling to see. We are slowly opening up early access now and will be opening it up for everyone to use very soon.
Models applied to video are fundamentally more complicated than models applied to still images, and that’s largely due to challenges around temporal consistency. In a still image, if you edit an element, it doesn’t then move elsewhere. In video, we have to make sure everything moves consistently across every single frame, and it can’t just be "good enough" — it has to be precise. Building up that precision and consistency is an ongoing challenge.
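To make the temporal-consistency problem concrete, here is a minimal illustrative sketch (not Runway's actual method): if a model edits each frame independently, even a clip that starts perfectly static can pick up frame-to-frame flicker, which a simple consistency metric can detect. The per-frame brightness edit and the `flicker` metric below are hypothetical, chosen only to demonstrate the idea.

```python
def flicker(video):
    """Mean absolute difference between consecutive frames --
    a crude proxy for temporal inconsistency (lower is better)."""
    diffs = []
    for prev, cur in zip(video, video[1:]):
        diffs.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur))
    return sum(diffs) / len(diffs)

def edit_per_frame(video):
    """Hypothetical independent per-frame edit: each frame gets a
    different brightness offset, as an uncoordinated model might."""
    return [[p + (i % 2) * 10 for p in frame] for i, frame in enumerate(video)]

def edit_consistent(video):
    """The same edit applied uniformly to every frame."""
    return [[p + 10 for p in frame] for frame in video]

# A static 4-frame clip: identical frames, so zero flicker.
clip = [[100, 120, 140]] * 4

print(flicker(clip))                  # 0.0
print(flicker(edit_consistent(clip))) # 0.0 -- uniform edit preserves consistency
print(flicker(edit_per_frame(clip)))  # 10.0 -- independent edits cause flicker
```

Real video models face a much harder version of this: edits must stay coherent not just in brightness but in geometry, motion, and identity across hundreds of frames.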
AI Magic Tools
The invention and creation of new tools are powered by our amazing team, who integrate themselves deeply into our product and into our users’ workflows. Research scientists work closely with engineering and product to craft tools that are useful for video editors, filmmakers, artists, and creators at large. We are able to anticipate and problem-solve in really creative ways, and they’ve been doing that every day for the last four years. You could say we’ve effectively been training for four years to be able to ship AI products weekly now.
The Text-to-3D Texture Model
We spend a lot of time talking to our users, both current and prospective. This helps us keep our finger on the pulse of what creators are working on, and what tools they need. If their needs are something we think we can build, we build it. We also have a really incredible in-house creative and video team who helps us understand user flows and workflows to build products that are practical and useful. We were excited to build a tool that would benefit the 3D creator community and will continue to listen to them to bring them more useful tools in that arena.
Our goal at Runway is to build the next Creative Suite, the Generative Suite. Making professional content should be fast and easy on the editing front, but also on the creation front. We believe we will be able to completely generate and edit a film in the very near future, with the b-roll, actors, music, voices, and effects all generated. These capabilities within a new Generative Suite will enable a new class of filmmaking and storytelling possibilities, and if you pair that with the automation tools we already have, it will revolutionize the creative industry forever. That is where we believe the future of Runway lives.
We try to create products that unlock as much flexibility for our users as possible. We want them to be able to create exactly what they want, and in this case, where they want. We focus on outcomes rather than processes or tools at Runway, so we try to keep those outcomes as flexible as possible. With that said, we are always looking for new partners to work with, and love being able to create valuable tools with other communities.
We are just scratching the surface of what we’re building, and that includes Text-to-Video, which is coming very soon. We are quite literally shipping new magic and new products weekly, and there is so much more to come! The best place to follow along is in-product, but you can also keep up on Twitter, Instagram, and Discord. We are also always looking for new talent to join us across all team functions on our careers page.