It takes an image and a text prompt and works from there.
Pika Labs has introduced a new feature for its AI text-to-video platform. In addition to generating videos from text prompts, the tool can now take an image and animate it, a capability the company calls image-conditioned video generation.
You can try Pika Labs' platform by joining the beta on its website. Those lucky enough to get access are already sharing some interesting results.
This is not the first AI-powered image-to-video tool out there. Earlier this year, the creators of Dreamix introduced the same feature, though that app is better known for its video editing abilities: you can feed the AI a video, enter a prompt, and it will transform the footage into whatever you describe.
Pika Labs' core concept, text-to-video, isn't new either. NVIDIA showed off its Stable Diffusion-based model in April, while Runway rolled out Gen-2 in March. Both companies keep experimenting with AI, showcasing features like video-to-video generation and turning 2D videos into 3D structures.
If you're curious about what Pika Labs has in store, check out its site. Also, don't forget to join our 80 Level Talent platform and our Telegram channel, and follow us on Threads, Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.