
Ziva Dynamics Announces A New ML-Trained Facial Rigging Service

The new cloud-based system was trained using a 15TB library of 4D scan data.

Ziva Dynamics has launched ZRT Face Trainer, their new machine-learning-trained, cloud-based facial rigging service aimed at games and real-time work.

The new toolkit, currently available for selected users to test for free, can turn "any face mesh into a high-performance real-time puppet in one hour".

The Ziva team states that the new cloud-based automated facial rigging platform was trained to recreate a range of human actors' expressions using a 15TB library of 4D scan data. The system is said to convert an uploaded character head mesh into a "real-time puppet" capable of expressing over 72,000 facial shapes, all within an hour.
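Ziva has not published ZRT Face Trainer's internal representation, so as context only, the sketch below shows what "expressing a facial shape" typically means for a real-time puppet: in a generic linear blendshape model, each shape is a per-vertex offset from the neutral mesh, blended by animation weights every frame. All sizes and names here are illustrative assumptions, not ZRT's actual data.

```python
import numpy as np

# Illustrative blendshape evaluation, NOT Ziva's published method.
# Toy sizes; ZRT's real shape and vertex counts differ.
n_verts, n_shapes = 5_000, 100

neutral = np.random.rand(n_verts, 3)                   # rest-pose vertex positions
deltas = np.random.rand(n_shapes, n_verts, 3) * 0.01   # per-shape vertex offsets

def evaluate(weights):
    """Blend the neutral mesh with weighted shape deltas (one frame's pose)."""
    # Contract the shape axis: (n_shapes,) x (n_shapes, n_verts, 3) -> (n_verts, 3)
    return neutral + np.tensordot(weights, deltas, axes=1)

frame = evaluate(np.random.rand(n_shapes))
print(frame.shape)  # (5000, 3): deformed vertex positions for this frame
```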

Features:

  • The ZRT Face Trainer is built on a library containing over 15TB of 4D scan data.
  • ZRT face puppets can express over 72,000 training shapes, as well as novel face poses.
  • ZRT face puppets are only 30MB at runtime and run at real-time frame rates (3ms/frame on a single CPU thread).

Please note that in order to be processed, head meshes first need to be cleaned and manually retopologised to match a standard point order. They also need to be subdivided for convincing deformations.
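As a quick sanity check before upload, a mesh "matching a standard point order" must at minimum have the same vertex and face counts as the reference topology. The sketch below is a minimal, assumed example: the template file, paths, and checks are hypothetical, and matching counts are necessary but not sufficient for true point-order correspondence.

```python
def load_obj_counts(path):
    """Return (vertex_count, face_count) for a Wavefront OBJ file."""
    verts = faces = 0
    with open(path) as f:
        for line in f:
            if line.startswith("v "):      # vertex lines only ("vn"/"vt" excluded)
                verts += 1
            elif line.startswith("f "):    # face lines
                faces += 1
    return verts, faces

# Hypothetical files: "template.obj" defines the standard topology,
# "head.obj" is the user-supplied mesh to validate.
template_v, template_f = load_obj_counts("template.obj")
head_v, head_f = load_obj_counts("head.obj")

if (head_v, head_f) != (template_v, template_f):
    raise ValueError(
        f"head.obj has {head_v} verts / {head_f} faces; template expects "
        f"{template_v} / {template_f}. Retopologise before uploading."
    )
print("Counts match the template; actual point order still needs verification.")
```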

You can learn more about the process here or visit the product page and sign up there. Also, don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artwork, and more.
