We present to you our interview with Eisko about the techniques behind modern 3D capturing.
Eisko is a 3D company specializing in the capture, reconstruction and representation of digital doubles of celebrities. Our services range from fully textured static models to animatable characters, animated performances and complete holographic shows.
Our company was born in 2011, at the crossroads between major European academic and industrial research projects. Our founder Cédric Guiard, who specialized in Computer Graphics and Computer Vision technologies during his PhD, gathered a team of engineers and designers around a new approach to 3D capture: analyzing and reconstructing models based on the direct acquisition of raw data. The result is a hyper-realistic, true-to-life model made of physically-based materials, ready to be used in both real-time and offline environments.
Some of our latest projects involved world-class tennis players for an upcoming video game, the capture of celebrities for a VR commercial, and an as-yet-undisclosed feature film. Perhaps the most interesting one was the recreation and animation of the late Dutch superstar André Hazes as a hologram for a posthumous show in Amsterdam, with impressive results and enthusiastic reactions from the public. Seeing the digital double of a departed icon back on stage was quite an experience, and surely not the last of its kind.
What makes modern 3D capturing easier than it was years ago?
Well, the development of the two main photogrammetry packages, from Agisoft and Capturing Reality, played a great role in this democratization. Additionally, the rise of a wide range of 3D industries has resulted in the birth of a rapidly growing community of users who share techniques and results as more and more open source solutions become available. There are very exciting times ahead with the advent of mobile scanning and tracking technologies, too!
We firmly believe that new applications and uses for 3D technologies will keep on sparking interest from professionals and laymen alike, while software evolution will play a great role in making 3D digital content creation more accessible than ever before.
However, the quality and accuracy required by some demanding shots, games or projects sometimes call for a more advanced capture setup, reconstruction & animation technologies, and a “savoir-faire” that companies like ours can provide. Providing a PBR output is also a huge advantage for offline or real-time environments.
Eisko’s R&D team has built two unique capture systems from scratch, both easily portable, tailored for the human face and body. We’re also really proud of our custom-made reconstruction pipeline, which really speeds up the process and limits human intervention while leaving great room for client customization.
The result is a very cost-effective way to produce highly accurate photoreal digital doubles, without having to re-sculpt and re-texture the whole model by hand.
The first system we created was designed specifically to capture the human face and its expressions. It is a 3 m³ sphere composed of thousands of LEDs and several special “material cameras” which we built specifically to record accurate skin properties. Hardware-wise, it was designed with portability in mind: it is small, easy to set up and to fly overseas for just a few hours’ capture session.
The second system is more of an all-rounder, designed to scan full bodies and objects of bigger volumes. Its icosahedron shape has a nodal structure with each of the nodes consisting of a mixture of cameras and lights. Those can be removed or added at will to modify the volume or the framing, hence making it both flexible and transportable. Thanks to our two systems, we’re able to capture both the face and body with lifelike accuracy!
It’s always a difficult subject to tackle, especially getting a proper unlit, albedo color for the skin. As you all know, one of the biggest issues is the complex layering of such organic material, and the fact that the human eye is very familiar with it (and very critical of it). This is why our system was first designed to dissociate skin material into optical properties (complexion, pigmentation, coloring) and geometrical properties (low and high frequencies, pores, vesicles, micro relief). You can read more about this on our blog.
In that regard, it has always been a paramount concern for us to get physically accurate materials and to be able to single them out. This comes in more than handy further down the track when extracting our own “traditional” set of textures such as roughness/gloss, transmission, scatter, cavity, etc. Eventually, we use them to refine our high definition model, leaving little to no work to do to achieve photorealism and fidelity. It’s undoubtedly human skin, but more importantly, it’s your skin.
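One common way to think about dissociating those low- and high-frequency components is frequency separation: low-pass the captured map to recover the broad coloring, and keep the residual as micro detail. The sketch below is purely illustrative, not Eisko's actual pipeline — it uses a naive box blur on a single-channel map, where a production tool would use proper image libraries and Gaussian filters:

```python
# Illustrative frequency separation on a single-channel map.
# A box blur stands in for the low-pass filter; real pipelines
# would use image libraries and Gaussian kernels.

def box_blur(img, radius):
    """Low-pass: average each pixel over its (2r+1) x (2r+1) neighborhood."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

def split_frequencies(img, radius=2):
    """Return (low, high): low carries broad complexion/coloring,
    high carries the residual micro relief (pores, fine detail)."""
    low = box_blur(img, radius)
    high = [[img[y][x] - low[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
    return low, high
```

By construction, adding the two bands back together reconstructs the original map, which is what lets each band be edited or extracted independently.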
What’s more, today’s engines, whether real-time or offline, even on mobile, have made great progress in shading, especially in their SSS (subsurface scattering) capabilities. When you can feed them good-quality source data, it’s much easier to get photoreal skin. So yeah, it’s more about getting the proper input than spending days and days tweaking shaders.
However, capturing good data to create a believable eye rendering remains a challenge, especially in tricky areas like the junction between the skin of the eyelid and the sclera. It took us some time to achieve something that we were happy with.
For any capture, we usually consider four poses as a minimum basis: one neutral with eyes open, one with eyes closed. Then we capture two extreme expressions, compressed and uncompressed, which is quite standard in the industry. The idea is to get all the basic wrinkles and muscular deformations, which we can combine in a shader later on to dynamically modify the render of the expression during facial animation. Here is a short article we’ve written about this process.
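As a rough illustration of that combination step, the wrinkle detail seen on screen can be blended between the neutral and the two extreme maps, driven by weights coming from the animation rig. The function below is a hypothetical, simplified sketch — a single scalar per texel and a plain linear blend, not Eisko's actual shader:

```python
# Hypothetical per-texel wrinkle blend: mix neutral, compressed and
# stretched wrinkle intensities by rig-driven expression weights.

def blend_wrinkles(neutral, compressed, stretched, w_compress, w_stretch):
    """Linear blend of scalar wrinkle-intensity values for one texel.
    w_compress and w_stretch are in [0, 1] and come from the rig;
    whatever weight is left over goes to the neutral map."""
    w_neutral = max(0.0, 1.0 - w_compress - w_stretch)
    return (w_neutral * neutral
            + w_compress * compressed
            + w_stretch * stretched)
```

With both weights at zero the texel shows the neutral map unchanged; as an expression compresses that region of the face, its captured wrinkle detail fades in proportionally.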
Then we capture a large range of expressions, emotions, visemes, and deformations used to create the entire set of blendshapes for the character. Once we have captured the standard set, we use our reconstruction pipeline to quickly process all the poses, after which they get sorted out and placed in a rig ready to be animated.
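The math underlying such a blendshape rig is standard: the animated mesh is the neutral pose plus a weighted sum of per-shape vertex deltas. Here is a minimal sketch of that formula — the function name and data layout are illustrative, not Eisko's pipeline:

```python
# Standard blendshape (morph target) evaluation: for each active shape,
# add its weighted delta from the neutral pose to every vertex.

def apply_blendshapes(neutral, shapes, weights):
    """neutral: list of (x, y, z) vertex tuples.
    shapes:  dict of shape name -> vertex list (same topology as neutral).
    weights: dict of shape name -> float weight, typically in [0, 1]."""
    result = [list(v) for v in neutral]
    for name, target in shapes.items():
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue  # inactive shape contributes nothing
        for i, (tv, nv) in enumerate(zip(target, neutral)):
            for axis in range(3):
                result[i][axis] += w * (tv[axis] - nv[axis])
    return [tuple(v) for v in result]
```

Because each shape contributes only its delta, localized shapes (brow, eyelid, mouth corner) can be layered and animated independently without fighting each other.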
But we are obviously free to improvise and reconstruct any special expressions as well, and that’s the fun part. In the end, it relates in many ways to a usual on-set actor-director relationship where, depending on the needs for a certain shot or a certain use case, you would guide the actor towards a pose, and even sometimes push them a little to force them to use muscles they never knew they had!
An animated rigged character head
We have always invested heavily in R&D to improve our technology and create new tools. As a part of this process, we are releasing a brand new, updated version of our rig, which you can preview by downloading a free simplified sample. Just go to our website, get your own copy and start playing around!
We based this new version on the captured shapes from the model for our last static datapack. Localizing the blendshapes required some work so that animators could tweak different parts individually. You can see the result for yourself through the animatable digital double sample we have just released, which comes with a bunch of Maya shelves that contain some useful shortcuts for people who want to try their hand at animating the head model.
The upgrade of our rig allowed us to improve the quality of our animatable digital doubles, making the production process both faster and simpler. And since compatibility is an important factor for us, we are also working with motion capture specialists from all around the world to ensure that our rig is compatible with the major solutions and software available today.
This improvement is only the first of many that we kickstarted a few months ago. We already have a roadmap of little things we want to implement in the near future, so stay tuned so you don’t miss anything!
You can reach us directly through our website or on our social media pages like Facebook, LinkedIn or Twitter, whether it’s for a static model, an animatable digital double or a fully animated performance. You can also catch us at most international events like SIGGRAPH or GDC. The best way yet is of course to visit us directly in Paris – we’ll gladly show you around and talk digital humans on our rooftop terrace, with a nice view of the Eiffel Tower and the city!
Also, don’t hesitate to download our model, play around with it and drop us some renders or anims you’ve created! And if you have any questions or comments about our process, job applications or anything digital-human related, we’d love to hear from you.