We’re living in a very interesting period in 3d visualization. With the democratization of the 3d scanning market, more and more companies are integrating photogrammetry into their pipelines. 80.lv talked with James Busby (Ten24 Media LTD) about the current state of 3d scanning and how game developers can benefit from this technology.
My name is James Busby, Director at Ten24 Media LTD, a specialist 3d capture and character creation studio based in Sheffield, England. I grew up in Northern Ireland and moved to England in 1998 to study media and film at Bradford University. During my time there I got hooked on 3d modeling and animation, and eventually went on to create a 5-minute animated film for my final year project, which ended up landing me a job at Argonaut Games.
I started out doing character and environment modeling on the futuristic racing game Powerdrome, developed by Argonaut Games. I left there to work at ARK VFX, where I did a lot of varied work: everything from modeling and lighting to animation and rigging. We produced over an hour of FMV footage for Driver: Parallel Lines, as well as a host of other game advertisements and music videos, including Muse – Sing for Absolution.
I left ARK VFX in 2008 to start Ten24, and was lucky enough to land a big contract doing all the characters for Axis Animation’s famous Dead Island cinematic. After that we did a lot of character work for Halo 4 Spartan Ops, and from there the company started to get going. Chris Rawlinson, my business partner, joined us in 2010. He’s an ex-Sumo Digital Lead Character Artist. It was at this point we started investing in cameras and experimenting with scanning technology, basically to try and save time modeling. Things quickly took off and we landed jobs working on some very high profile productions. Our clients include Square Enix, Nike, Pixar, Sega, AMD, BBC, The Mill, Warner Bros, Guerrilla Games, Io Interactive, Axis Animation and Realtime UK, to name but a few. In 2012 we started our 3D Scan Store, selling affordable professional 3d scans to artists and studios all over the world. Our goal was to share the fruits of our scanning technology with the community at a price point that wasn’t prohibitively expensive. We now have over 1000 models on the store, with the aim of adding another 1000 – 2000 over the coming year.
How do you usually approach character creation? What are the essentials to remember while building a character?
Both of us coming from character artist backgrounds in the games and VFX industries, we developed our scanner to augment that side of our work rather than to provide pure scanning services. Although the company has since evolved into a scanning studio, we still work using the principles and techniques we learned during our days as character artists.
First and foremost, a good knowledge of anatomy and keen observation skills are key to creating any character. Even with a highly detailed scan, there is still a lot of sculpting and clean-up work that requires traditional digital modelling skills. Occluded areas can pose a big problem, and knowing how they should look, as opposed to guessing, is a valuable asset.
There are a lot of technical considerations to take into account as well. Key among them is good topology: whether working from a scan or building a character from scratch, planning the topology and taking the time to create a mesh that works for that particular character is very important.
Rendering and shading is another vital aspect of the pipeline. Understanding what makes something look real is very important. Getting carried away with tiny pore-level details is all well and good, but from a production perspective this stuff is often obscured by motion blur, post effects and lighting. For me the most crucial aspects of a render are correct subsurface scattering, reflection and the eyes – the last of which can make or break a render.
One of the most interesting developments in recent character-building technology is 3d scanning. How does 3d scanning change the way we can create characters and work with them?
First and foremost, scanning provides a fast and accurate way to generate photorealistic reference meshes with textures. A lot of people are under the illusion that once the scan is complete, all you do is retopologise the model and hit render. Nothing could be further from the truth, but scanning does provide a huge increase in production speed, especially where facial rigs are concerned. Manually sculpting FACS shapes and expressions can take months, whereas scanning can produce realistic results in days using off-the-shelf retopology and wrapping tools such as Wrap 3.
Photogrammetry scanning also provides the artist with textures, which again is a huge time-saver; whilst they will require clean-up and editing, it’s still a lot faster than painting them by hand.
Could you talk a little bit about your setup? How many cameras do you have?
Our system is a custom-built setup based on our last 6 years of research and development. We started with 2 cameras back in 2010 and slowly built the system up to the 180-camera combined full body and head scanner that it is today. Our main setup comprises 140 body cameras and 40 dedicated head cameras, 9 of which are 50-megapixel Canon 5DS Rs.
The scanner differs from almost all the others out there in that it is an integrated head and body setup. Rather than capturing the two separately, we can do both with one shot. The biggest advantage is continuity in terms of texture, scale and neck position. Skipping from one setup to another results in mismatched scans that can be hard to convincingly combine later, especially where the texture is concerned.
The scanning process itself is very simple: the actor stands in the rig, we adjust the head cameras to compensate for their height, and then we can shoot as many scans as we like. It is possible to shoot upwards of 1000 scans per session.
Another very important aspect of the rig is the lighting setup. We use 12 strobes to get as flat a lighting setup as we can, and we’re currently working on a pipeline to remove shadows altogether, particularly under the arms and between the legs, for an even more neutral texture.
What advantages does this technology bring to your production process? How does it all work?
The biggest advantage of scanning is the speed increase over traditional sculpting methods. Before we got into scanning, producing a fully finished, production-ready character took about 4 – 5 working weeks, whereas with the scanner we are able to turn out a fully retopologised, photoreal character in around 6 – 7 days.
Scanning also allows for the integration of real actors into a production, something that is becoming more and more popular these days, particularly in the games industry. There is nothing like adding a big name actor to a game to give it some gravitas.
The biggest challenge we face is still turning a static scan into something that looks, and more specifically moves, like a real human. The obvious solution to this is 4D capture, whereby we scan using video cameras rather than stills. This in itself presents a lot of problems: a fixed performance can’t be changed, and scanning in 4D is a hugely expensive and data-intensive process. Our initial full body 4D tests are working out at around 300 gigabytes of photographic data per second at 60 fps. Processing this data requires a huge amount of storage and processing power. I think the solution lies somewhere in the middle: full body scanning that captures a series of BACS (Body Action Coding System) poses, combined with motion capture and corrective blend shapes, is probably the more dynamic and editable way to go for the time being.
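To put those 4D capture figures in perspective, here is a rough back-of-envelope calculation. The 300 GB/s and 60 fps numbers come from the interview; the 10-second take length is an illustrative assumption, not a Ten24 figure.

```python
# Back-of-envelope for the 4D capture data rates mentioned above.
DATA_RATE_GB_PER_S = 300   # photographic data per second (from the interview)
FPS = 60                   # capture frame rate (from the interview)

# Data produced across all cameras for a single frame.
gb_per_frame = DATA_RATE_GB_PER_S / FPS
print(f"Per frame (all cameras): {gb_per_frame:.0f} GB")

# A hypothetical 10-second performance take (assumed session length).
session_s = 10
total_tb = DATA_RATE_GB_PER_S * session_s / 1000
print(f"A {session_s}s take: {total_tb:.1f} TB of raw photographic data")
```

Even a short take lands in the multi-terabyte range before any processing, which is why the interview points toward the hybrid pose-plus-mocap approach instead.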
Could you discuss the cleaning of the models? How do you prepare the scans?
Cleaning scans is a fairly involved process that does actually require a reasonable amount of sculpting experience. We do everything in ZBrush, as it’s the only real way to handle large polygonal objects. Generally, we start with the raw scan, decimate it down, then fill any holes in the data, after which we use WrapX or Wrap 3 to shrink-wrap a base mesh to the model. We then project the high-resolution details back onto the new topology and go in with the clay buildup and smooth brushes to remove any artifacts and noise. There is always the danger of going too far with the smooth brush and destroying the nice medium-frequency details, so we like to spend a lot of time on this part of the clean-up process.
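The trade-off described here — smoothing away scan noise without flattening the medium-frequency detail — can be illustrated with a toy example. This is not Ten24’s tooling; it is a minimal 1D Laplacian-smoothing sketch standing in for a sculpting smooth brush, applied to a synthetic noisy profile.

```python
import numpy as np

def laplacian_smooth(values, iterations):
    """Simple 1D Laplacian smoothing: each sample moves halfway toward
    the average of its two neighbours (wrapping at the ends).
    Stands in for repeated passes of a smooth brush."""
    v = values.astype(np.float64).copy()
    for _ in range(iterations):
        avg = (np.roll(v, 1) + np.roll(v, -1)) / 2.0
        v = v + 0.5 * (avg - v)
    return v

# A synthetic "scan" profile: medium-frequency bumps plus high-frequency noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
surface = np.sin(3 * x)                      # medium-frequency detail worth keeping
noisy = surface + rng.normal(0, 0.05, 64)    # scan noise on top

light = laplacian_smooth(noisy, 2)    # removes most noise, keeps the bumps
heavy = laplacian_smooth(noisy, 200)  # over-smoothing flattens the detail too

# Peak-to-peak range as a crude measure of surviving detail.
print(np.ptp(light), np.ptp(heavy))
```

A couple of passes cleans up the noise while the bumps survive; hundreds of passes erase them along with it, which is the “going too far with the smooth brush” failure mode described above.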
Could you talk about the way you are using the scanning technology to get the materials?
The scanner only really captures the colour map, so all the materials we create are extracted from this. We are working very closely with Jeremy Celeste and his new de-specular workflow, which allows him to remove the reflections from the images; from this he is able to generate correct in/out displacement maps, as well as specular maps by subtracting the specular from the non-specular textures. It’s not as accurate as photometric scanning, but it’s not far off, and it allows us to retain a single-shot scan process.
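The subtraction step can be sketched as a simple per-pixel operation. This is an illustrative sketch, not Jeremy Celeste’s actual tooling: it assumes two already-aligned images of the same surface, one containing specular reflections and one despeculared, with values normalized to [0, 1].

```python
import numpy as np

def extract_specular(original, despeculared):
    """Approximate a specular map as the per-pixel difference between an
    image containing reflections and its despeculared counterpart.
    Negative values (noise or slight misalignment) are clipped to zero."""
    diff = original.astype(np.float32) - despeculared.astype(np.float32)
    return np.clip(diff, 0.0, 1.0)

# Tiny synthetic example: one bright highlight on an otherwise matte patch.
original = np.array([[0.4, 0.9], [0.4, 0.4]], dtype=np.float32)      # with specular
despeculared = np.array([[0.4, 0.4], [0.4, 0.4]], dtype=np.float32)  # reflections removed
spec = extract_specular(original, despeculared)
print(spec)  # only the highlight pixel survives the subtraction
```

In practice the alignment requirement is why the single-shot capture the interview describes matters: both inputs must come from the same pose, or the difference picks up motion instead of reflectance.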
What is the best way to capture a human face?
It’s hard to say; there are multiple ways to capture: photogrammetry, photometric, structured light, and so on. Certainly, in terms of capturing quickly, photogrammetry is the way to go: fast captures of less than 1/10,000th of a second mean that you can reliably capture expressions without having to worry about the subject moving. We are working on a new head capture system at the moment, which we will hopefully have up and running at the beginning of 2017, but I can’t really say too much about that at the moment as it’s top secret.
For most of your rendering you’re using Toolbag. Why do you think this tool works best for visualizing 3d models?
Marmoset Toolbag is an awesome tool for quickly visualizing real-time characters and getting a good understanding of how they could look in engine. It’s super fast, easy to set up, and has some nice real-time shaders which are simple and easy to use. The ability to load HDRs and test out different lighting setups is fantastic.
How do you think scanning, photogrammetry and capture technology is going to change the way we approach game development? Will these things make games more expensive, or make people’s lives easier?
First and foremost, I think scanning tech has dramatically increased both the production speed and the quality of game characters and environments. Whilst some people argue that it’s taking work away from the artist, it’s just a tool; it’s there to provide great reference. Hardly any scans are used as-is, unless it’s for a facial likeness, of course. Having a photorealistic 3d scan as a base to build your characters from is about as good a start as you’re going to get. I wouldn’t have thought it would have much of an effect on production costs; in our experience, the money spent on scanning is balanced against massively reduced production times.
James Busby, Director at Ten24 Media LTD
Interview conducted by Kirill Tokarev.