Well, a small/medium Intuos Pro is way cheaper than an iPad Pro + Pencil... just saying... And it works better with ZBrush...
It might only be a proof of concept for now, but showing low-bounce-count raytracing that still looks decent, especially after denoising, gives us a nice roadmap for the future. Given time, we may move to this as the new standard, or at least a viable alternative to baked lighting.
Maxime Lhuillier from CNRS/Institut Pascal/UCA demonstrated a technology that is capable of creating a 3D model of a space from a 360 video taken by a simple camera. Sounds crazy, right?
The process is very simple. Maxime Lhuillier used a Garmin Virb 360 camera to record a 360 video. Then, the software generated a 3D model of a space as complex as a forest from that recording. Yes, the image quality is poor compared to results from professional cameras, and backlit portions of the video lose visible detail to compression. Still, just think about converting a 360 video into a 3D model on a simple laptop!
There are several limitations, though. The scene must be static, with sufficient texture and lighting, and the camera motion must be slow — you should walk slowly. The idea here is not to get a perfect 3D scene, but to visualize a space using consumer cameras.
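The article doesn't detail Lhuillier's algorithm, but reconstruction pipelines like this generally rest on multi-view geometry: once the camera poses along the walk are estimated, each 3D point is recovered by triangulating its observations from two or more frames. As a minimal sketch of that core step (not the actual software described here, and with made-up camera parameters), here is linear two-view triangulation in plain NumPy:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : 2D image points (x, y) observed in each view
    Returns the 3D point in non-homogeneous coordinates.
    """
    # Each observation contributes two linear constraints on the 3D point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A
    # (right singular vector with the smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical pinhole cameras: identity pose and a 1-unit baseline.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

# A ground-truth point in front of both cameras, recovered from its images.
X_true = np.array([0.3, -0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true, atol=1e-6))  # → True
```

With noise-free synthetic observations the point is recovered exactly; with real footage, every step upstream (feature matching, pose estimation) adds error, which is exactly why the static-scene, good-texture, slow-motion conditions above matter.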
Check out these two models generated from videos taken by a 360 camera rig consisting of four helmet-mounted GoPro Hero 3 cameras recording video at 100 fps with frame-accurate synchronization: