Hi, I’m Ruben, a Designer & Developer based in Japan since 2006, working mainly on app development and web design for local clients (Honda, ANA, LoFT). Since last year, I’ve been increasingly doing VFX and interactive experiences in Unity using photogrammetry.
I started back in 2001 as a Flash ActionScript developer for websites and games. That allowed me to explore both coding and visual design, and I’ve been moving between those two directions ever since.
I’ve also always been very interested in visual arts, interactive installations, and computer graphics. In particular, the minimal and essential nature of shader programming is extremely fascinating to me.
Future Cities Project
I started exploring and combining my interest in design, interactive art, and technology in 2018, with the Future Cities installations with photographer Cody Ellingham and musician SJF.
We were experimenting with different ideas and came up with the concept of an experience that would take people on dream-like journeys through deconstructed, fragmented cities.
I had experience with Unity and shaders, so I started right away on the engine to run the installation and the code to render the point clouds, and after a few months (and quite a few long nights) I had a first version ready.
Different parts of the city were scanned using photogrammetry and recombined in unusual ways - streets mirrored, turned upside down, or leading to completely different areas - creating a world that still looked real, yet unsettlingly different.
The audience could interact with the installation using a controller in the middle of the room, while we were performing live, dynamically altering the 3D world projected on the walls.
In 2019 we had shows in Tokyo and Taiwan, and with the support of Sony New Zealand we brought the experience to Wellington for a 1-hour live performance at the National War Memorial.
I see volumetric capture as a new medium for recording reality in the era of VR/AR. We have been using photos and videos to tell our stories for quite a long time, but I believe that in a few years it will feel natural to take 3D snapshots of our lives, or to document historic events that way.
In my work, I follow that direction, trying to get as close as possible to real life by capturing street scenes as 3D data.
To be able to scan as quickly as possible, I use a couple of 360 cameras and sometimes drones and small laser levels to acquire the footage.
I then process the data with a series of scripts for image analysis and optimization, before feeding it to the photogrammetry pipeline.
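The post doesn’t show these scripts, so as a purely hypothetical sketch (the function names and threshold are my own), one common pre-processing step before photogrammetry is discarding motion-blurred frames, scored here by the variance of a simple Laplacian filter:

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian over a grayscale frame.
    Higher values mean sharper detail; blurred frames score low."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def select_frames(frames, threshold=50.0):
    """Keep only frames sharp enough to feed the photogrammetry pipeline.
    The threshold is arbitrary and would be tuned per camera."""
    return [f for f in frames if sharpness(f) >= threshold]
```

A flat, featureless (or heavily blurred) frame scores near zero and gets dropped, while a detailed frame passes through.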
This technique allows for extremely fast scans of reasonable quality. That means I can basically “freeze” the hectic nature of a morning market in Asia in less than a minute, capturing people in their daily life.
And here's the raw point cloud data:
Working on Dissolving Realities
While photogrammetry is most often used to create photo-realistic models, I’m more interested in embracing the fuzzy nature of point data and working on real-time shaders to build a more cohesive and organic 3D image.
In Dissolving Realities, I use point clouds of around 50 million points. I load the point data into Unity in batches of different sizes, depending on distance and direction from the camera, and pass that data to a geometry shader I developed to “draw” the points on screen, which also takes care of all the effects and animations.
There's a lot of optimization in making sure I only draw what I need to draw, with specific levels of detail.
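The actual pipeline lives in Unity and a GPU geometry shader, but the batching idea can be illustrated with a small NumPy sketch (band edges, the crude view-direction cull, and the decimation factor are all my own illustrative assumptions, not the author's values):

```python
import numpy as np

def batch_by_distance(points, cam_pos, cam_dir, edges=(10.0, 30.0, 80.0)):
    """Split a point cloud into LOD batches by distance from the camera,
    culling points behind it. Nearer batches would be drawn densely,
    farther ones decimated."""
    rel = points - cam_pos
    dist = np.linalg.norm(rel, axis=1)
    in_front = rel @ cam_dir > 0.0        # crude view-direction cull
    bands = np.digitize(dist, edges)      # 0 = nearest distance band
    return [points[in_front & (bands == i)] for i in range(len(edges) + 1)]

def decimate(batch, keep_every):
    """Draw only every n-th point in a far-away batch."""
    return batch[::keep_every]
```

Drawing only the points that matter, at a level of detail matched to their distance, is what keeps tens of millions of points interactive.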
I’m using a function in my shader to move the points organically in specific directions, taking into account their angle and luminosity, as well as a rough density calculation, to simulate a cascading effect. Besides making them fall, I’m also changing the way they’re rendered, pushing up their brightness to create the "burning" effect.
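The real version is a GPU shader; as a toy CPU-side sketch of the same idea (the delay formula, fall curve, and brightness boost are hypothetical stand-ins, not the author's actual math), per-point attributes can drive a staggered fall plus a brightness push:

```python
import numpy as np

def displace(positions, luminosity, density, t, fall_speed=0.5):
    """Hypothetical per-point 'dissolve' animation: each point starts
    falling after a delay derived from its luminosity and local density,
    so the cloud collapses as a cascade rather than all at once."""
    delay = luminosity + 0.5 * density
    progress = np.clip(t - delay, 0.0, None)   # 0 until the point's turn
    out = positions.copy()
    out[:, 1] -= fall_speed * progress ** 2    # accelerating downward drift
    boost = 1.0 + np.minimum(progress, 1.0)    # "burning" brightness push
    return out, boost
```

Because every point only reads its own attributes and the global time, the same logic maps directly onto a per-vertex shader function.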
I see the effect as closely resembling lost memories. Clear images, almost in reach, but suddenly collapsing, dissolving and fading when you try to get a closer look.
In my latest video (see below), I've also experimented with other effects of "fragmentation of reality" and a way to transform points into splines. I like playing with the idea of deconstructing worlds and then reassembling them into their original state.
Here are several screenshots:
Until a few months ago I was mainly using a MacBook, so I needed to keep everything as simple and optimized as possible. Another thing that helps is that every single aspect of the visualization, from drawing the points to vertex lighting and animation, is managed by the shader. This way, I can easily move and animate tens of millions of points on modest hardware.