Bartosz Sowa spoke about the Last Login project, discussing how he built the scene piece by piece and how he used Ambient Occlusion and Curvature Maps to achieve the aged look of the computer and the telephone.
Introduction
Hello! My name is Bartosz Sowa, and I'm a 3D Artist from Poland. I'm currently a full-time student at Think Tank Training Centre Online, enrolled in the "CG Asset Creation for Films & Games" program. I'm in my intermediate term and have chosen to specialize in Prop & Environment Art for Games.
My journey into 3D actually began during my first year of architecture studies. I quickly discovered that while architecture itself wasn't my path, the 3D modeling aspect of it was fascinating. I decided to pivot, and for the next two years, 3D became my main hobby.
I taught myself the basics using Blender and Substance 3D Painter, creating simple assets purely for the joy of it. When I decided I wanted to pursue this as a career, I enrolled at Think Tank to take my skills to a professional level. As for projects, my current focus as a student is on building a strong portfolio.
Last Login is my latest personal piece, where the primary goal was to faithfully translate a beautiful 2D concept into a high-fidelity 3D render, focusing on mood and realism rather than game-engine optimization.
Last Login
The project started with a clear directive: find a compelling 2D image and translate it faithfully into a 3D scene. My main criteria for the concept were that it had to be grounded in realism (not stylized) and that it had to be genuinely challenging. After a long search, I stumbled upon the Flickr gallery of "Decentra", an artist who photographs abandoned locations.
The moment I saw the photo that became Last Login, I knew it was the one. The complexity of the scene, the inherent storytelling, and the atmospheric composition immediately captivated me. I presented the concept to my Foundation Term supervisor, Romain Côte, who approved it.
To be honest, I wasn't entirely sure how I would execute it. This was a Foundation project, and I had virtually no prior experience with the core software pipeline (like Maya, Mari, and V-Ray) before starting at Think Tank. The biggest challenge, in my mind, was the background wall with the peeling wallpaper; it seemed incredibly complex, especially given the tight 4-week deadline.
Before I started modeling, my first step was to bring the original photo into PureRef. I broke the entire scene down into primary, secondary, and tertiary elements. The photo served as my main guide for composition and lighting, but I knew I'd need to gather a lot of additional real-world references for each asset to build them accurately.
Workflow
My composition was 100% based on the reference photo, so my first and most critical step was achieving a perfect camera match. For this, I used fSpy, a tool that analyzes an image to find the camera's original focal length and orientation. I imported the resulting camera directly into Maya, which gave me a solid foundation and an "image plane" to work against.
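Roughly, that camera match translates to a few attributes on a Maya camera. The sketch below uses placeholder values standing in for whatever fSpy actually solved; the focal length, film back, transform, and file path are illustrative, not the project's real numbers:

```python
import maya.cmds as cmds

# Placeholder values standing in for an fSpy solve (focal length, sensor size, camera transform).
focal_length_mm = 24.0
sensor_width_mm = 36.0

cam, cam_shape = cmds.camera(name='fspy_cam')
cmds.setAttr(cam_shape + '.focalLength', focal_length_mm)
# Maya stores the film back in inches, so convert from millimetres.
cmds.setAttr(cam_shape + '.horizontalFilmAperture', sensor_width_mm / 25.4)
cmds.xform(cam, translation=(1.2, 1.6, 3.4), rotation=(-4.0, 38.0, 0.0))

# Attach the reference photo as an image plane to model against.
cmds.imagePlane(camera=cam_shape, fileName='reference/last_login_concept.jpg')
```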
With the camera locked, I knew everything I modeled would align perfectly. I started the blockout with primitive shapes, first matching the room's boundaries and the desk's scale. Once I had the main forms feeling correct against the photo, I began iteratively refining them, replacing one blockout model at a time with a detailed version.
To get the details right, I relied on my PureRef board. For example, the main photo didn't show the back of the computer or all sides of the phone, so I gathered additional references of those specific models. The keys were modeled quickly and accurately by importing top-down photos of them onto an image plane and tracing the shapes.
For the coiled telephone cable, I had a specific workflow. I first modeled a simple plane that followed the cable's path from the camera's view. I then extracted its center edge, converted it to a NURBS curve, and used that curve as a Deformation Path for a Helix primitive. This gave me precise control over the cable's final flow and shape.
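Sketched out in Maya Python, the idea looks something like the following. The edge range, helix settings, and the Curve Warp hookup are assumptions for illustration; in practice the deformer is usually applied through the Deform menu, and its plugin and attribute names can differ between Maya versions:

```python
import maya.cmds as cmds

# Assumes a plane ('cablePath_plane') already traced along the cable from the camera view.
cmds.select('cablePath_plane.e[40:59]')              # hypothetical centre-edge range
path_curve = cmds.polyToCurve(form=2, degree=3)[0]   # selected edges -> NURBS curve
curve_shape = cmds.listRelatives(path_curve, shapes=True)[0]

# A long, narrow helix stands in for the coiled cord before it is bent along the path.
helix = cmds.polyHelix(coils=60, height=30.0, width=1.2, radius=0.15,
                       subdivisionsAxis=12, subdivisionsCoil=24)[0]

# Bend the helix along the extracted curve with a Curve Warp deformer.
# The plugin and attribute names below are assumptions and may vary by Maya version.
cmds.loadPlugin('curveWarp', quiet=True)
warp = cmds.deformer(helix, type='curveWarp')[0]
cmds.connectAttr(curve_shape + '.worldSpace[0]', warp + '.inputCurve')
```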
The most difficult part of the modeling stage, as I had feared, was the peeling wallpaper. I spent a lot of time experimenting, but no simulation or manual modeling felt right; none of it matched the unique "feeling" of the reference. The best and most effective workflow turned out to be nCloth.
My idea was to create a high-resolution plane, attach its top vertices to a passive "ceiling" collider, and then run the simulation to let it hang and curl naturally. Once I had the basic "peeling" shape, I began artistically deleting faces from the plane to match the exact torn patterns from the reference. This combination of simulation and manual editing gave me the perfect result.
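In script form, the rough shape of that setup is shown below. The MEL entry points are the standard nCloth menu commands, and the mesh names and pinned vertex range are placeholders rather than the actual scene data:

```python
import maya.cmds as cmds
import maya.mel as mel

# A dense plane that will become the hanging wallpaper sheet.
paper = cmds.polyPlane(name='wallpaper_sheet', width=200, height=300,
                       subdivisionsX=60, subdivisionsY=90)[0]

# Turn the sheet into an nCloth object.
cmds.select(paper)
mel.eval('createNCloth 0;')

# Make the ceiling geometry a passive collider (assumes a mesh named 'ceiling_collider' exists).
cmds.select('ceiling_collider')
mel.eval('makeCollideNCloth;')

# Attach the top row of vertices to the passive ceiling so the sheet hangs and curls from the top.
cmds.select('{0}.vtx[5490:5550]'.format(paper), 'ceiling_collider')   # hypothetical top-row range
mel.eval('createNConstraint pointToSurface 0;')
```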
The scattered scraps of paper were also created using a simple nCloth simulation, letting them fall and settle naturally on the desk. As for the dust, it wasn't just a texture. I used a complex Height Map fed into a V-Ray Displacement node at render time. This gave the dust real, albeit microscopic, geometry that caught the light and shadows correctly, adding a significant layer of realism.
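As described above, the dust height map only becomes geometry at render time. The project used V-Ray's own displacement node; purely as a generic illustration of the hookup, a height map can be wired into a displacement shader on an asset's shading group like this (the file path and shading group name are hypothetical):

```python
import maya.cmds as cmds

# Height map painted in Mari and exported (hypothetical path).
height = cmds.shadingNode('file', asTexture=True, name='dust_height')
cmds.setAttr(height + '.fileTextureName', 'textures/desk_dust_height.tif', type='string')

# Feed the map into a displacement shader and attach it to the asset's shading group,
# so the fine dust relief is generated at render time instead of being modelled.
disp = cmds.shadingNode('displacementShader', asShader=True, name='dust_disp')
cmds.connectAttr(height + '.outAlpha', disp + '.displacement')
cmds.connectAttr(disp + '.displacement', 'desk_top_SG.displacementShader')   # hypothetical shading group
```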
As for time-saving tricks and retopology, for this project the two are directly related. The biggest time-saver was the conscious decision to skip retopology. Since the goal was a high-fidelity V-Ray still, not a game engine asset, I didn't need a low-poly version. This was a massive advantage. It meant I could focus 100% on building models with good subdivision topology (all quads, clean edge flow) and render the smoothed mesh directly.
This easily saved me what would have been at least a full week of production time. Additionally, I was very efficient, only modeling what was visible to the camera. However, I still needed to unwrap everything for texturing. I used the standard Maya UV Editor for this. The entire project was organized using a UDIM workflow. This was essential for working in Mari and was the best way to achieve the extremely high texel density needed for realistic, high-resolution textures.
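On the Maya side, the UDIM part of that setup is essentially one switch per file node. A small sketch, with the texture path as a placeholder:

```python
import maya.cmds as cmds

# Point a file node at the first tile of a Mari UDIM sequence and let Maya resolve the rest.
color = cmds.shadingNode('file', asTexture=True, name='computer_baseColor')
cmds.setAttr(color + '.fileTextureName',
             'textures/computer_BaseColor.1001.tif', type='string')
cmds.setAttr(color + '.uvTilingMode', 3)   # 3 = UDIM (Mari) tiling mode
```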
Texture
My main texturing tools were Mari and Photoshop. Given the short 4-week deadline, efficiency was everything. My workflow wasn't about creating materials from scratch, but about the advanced editing and customization of high-quality base textures. I would import a base material into Mari, like a clean plastic, and then begin the customization.
My first step was always Color Correction and adding color variation, aiming to get that specific, yellowed, aged-plastic look from the reference photo. Next, I worked on the decals. For the computer, I managed to find and recreate the exact stickers in Photoshop. For the left phone, to save time, I created a new sticker from scratch in Photoshop that was visually similar and worked for the shot.
After that, I began the aging process, layering dirt and damage. I relied heavily on procedural masks driven by Ambient Occlusion and Curvature Maps to realistically build up grime in the crevices and wear on the edges, matching the "oldness" of the objects in the photo. The final layer was the dust, and as I mentioned, I exported its Height Map to use with V-Ray Displacement.
Most assets followed this process, but an unexpected and fascinating challenge came from the two pictures hanging on the back wall. I tried finding them online with a reverse image search, but Google gave no answers; it was as if they were unique, personal photos.
My solution was to take a screenshot of them from the original concept and see if AI could help. I fed the low-resolution screenshot to ChatGPT and prompted it to help me restore and recreate the images. The results were absolutely astonishing. It provided me with near-identical, Full HD images that I could immediately use.
After some minimal color tweaking in Photoshop, they were ready. It was a complete surprise that this workflow was even possible, and it saved me a huge amount of time.
The Final Scene
Assembling the final scene was actually straightforward because I had been working "in place" the entire time. Since I locked my camera on day one using fSpy, my Maya file was the final scene. The process simply involved iteratively replacing the simple blockout shapes with the final, high-poly Sub-D models. The composition was therefore locked from the very beginning.
For scattering details, the scraps of paper were simulated with nCloth, but other elements like the keys, the chain, and the keyboard keys were all meticulously hand-placed, referencing the original photo to match every angle and position perfectly.
For the lighting, my setup in V-Ray consisted of several V-Ray Rect Lights. I had one main light to establish the key shadows, and then several fill lights to illuminate areas that needed it. A key technique I used was to create "bounce cards" (simple planes). I pointed some of my Rect Lights away from the scene and onto these planes, which allowed the light to bounce back and create extremely soft, natural-feeling shadows in certain areas.
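Scripted out, the basic key-light-plus-bounce-card pattern might look like the sketch below. The V-Ray node and attribute names follow the plugin's usual conventions but should be treated as assumptions, and all values and positions are placeholders:

```python
import maya.cmds as cmds

# Key light: a V-Ray rect light (node and attribute names assumed from V-Ray for Maya conventions).
key_shape = cmds.createNode('VRayLightRectShape', name='keyLightShape')
cmds.setAttr(key_shape + '.intensityMult', 30.0)
cmds.setAttr(key_shape + '.uSize', 60.0)
cmds.setAttr(key_shape + '.vSize', 60.0)
key_xform = cmds.listRelatives(key_shape, parent=True)[0]
cmds.xform(key_xform, translation=(200, 250, 100), rotation=(-30, 60, 0))

# A "bounce card": a plain plane that a light is aimed at, away from the set,
# so only soft, indirect light reaches the scene.
card = cmds.polyPlane(name='bounceCard', width=150, height=150)[0]
cmds.xform(card, translation=(0, 180, -250), rotation=(60, 0, 0))
```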
The one thing I struggled to get right in the render was the subtle, ghostly reflection on the monitor screen. After many attempts, I decided it would be faster and more art-directable to add it in post-production.
Before I could get to the final render, I hit a major technical hurdle: my render times were becoming extremely long, and the scene was very heavy for the engine. This was due to the dozens of high-resolution .tiff textures from Mari.
My supervisor suggested I convert all my textures to tiled, memory-mapped .tx files. I researched the process and, with the help of ChatGPT, I wrote a small Python script to use inside Maya. This script automatically batch-converted all my .tiff files to .tx and then relinked them in the Hypershade.
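The script itself isn't reproduced here, but a stripped-down sketch of the same idea, assuming maketx is available on the PATH and the Mari exports sit in a single folder, could look like this:

```python
import glob
import os
import subprocess

import maya.cmds as cmds

TEXTURE_DIR = '/projects/last_login/textures'   # hypothetical location of the Mari exports

# 1. Batch-convert every .tif/.tiff into a tiled, mip-mapped .tx next to the original.
for src in glob.glob(os.path.join(TEXTURE_DIR, '*.tif*')):
    dst = os.path.splitext(src)[0] + '.tx'
    subprocess.run(['maketx', src, '-o', dst], check=True)

# 2. Relink every file node in the scene to the new .tx version of its texture.
for node in cmds.ls(type='file'):
    path = cmds.getAttr(node + '.fileTextureName')
    if path and path.lower().endswith(('.tif', '.tiff')):
        cmds.setAttr(node + '.fileTextureName',
                     os.path.splitext(path)[0] + '.tx', type='string')
```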
The impact was immediate. My render times and memory usage dropped significantly, which allowed me to do fast test renders and iterate much more efficiently.
For the final render, I used the V-Ray Bucket sampler, matching the concept's original resolution of 2449x1621. I rendered to a multichannel EXR file, which embedded all of my Render Elements (AOVs). My crucial passes were Cryptomatte (for masking), Specular, Reflection, Denoiser, Diffuse, and all the individual light passes for the V-Ray Light Mix. My entire post-production workflow was done in Nuke.
Thanks to the Light Mix passes, I was able to extract every single light from my scene and use Grade nodes to tweak its intensity and color individually. This meant I could essentially "re-light" the scene in post, looking at my reference photo and meticulously matching the contribution of each light, one by one.
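In Nuke terms, each light's contribution is just another layer in the EXR that can be shuffled out, graded, and summed back into the beauty. A tiny sketch of that pattern, with the file path and layer name as placeholders:

```python
import nuke

# Read the multichannel EXR (the path and layer names below are placeholders).
beauty = nuke.nodes.Read(file='renders/last_login_final.exr')

# Pull one Light Mix AOV out as RGB and grade it; the regraded passes are then
# merged back together (plus) to rebuild the full beauty.
key_pass = nuke.nodes.Shuffle(inputs=[beauty], label='key light')
key_pass['in'].setValue('LightMix_Key')          # hypothetical layer name
key_grade = nuke.nodes.Grade(inputs=[key_pass])
key_grade['multiply'].setValue(1.35)             # nudge just this light's intensity
```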
Once the lighting was locked, I added a final color grade, a subtle vignette, and a layer of noise to unify the image. My very last step was to take the final EXR into Photoshop for one tiny fix: to paint in that subtle monitor reflection.
Conclusion
The entire production time for the piece was exactly four weeks, not including the initial concept search and breakdown. This was all done as part of my Foundation Term at Think Tank Training Centre. The main challenge was, without a doubt, the ambitious scope of the project versus the tight deadline.
I worked on this piece for many hours every single day, and it took a lot out of me to get it across the finish line. However, because I set such a high bar for myself, I learned an incredible amount in that short time. I faced constant challenges, from purely technical ones to creative-technical ones, like finding a solution for the peeling wallpaper or the scene optimization that forced me to learn about .tx files and Python.
The most important lesson I learned was not to be afraid of technical problems. At the end of the day, a problem is just a challenge, and overcoming it is where you gain the most experience. Another huge lesson was the value of community. I learned to ask more experienced people for help, and I spent a lot of time on Discord with my classmates. We were all working on our own projects, but we'd stream our work, and you could see a cool trick someone was using, learn from them, or get instant feedback on your own progress.
That peer support was invaluable. My advice to artists who are just starting is simply to finish your projects. The more I look back at Last Login, the more things I see that I would fix or do differently today. But in the end, it's my first major, completed piece. A finished project, even if it's not perfect, is infinitely more valuable than a perfect idea you never completed. You learn everything from the process of finishing. I wish everyone the best on their own creative journeys.