
80 Level Digest: Most Important Tech & Software Releases of 2022

We conclude our series of celebratory 80 Level Digests by recapping the most important software releases of the passing year.

In case you missed it

Check out our lists of 2022's best environments and characters

Hello everyone! Over the past couple of weeks, we've been celebrating the passing year, highlighting our favorite digital characters and environments of 2022 and revisiting the awesome projects that inspired us over the year.

Today, we conclude our series of celebratory articles with this year's final 80 Level Digest, dedicated to 2022's most important tech and software releases, notable events, and incredible programs that changed the workflows of thousands of artists and altered the landscape of digital art as a whole. With today's list, we'll revisit the most significant upgrades to our favorite 3D software, recap this year's new tools and the plug-ins released for existing ones, and, of course, have another look at the various AIs that emerged in 2022.

So, without further ado, let's get started!

The year started strong with the release of Plask all the way back in January. Developed by Plask AI, Plask was a web-based, AI-powered 3D animation editor and motion capture tool that provided thousands of animators with a robust mocap toolset available for free. Most importantly, the application came with a neat ability to animate digital characters using any video as a motion capture source, thanks to the assistance of artificial intelligence.

The next release we would like to highlight was, coincidentally, also designed for motion capture, but it arrived as a free add-on for Blender back in February. Meet BlendArMocap, a markerless mocap solution that allows its users to perform hand, face, and pose detection using a webcam. Moreover, the plug-in boasts the ability to easily transfer the detected data to Rigify rigs, allowing for a convenient and seamless workflow.

The same month, engineering student Priyanjali Gupta impressed us with a fantastic and incredibly useful AI model capable of translating American Sign Language into English in real time. The model used the TensorFlow Object Detection API and was built using transfer learning from a pre-trained ssd_mobilenet model.
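For readers curious about the approach, here is a minimal, hypothetical sketch of the starting point for a project like this: loading a pre-trained SSD MobileNet detector from TensorFlow Hub and running it on a single frame. The model URL and the file name are illustrative, and fine-tuning on custom ASL classes would additionally require the Object Detection API's training pipeline.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load a pre-trained SSD MobileNet detector from TensorFlow Hub
# (transfer learning for ASL letters would start from a checkpoint like this).
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

# Decode one captured frame; "frame.jpg" is a placeholder for a webcam grab.
image = tf.io.decode_jpeg(tf.io.read_file("frame.jpg"))
image = tf.expand_dims(image, axis=0)  # shape [1, H, W, 3], dtype uint8

# Run detection: the model returns boxes, class IDs, and confidence scores.
results = detector(image)
boxes = results["detection_boxes"][0]    # normalized [ymin, xmin, ymax, xmax]
scores = results["detection_scores"][0]  # confidence per detection
classes = results["detection_classes"][0]
```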

Another great AI-powered model that we'd like to revisit is Instant NeRF, a cool piece of tech that could turn several 2D images into a 3D scene, released by the NVIDIA Research team back in late March. Powered by Neural Radiance Fields, a neural network that learns to reconstruct a 3D scene from a handful of 2D images taken from different angles, Instant NeRF could process a bunch of shots within seconds and render the resulting 3D scene in tens of milliseconds.
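As a small illustration of the underlying idea, here is a toy sketch of the frequency encoding the original NeRF applies to 3D sample points before feeding them to a compact MLP. Instant NeRF actually replaces this with a much faster multiresolution hash encoding, so treat the snippet purely as a conceptual aid.

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    # Map raw coordinates to sin/cos features at increasing frequencies so that
    # a small MLP can represent fine-grained detail in the scene.
    feats = [x]
    for i in range(num_freqs):
        freq = (2.0 ** i) * np.pi
        feats.append(np.sin(freq * x))
        feats.append(np.cos(freq * x))
    return np.concatenate(feats, axis=-1)

point = np.array([0.1, -0.4, 0.7])       # a 3D sample along a camera ray
print(positional_encoding(point).shape)  # (63,) = 3 + 3 * 2 * 10
```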

In early April, the 3D art and game development communities were shaken by the release of the first stable version of Unreal Engine 5. The long-awaited version of the engine, which had originally been unveiled in 2020, brought the now-beloved Lumen and Nanite features, upgraded tools for creating super-realistic details and setting up lifelike physics, and countless other features and upgrades that drastically improved the workflows of artists and developers alike.

Alongside Unreal Engine 5, Epic Games also shipped Lyra, a starter game designed to help creators get the hang of the engine's new version, and shared the City Sample, which featured the full city, buildings, vehicles, and crowds from The Matrix Awakens demo.

And so it begins. Just a couple of days after the release of UE5, OpenAI unveiled DALL-E 2, an AI-powered text-to-image model that popularized AI-generated images and started the ongoing trend, boosted by Midjourney and Stable Diffusion further down the line.

Upon its release, DALL-E 2 was warmly welcomed by the community thanks to its powerful capability to create realistic images and art from a description in natural language, its ability to make realistic edits to existing images, and more. The release of the model marked the beginning of the "AI boom" that took place throughout the entire year and which we'll return to later in this article.
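To give a sense of how simple access to the model eventually became, here is a hypothetical sketch using the OpenAI Python library as it looked in 2022 (the image endpoint has since been revised); it assumes an API key stored in the OPENAI_API_KEY environment variable, and the prompt is purely illustrative.

```python
import os
import openai

# 2022-era Image endpoint of the OpenAI Python library.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="an astronaut riding a horse in a photorealistic style",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # URL of the generated image
```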

In mid-April, a team of developers presented SNUG, an impressive neural network for adding 3D deformations to outfits worn by parametric human bodies. Trained with a scheme that removed the need for ground-truth samples, the framework enabled the team to interactively manipulate the shape parameters of the subject while producing highly realistic garment deformations without using any supervision at training time.

With nothing particularly interesting happening in May and June, July made us happy with the release of Buildify, a free Geometry Nodes-powered library for Blender developed by Pavel Oliva. The toolkit enabled thousands of Blender artists to assemble new buildings in no time, offering easy-to-use tools for extruding, copying, and pasting faces, with the buildings themselves generated automatically.

Also in July, researchers from NVIDIA and Stanford University unveiled EG3D, a hybrid explicit-implicit network architecture capable of generating high-resolution multi-view-consistent 2D images of human and cat faces in real-time and giving generated images high-quality 3D geometry. Leveraging state-of-the-art 2D CNN generators, such as StyleGAN2, the model was created to improve the computational efficiency and image quality of 3D GANs without overly relying on approximations.

The aforementioned "AI boom" was propelled to new heights in August when the Stability AI team released the code of Stable Diffusion, the team's text-to-image diffusion model capable of creating great images from text prompts and rough sketches. Besides providing tons of people with a robust free-to-use toolset for bringing their ideas to life in just a few clicks, this move also changed everything by encouraging other developers to open-source their AIs as well.
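Because the weights were open-sourced, running the model locally soon came down to a few lines of code. Here is a minimal sketch using Hugging Face's diffusers library; the model identifier, prompt, and hardware assumption (a CUDA-capable GPU) are illustrative choices rather than the only way to run it.

```python
import torch
from diffusers import StableDiffusionPipeline

# Download the open-sourced Stable Diffusion v1.4 weights and move them to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate a single image from a text prompt and save it to disk.
image = pipe("a cozy cabin in a snowy forest, concept art, volumetric light").images[0]
image.save("cabin.png")
```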

In September, with the "AI boom" being in full force and AI enthusiasts mastering text-to-image models, the developers of Runway, a Web-based machine-learning-powered video editor, went a step further by introducing an awesome text-to-video feature. The AI-powered system allowed Runway's users to generate videos using text descriptions in natural language and edit them by providing short prompts.

Having robust software is great, but powerful hardware to run it on is no less important. Luckily, in 2022, NVIDIA and AMD shipped new GPUs for both 3D artists and game developers to take advantage of.

The former introduced the GeForce RTX 4090 and 4080 graphics cards, powered by NVIDIA's Ada Lovelace architecture. The first one brought 24GB of GDDR6X memory, while the second one featured two memory configurations, 12GB and 16GB, with both cards being way more powerful than the models from the 3000 series and featuring the third generation of Deep Learning Super Sampling (DLSS 3).

AMD followed suit by also unveiling two new GPUs – the RX 7900 XTX and RX 7900 XT – powered by the company's RDNA 3 architecture. Both cards featured DisplayPort 2.1 support and enabled their users to run games at 1440p and even 4K at triple-digit frame rates. The first GPU boasted 96 compute units clocked at 2.3 GHz and 24GB of GDDR6 memory on a 384-bit bus, while the cheaper XT model offered 84 compute units with a base clock speed of 2 GHz and a slightly slower 320-bit memory bus.

In late September, NVIDIA's researchers impressed us yet again with GET3D, a generative model capable of creating fully-textured 3D meshes with complex topology and rich geometric details. Trained with a collection of 2D images, the AI came with the ability to generate a huge variety of assets, including cars, chairs, animals, motorbikes, human characters, and buildings. According to the team, the model had been created thanks to this year's successes in differentiable surface modeling, differentiable rendering, and 2D GANs.

During the Adobe MAX 2022 conference, which took place in October, the Adobe team joined this list by officially releasing Substance 3D Modeler 1.0, the company's tool for digital modeling and sculpting.

First introduced back in late April, the software had been developed by the Substance 3D team in collaboration with the creators of the VR sculpting tool Oculus Medium. The tool's most notable feature was that it allowed 3D artists to sculpt and assemble their projects both on desktop and in VR, switching between the two modes at any time.

In early November, the Sparseal team impressed us and simplified the workflows of thousands of artists by presenting CozyBlanket 2.0, a new and improved version of the team's incredible retopology app for iPad.

The new version of the application overhauled viewport rendering for better contrast and readability at the retopology stage, added the ability to draw, delete, and extend UV seams by drawing over mesh edges at the unwrapping stage, and introduced the option to create tangent-space normal maps and color maps from the imported Target mesh at the baking stage. Along with countless other improvements, these additions allowed iPad owners to turn the most dreaded part of any production process into a convenient and enjoyable experience.

A week after the release of CozyBlanket 2.0, we were astonished by a piece of software called Move AI, a convenient markerless mocap program capable of extracting natural human motion from videos using advanced AI and automatically retargeting the data to your character rig. Moreover, the tool came with the ability to export the data directly into any game engine or digital environment, allowing for a smoother and more seamless workflow.

The passing year's final month began with the release of RealityScan, a 3D scanning app that allows its users to turn smartphone photos into high-fidelity 3D models, developed by Epic Games in collaboration with Quixel and Capturing Reality.

To get started with RealityScan, all you need to do is take photos of the object you want to replicate in 3D using your iOS-powered smartphone or tablet, and the application will automatically assemble the model using cloud processing. From there, you can upload your model to Sketchfab or download it for use in the 3D software of your choice.

With 2022 being a year that saw an unprecedented increase in both the quality and quantity of generative AIs, it would be fitting to highlight OpenAI's most recent and arguably most mind-blowing model, ChatGPT, as this year's final release.

Unveiled in early December, ChatGPT is an AI-powered model capable of interacting in a conversational way, answering follow-up questions, and providing detailed responses to input prompts. In less than a month, the model was shown to be able to create a movie outline, Python scripts for Blender, a "Choose Your Own Adventure" story, a step-by-step tutorial on using Unreal Engine's Blueprints, a rap battle between fintech and banks in the style of 2Pac and Notorious B.I.G., and much more.
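As an illustration of the Blender use case, here is a hypothetical example of the kind of short bpy script ChatGPT was shown generating; the object names and layout are made up for this sketch and meant to be run from Blender's scripting workspace.

```python
import math
import bpy

# Scatter a ring of cubes around the scene origin using Blender's Python API.
COUNT = 12
RADIUS = 5.0

for i in range(COUNT):
    angle = 2.0 * math.pi * i / COUNT
    bpy.ops.mesh.primitive_cube_add(
        size=1.0,
        location=(RADIUS * math.cos(angle), RADIUS * math.sin(angle), 0.0),
    )
    bpy.context.active_object.name = f"RingCube_{i:02d}"
```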

As said before, ChatGPT is the final release of this year's recap. However, there is one more event that we deem to be one of the most important occurrences of the year.

As Newton wrote, for every action there is an equal and opposite reaction, and with the sheer popularity of the AI-powered image generators that emerged this year, the appearance of a movement opposing all things AI was only a matter of time.

That came true on December 14, when thousands upon thousands of 2D and 3D artists, representing every branch of digital art imaginable, posted the same "Say No to AI-Generated Images" image and demanded that ArtStation remove AI content from the website. They cited multiple reasons for doing so, including the fact that generated images diminish the work of human creators and make it harder for employers to find talent on the platform.

The campaign can be considered ongoing, with creators still protesting against AI content and threatening to leave ArtStation altogether if their demands are not met.

On that note, we conclude our list of this year's most important tech and software releases and notable events. What do you think about the list? What releases do you think were the most important? What do you think the future holds? Leave your thoughts in the comments below or on our Reddit page, Telegram channel, Instagram, or Twitter.

Thank you and have a wonderful holiday season!

Comments (3)

  • Dubois Peter

    People are celebrating AI until they realize it's taking their jobs and making Google, NVIDIA, and other major corporations the winners.
    Please distinguish between "helpful tools for artists" and "replacement of artists".

  • SentinelForce Jacob

    Forgot the part where AI is scraping and stealing all the artwork it's being trained on without consent. Seems like a pretty huge oversight here.

  • Anonymous user

    No to AI: stop stealing from real artists in order to compete with them.

