Introduction
My name is Weidi Zhang, and I am a new media artist based in Phoenix and Los Angeles. I create interactive intelligent systems for moving images by investigating the idea of speculative assemblage at the intersection of experimental visualization, responsive intelligent system design, and immersive media. Currently, I am an assistant professor of immersive media design at Arizona State University, where I mainly teach at the newly opened Media and Immersive Experience Center at ASU.
Weidi Zhang
Experimenting with AI Projects
A particular area of interest for me is combining AI system design with experimental data visualization to create immersive, interactive art experiences. I started experimenting with AI systems in 2018 through collaborations with computer scientists. My first AI artwork, LAVIN, was made in collaboration with my friend Rodger Luo, who is currently a Principal AI Scientist at Autodesk. We used machine learning algorithms to create a navigable experience in VR that explores an understanding of a neural network through artistic imagination. You can learn more about this work here.
Cangjie’s Poetry Installation Mock-up View, 2020, Copyright to Weidi Zhang
Cangjie’s Poetry Project
Imagining a future language in an alternate human-machine reality, Cangjie's Poetry is a thought experiment and a prototype for that language. I started this project by questioning a possible scenario: if artificial intelligence could create a symbolic system and actively communicate with humans, how would this redefine our co-existence in an intertwined human-machine reality? I was inspired by the Chinese folklore surrounding the creation of Chinese characters.
I chose Chinese characters as my inspiration not only because of my constant fascination with the unique aesthetics of the symbols but also because they form one of the oldest logographic systems, meaning the symbols are designed based on the appearance and characteristics of real-life objects. The script is widely considered to have been created by Cangjie, a legendary historian who wanted to create a writing system that could set apart everything on earth. The idea of a logographic system inspired me to train an AI system that can transform real-world images into a cluster of new symbols in real time, just like Cangjie did thousands of years ago.
Cangjie’s Poetry project
Teaching the Multimodal System
I collaborated with my amazing colleague Donghao Ren on the implementation and design of this AI system. We use unsupervised learning techniques to model Chinese character strokes and then use the learned model to create novel characters from images. We trained a network using the open-source Hanzi Writer dataset, which contains vector stroke data for over 9,000 Chinese characters.
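The article does not detail the model architecture, so the following is only an illustrative sketch, not our actual implementation: one simple unsupervised way to model strokes is to summarize each stroke (a list of points, as in the Hanzi Writer vector data) as a small feature vector and group similar strokes with k-means clustering. The strokes and feature choices below are hypothetical.

```python
import random

def stroke_features(stroke):
    """Summarize a stroke (a list of (x, y) points) as a small feature
    vector: start point, end point, and bounding-box width/height."""
    xs = [p[0] for p in stroke]
    ys = [p[1] for p in stroke]
    return [stroke[0][0], stroke[0][1], stroke[-1][0], stroke[-1][1],
            max(xs) - min(xs), max(ys) - min(ys)]

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal k-means clustering: returns (centroids, assignments)."""
    rng = random.Random(seed)
    centroids = [list(v) for v in rng.sample(vectors, k)]
    assign = [0] * len(vectors)
    for _ in range(iters):
        # Assign each vector to its nearest centroid (squared distance).
        for i, v in enumerate(vectors):
            assign[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(v, centroids[c])))
        # Recompute each centroid as the mean of its members.
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(col) for col in zip(*members)]
    return centroids, assign

# Hypothetical strokes: two roughly horizontal, two roughly vertical.
strokes = [
    [(0, 5), (10, 5)], [(0, 6), (9, 6)],   # horizontal-ish
    [(5, 0), (5, 10)], [(4, 0), (4, 9)],   # vertical-ish
]
feats = [stroke_features(s) for s in strokes]
centroids, labels = kmeans(feats, k=2)
```

With these toy strokes, the horizontal pair ends up in one cluster and the vertical pair in the other; a learned stroke vocabulary of this kind is the sort of intermediate representation from which novel characters can be assembled.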
Cangjie’s Poetry Visualization I, 2020, Copyright to Weidi Zhang
Cangjie’s Poetry Visualization II, 2020, Copyright to Weidi Zhang
Perceiving and Transferring New Data
Our system constantly observes the surrounding environment via a camera and sends the live streaming video to our model. Our trained neural network converts the live video stream into an ever-evolving cluster of new symbols. We did not use the output of the ML model directly as our final delivery.
Instead, we use shaders written in the OpenGL Shading Language to relocate the pixels of the input texture (the live streaming data) to positions determined by the output texture (the generative symbols). The result is a pixelated landscape of the surrounding environment that dynamically transforms itself in the gesture of writing new symbols with pixels.
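The shaders themselves are not shown in this piece, but the relocation idea can be sketched on the CPU: each output pixel fetches its color from the input frame at a coordinate stored in a second map, the same lookup a fragment shader would do when sampling the input texture at a position read from the symbol texture. The frame and position map below are hypothetical.

```python
def relocate(input_pixels, position_map):
    """For each output location (y, x), fetch the input pixel at the
    coordinates stored in position_map[y][x] -- mimicking a fragment
    shader that samples the camera texture at a position determined
    by a second (symbol) texture."""
    h = len(position_map)
    w = len(position_map[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = position_map[y][x]
            out[y][x] = input_pixels[sy][sx]
    return out

# 2x2 example: the position map swaps the two rows of the input frame.
frame = [["a", "b"],
         ["c", "d"]]
positions = [[(1, 0), (1, 1)],
             [(0, 0), (0, 1)]]
result = relocate(frame, positions)  # rows swapped: [["c","d"],["a","b"]]
```

In the actual installation this lookup runs per fragment on the GPU, which is what lets the pixelated landscape redraw itself as symbols in real time.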
Cangjie’s Poetry Installation Mock-up View, 2020, Copyright to Weidi Zhang
Cangjie’s Poetry Installation Mock-up View, 2020, Copyright to Weidi Zhang
Setting Up the Art Installation
In the art installation, a camera is centered in the middle of the space, with parallel screens on either side. The camera captures real-time images of the participants, and their live-streaming video is sent to an intelligent system that automatically transforms the visualizations.
Two visualizations are generated in real-time by our system:
- a fluid pixelated landscape that is constantly moving and writing new symbols based on our AI system's observations;
- visual poetry composed of fragments of live-streaming imagery and descriptive sentences in English.
To attach meaning to these pseudo-characters, we used a computer vision system to describe salient regions of the live-streaming video. The screen sizes for presenting the two visualizations are flexible, depending on the exhibition space. Audiences engage with the visualizations of intelligent poetry written in new symbols and, with empathy, create semantic connections between abstract visual representations and meaning.
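The specific vision model we used is not named here; as a stand-in illustration of finding a "salient region" to describe, one crude heuristic is to slide a window over a grayscale frame and pick the patch with the highest local variance (contrast). The image grid and window size below are hypothetical.

```python
def most_salient_region(image, size=2):
    """Slide a size x size window over a grayscale image (list of rows)
    and return the top-left corner of the window with the highest
    variance -- a crude contrast-based proxy for visual saliency."""
    best, best_pos = -1.0, (0, 0)
    h, w = len(image), len(image[0])
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            patch = [image[y + dy][x + dx]
                     for dy in range(size) for dx in range(size)]
            mean = sum(patch) / len(patch)
            var = sum((p - mean) ** 2 for p in patch) / len(patch)
            if var > best:
                best, best_pos = var, (y, x)
    return best_pos

# Mostly flat frame with a high-contrast diagonal in the lower right.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 9, 0],
       [0, 0, 0, 9]]
corner = most_salient_region(img, size=2)  # -> (2, 2)
```

A real captioning pipeline would then generate a descriptive sentence for the detected region; that sentence is what pairs with the generative symbols in the visual poetry.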
Cangjie’s Poetry Installation Mock-up View, 2020, Copyright to Weidi Zhang
Cangjie’s Poetry Installation VR version, 2019, Copyright to Weidi Zhang
Main Challenges
Our team has been working on this project on and off for the past two years. During the pandemic, we created several versions to accommodate unforeseen situations. For example, we were invited to present this project in San Francisco, but due to social distancing regulations, the whole exhibition was canceled and moved to an online format.
We released an online open call to collect video footage from strangers all over the world, and we were very lucky to receive submissions from seven different countries. We then fed all of this footage of pandemic daily life into our system for interpretation. Our Cangjie system generated the two visualizations as pre-rendered animations, presented as an unfolding poetry book of collective memories shared between humans and machines.
Cangjie’s Poetry Special Edition, 2020, Copyright to Weidi Zhang