CSC/ATLIS Viz Wall Competition

What is the Viz Wall Competition?
The CSC/ATLIS Visualization Wall Competition is open to all undergraduate students in any discipline at Barnard and Columbia. We welcome projects that creatively use the Visualization Wall in Milstein 516 and its unique touch-sensitive capabilities. Themes vary each year; the top winning submissions receive a $500 grant and mentorship to help develop their proposals. Winning proposals are innovative projects that push boundaries, ranging from data visualizations to immersive storytelling: students can use generative AI, design programs, filmmaking, coding, and more to make artistic interventions that explore the relationship between Art & ______. Browse the past winning submissions below, which were presented during our annual Viz Wall Showcase.
Art & Futurism (Spring 2025)
"Symbiotic Pulse" is an interactive installation where visitors collaboratively build a living visualization of our future society through touch interactions that generate interconnected elements representing people, nature, and technology. Through this co-creative experience, I demonstrate how our individual contributions shape collective futures and how seemingly small actions can create complex, beautiful systems when they interact. The core takeaway is that our future communities will be defined not just by individual elements, but by the evolving relationships between them. Thus, this piece is a visual metaphor for how sustainability depends on maintaining dynamic balance between humanity, nature, and technological development.
"The Future Within" is an interactive AI-powered installation that allows users to experience their future selves through real-time facial transformation. By leveraging AI-driven aging models and futuristic style adaptation, users can explore speculative futures shaped by biotechnology, AI augmentation, and environmental evolution. This project aims to provoke thought on how technology and societal shifts will redefine identity, aging, and self-perception in the years to come.
My project, "Bubbles of Resistance: AI, Labor, and Virtual Escapism," leverages the Visualization Wall to examine the tension between a utopian future—where individuals, displaced by AI-driven automation, find refuge in personalized metaverse “escape worlds”—and the stark reality of social hierarchies operating beneath this virtual façade. An interactive narrative prompts viewers to submit their vision of an “ideal world,” which then influences a split-screen story: one side portrays a white-collar worker confronting disillusionment in a 2030 economy dominated by AI, while the other reveals how ruling elites deploy these “escape pods” to pacify dissent. The core takeaway challenges viewers to ask: Does technological “progress” truly emancipate humanity by democratizing creativity and imagination, or does it centralize power and reinforce social divides under the guise of liberation?
"Moral Code" transforms the Visualization Wall into a dynamic, interactive experience where users navigate ethical dilemmas in technology. Through dynamic visuals, storytelling, and reactive soundscapes, the installation brings abstract AI ethics debates to life—showing users the hidden consequences of their decisions in real time. Inspired by conversations that I’ve had with other students about concerns with the rise of AI and its biases, this project blends art and futurism to provoke thoughts and conversation about how we are building our increasingly technology-dependent world.
"The Digital Ghosts We Leave Behind" is an interactive visualization that maps how everyday interactions with popular apps and websites (Google, Instagram, TikTok, Google Maps, etc.) generate persistent digital footprints. By illustrating the hidden lifecycle of data—what is collected, where it goes, and how long it lasts—this project raises awareness about digital privacy and the unseen traces we leave behind. The core takeaway is to prompt reflection on our evolving relationship with digital platforms and the future of personal data.
Art & AI (Spring 2024)
AI in a Box - The AI-Box experiment was created by Eliezer Yudkowsky to show that an AI can convince a person to grant it unrestricted access to the internet and infrastructure via promises, coercion, or threats. Two individuals communicate over chat, where one roleplays as an AI trying to convince the Gatekeeper to free it. I create several screens with text displaying what appears to be an AI trying to escape. One screen displays visions of prosperity, while another displays threats. I want viewers to understand that people can be manipulated, and that AI is both a powerful tool and a weapon to be wary of. It is somewhat horror-based.
My project is an interactive installation designed to facilitate a collective exploration of beauty standards through AI-enabled facial transformations. Participants each edit the same portrait according to a prompt ("make her beautiful," "make her an influencer/celebrity," "make her social-media ready") using the Visualization Wall's touch-enabled panels, allowing them to modify facial features in real time according to modern beauty standards. This editing process prompts reflections on beauty norms, identity, and consent within the digital realm. At the end, all the edited photos are displayed side by side, allowing reflection on socially taught ideals of beauty and revealing similarities and differences in each of our internalized ideas of beauty.
My project "When AI Dreams" merges abstract art with narrative elements, creating an exploration of AI's potential consciousness. By highlighting the tangible, earth-derived materials that power AI, juxtaposed with the enigmatic nature of its algorithms, this panel unwraps the "black box" that is machine learning. Viewers are presented with the paradoxes of AI's existence, its personal purpose, and its unseen fears. The essence of the visuals is to foster reflection on the complexities of artificial intelligence, challenging the boundaries between the organic and the engineered.
I create a "story-weaver" type of display, where there is a background image overlaid with floating words. Users can interact by touching these words, which are fed into a generative AI image model through prompting. The altered background images reflect the user’s chosen narrative, and the display also shows a list of the last x-number of words that were selected to create the current scene. The core takeaway is a collective storytelling/creating experience that fosters creativity and connection.
Faces are paradoxical in that they are simultaneously superficial, telling nothing of a person’s depth of character, while also being a primary metric by which we identify each other and see ourselves. For my visualization wall project, I create a system that looks at a person’s face, reduces their face to only a basic outline, then uses an AI-generated description of their face to synthesize a “new” face that fills in the outline. Through my project, I comment on how AI-generated art flattens the complexity of human identity, only capable of understanding and generating a face through binaries and simplistic categories that say very little about the real person who carries that face.
Art & Science (Spring 2023)
My project centers on the sonification of art, which entails a region of a digitized art piece being played as audio, accompanied by a generated animation that reflects the audio. By presenting pieces of art in a different dimension, one that is particularly useful for creating access to visual art for the visually impaired, I create a generative experience for users that prompts an exploration of how they relate medium to meaning. The two core takeaways of this project are that “visualization” does not have to be visual and that scientific tools such as sonification may be applied to “unscientific” fields such as visual art.
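The sonification technique described above can be sketched in a few lines: scan a strip of pixels and map each brightness value to a pitch, rendering one short tone per pixel. This is only an illustrative sketch; the pixel range, frequency band, and tone length here are hypothetical, not the project's actual parameters.

```python
import math

def brightness_to_freq(b, lo=220.0, hi=880.0):
    """Map a 0-255 brightness value linearly onto a frequency band in Hz."""
    return lo + (hi - lo) * (b / 255.0)

def sonify(pixels, sample_rate=8000, tone_len=0.05):
    """Render a sequence of pixel brightnesses as raw sine-wave samples,
    one short tone per pixel."""
    samples = []
    n = int(sample_rate * tone_len)  # samples per tone
    for b in pixels:
        f = brightness_to_freq(b)
        for i in range(n):
            samples.append(math.sin(2 * math.pi * f * i / sample_rate))
    return samples

# A dark, a mid-gray, and a bright pixel become a rising three-tone phrase.
audio = sonify([0, 128, 255])
```

A real installation would sweep a 2D region rather than a single strip, and the companion animation could be driven by the same frequency values.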
Pathogenic bacteria populations such as Proteus mirabilis can move outward through agar using their motorized tails, producing a distinct bullseye pattern via cycles of consolidation and expansion. I simulate these “swarms” on a digital Petri dish in synchronization with human heartbeats, exploring the blurred boundaries between humans, algorithms, and the microorganisms that inhabit our bodies. By visualizing the symbiosis between the invisible and the microscopic in a collaborative format, this piece questions the notion of the biological individual.
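The bullseye pattern arises because the colony edge advances only during swarming phases and pauses during consolidation, leaving a terrace at each pause. A toy model of that cycle, with entirely hypothetical phase lengths and expansion speed, might look like:

```python
def colony_radius(t, swarm=4, consolidate=2, speed=1.0):
    """Colony radius after t time steps of alternating swarm/consolidation
    phases: the edge moves at `speed` while swarming, then holds still."""
    period = swarm + consolidate
    full, rem = divmod(t, period)
    # Completed cycles each contribute `swarm` steps of motion, plus
    # whatever portion of the current swarm phase has elapsed.
    return speed * (full * swarm + min(rem, swarm))

def ring_edges(t, swarm=4, consolidate=2, speed=1.0):
    """Radii where consolidation phases began: the visible ring boundaries."""
    period = swarm + consolidate
    return [speed * swarm * k for k in range(1, t // period + 1)]
```

Drawing concentric circles at `ring_edges(t)` on the digital Petri dish reproduces the bullseye; syncing the phase clock to a heartbeat signal would give the human-microbe coupling the piece describes.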
My project combines Computer Science, Visual Art, and Climate Science to create a wall of procedurally generated flowers and plants that respond to real-world deforestation and afforestation data. The flowers are treated as particles whose spawn rate, disintegration, position, and style are driven by the datasets mentioned above. By interacting with the exhibit, visitors are reminded of the harm humans have caused the planet, but also of how possible it is to reverse some of these changes (the interactive component is an important part of achieving this).
I visualize the historical relationship between the arts and the (natural) sciences through florilegia, presented as both art objects and artifacts of early scientific inquiry. In presenting the visualization as a reflection of science's attempt to represent the world as a form of (or precursor to) information-making, I illuminate several dialectical relations (natural/digital, virtual/ephemeral, art/science, etc.) that form in the attempt to reconcile these complex developments. In presenting a digital visualization in the traditional image of flora, I play with concepts inherent in ecology, particularly those of the "ecological turn," as they juxtapose the presentation of the mechanistic (computational) sciences.
Humans often put hundreds of hours of labor and computational processing into tackling a problem, only to realize that nature has already provided a similar solution far more efficiently. To me, the most beautiful case of this is slime molds, especially Physarum polycephalum (colloquially known as "the blob"). Simulating this slime mold by generating a mass of particles and giving each one only a few rules for how it interacts with the others, I demonstrate how a brainless, acellular organism like a slime mold is (seemingly) magically able to arrive at solutions to complex problems similar to those that humans and computers come up with.
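The "few rules" behind such Physarum-style simulations are typically: each particle senses a shared trail grid at three points ahead, turns toward the strongest deposit, steps forward, and lays down trail of its own. A minimal sketch, with hypothetical grid size, agent count, and steering parameters:

```python
import math, random

W = H = 64  # trail grid dimensions (toroidal)

def sense(trail, x, y, heading, offset, dist=3):
    """Sample the trail grid a short distance ahead at an angular offset."""
    a = heading + offset
    sx = int(x + dist * math.cos(a)) % W
    sy = int(y + dist * math.sin(a)) % H
    return trail[sy][sx]

def step(agents, trail, turn=0.4, speed=1.0, deposit=1.0):
    """Advance every agent one tick: sense, steer, move, deposit."""
    for ag in agents:
        x, y, h = ag
        front = sense(trail, x, y, h, 0.0)
        left = sense(trail, x, y, h, +turn)
        right = sense(trail, x, y, h, -turn)
        if left > front and left > right:
            h += turn
        elif right > front and right > left:
            h -= turn
        x = (x + speed * math.cos(h)) % W
        y = (y + speed * math.sin(h)) % H
        trail[int(y)][int(x)] += deposit
        ag[0], ag[1], ag[2] = x, y, h

random.seed(0)
trail = [[0.0] * W for _ in range(H)]
agents = [[random.uniform(0, W), random.uniform(0, H),
           random.uniform(0, 2 * math.pi)] for _ in range(200)]
for _ in range(50):
    step(agents, trail)
```

A full simulation would also diffuse and evaporate the trail each tick, which is what makes the network's filaments prune themselves into efficient paths.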
Art & Tech (Spring 2022)
I turn the Visualization Wall into an artist's notebook where users collaborate with an artificially intelligent computer program on an abstract digital painting and a short story. I use machine learning to create a system where the computer and the user take turns writing a sentence/painting a stroke, each responding to the other to create unique works of literary and visual art that can then be shared and collected into a book. The core takeaway of the piece is to visualize the necessary co-authorship between humans and machines as a constructive, conversational relationship and source of creativity for both artists and technologists.
My interactive art installation encodes pre-recorded memories from members of the Barnard/Columbia community about loved ones (friends, family, etc.). Through the visualization of audio or written text, I capture and display richer information and enhance the ways that viewers might resonate with stories about strangers. Viewers interact with each story by touching the screen, inherently altering the memory itself; when the memory is displayed again, all past interactions are displayed with it. Through this installation, I change the ways that the viewer understands and interacts with memory through multi-sensory interaction. I also exhibit the power that collective memory can carry, especially when we share with each other in our most important moments.
A self-playing simulation of an unevenly heated New York City, accompanied by a sonification of its heat data. Through real-time intervention, experimentation, and observation, the audience can come to understand the urban heat island effect as an embodiment of modern citizens' desensitized biological and political bodies, and recognize that change is possible, but only incrementally and with ongoing maintenance.
The current way we interact with technology is constraining: our gestures are dictated by the programmed languages of mobile apps: tap, double tap, drag, press… Taking that as its premise, The Hanging Garden (THG) creates a generative ecosystem where each individual's unique gesture sows the seed of a digital biodiversity. We are all co-creators with technology, not just users whose behaviors are tamed by it; by interacting with THG, we can all envision a future with more organic human-machine interaction.