Virtual Atrium

Milo Bonacci
Dec 23, 2020

As a collective space, the Graphic Design Atrium's spatial allocation and use are in constant flux. It appears again and again in thesis books over the last two decades: sometimes as a backdrop, sometimes as a set, and other times as a sort of shorthand for a collective consciousness. The Atrium has thoughts… the Atrium has memories.

The project started with a simple goal: to expand a physical space through virtual means. With the onset of the pandemic, the communal core of the shared studio was decentralized and compartmentalized, and the vibrancy of the physical space often seemed like a distant memory. The project was conceived as a way to bridge the gap between the actual and virtual spaces we find ourselves in, by providing a platform for exposure to the ambient sounds, dialogues, and conversations that would otherwise have been encountered first-hand.

A portion of the original capture—100 photos-turned-3D-model via ReCap.

For the first iteration, a photogrammetric model of the atrium was generated from a series of 600 photographs. In this process, the software tends to render light and shadow with dimensionality and often misinterprets reflections as depth. The imprecise process resulted in a three-dimensional digital model that occupies multiple scales simultaneously and at times echoes itself; the output is the computer trying to make dimensional sense of an array of two-dimensional photographs. Subtle variations in lighting and angle, and changes in the space itself, lead to very different outputs.

The original intent was to provide an accessible (as in, no VR goggles required) virtual space that could be occupied and explored online, into which live ambient sound from the actual physical space would be broadcast in real time. Conceived as the input for the live sound, a pair of “listening posters” were to be hung centrally from the atrium ceiling, acting as large-diaphragm sensors swaying in the breeze and absorbing sound, expanding the “space” of the place through the broadcast sound. For the time being, however, the live broadcast portion of this project has been shelved in favor of field recordings and curated audio pieces. As it turns out, embedding live sound into a WebGL model is not as straightforward as I had imagined, and was not without its own drawbacks.

A mock-up of the “listening posters” installed in the space intended to broadcast ambient sound into the Unity WebGL model.

Because of these complications, I decided to fall back on sound files nested within the standalone app, populating the virtual space with “audio easter eggs” placed into the model as a way to tell the story of a place and its changes. Using the photo-derived digital model as a platform, I began to wonder about the possibility of publishing sound in a nonlinear format, independent of streams and feeds, hopefully encouraging a sort of aural foraging with each update. Could the model function like a three-dimensional radio dial? Could each new “issue” include a new photogrammetric model of the atrium, or perhaps a more localized scan pertaining to the piece? In this version, the act of scrolling is replaced by three-dimensional, first-person exploration; unexpected encounters with invisible audio sources allow for crosstalk and unplanned sequences.

In this model there are two types of audio sources: ambient and curated. The ambient sounds are relatively short loops scattered throughout the model, collected from the space itself: the hum of a server, the white noise of the HVAC system, the sounds of people coming and going. The curated sounds exist in their own zones and are not ambient per se, but audio pieces relating to or coming from the atrium. There are three “curated” pieces in this version of the model, each with its own collider trigger zone that makes popup text appear on screen while you're inside it, the text functioning like a title card or film credit. These sounds also loop, so you might walk into the middle of something. Their differing lengths and independent starts and repeats make for combinations and sequences of sounds unlikely ever to repeat.
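In Unity, these zones are colliders marked as triggers; stripped of the engine, the underlying logic is just a point-in-sphere test against the player's position. A minimal sketch of that idea, outside Unity (all names and values here are hypothetical, for illustration only):

```python
import math

class TriggerZone:
    """A spherical trigger region paired with a title card (illustrative sketch)."""
    def __init__(self, center, radius, title):
        self.center = center
        self.radius = radius
        self.title = title

    def contains(self, point):
        # Inside the sphere when the distance from the center is at most the radius.
        return math.dist(self.center, point) <= self.radius

def visible_title(zones, player_pos):
    """Return the title card for the first zone the player stands in, if any."""
    for zone in zones:
        if zone.contains(player_pos):
            return zone.title
    return None

zones = [TriggerZone((0, 0, 0), 3.0, "Piece I"),
         TriggerZone((10, 0, 5), 2.0, "Piece II")]
print(visible_title(zones, (1, 1, 1)))   # inside the first zone: "Piece I"
print(visible_title(zones, (20, 0, 0)))  # outside every zone: None
```

In the actual Unity model, the engine handles the geometry; a script on the Sphere Collider would show the title card on trigger entry and hide it on exit.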

One of the “curated” audio pieces amid a field of ambient recordings. This particular piece uses a Sphere Collider as a trigger to pop up a title card on entry. Also shown here are the two overlapping models of the same space; the lighter was captured at the beginning of the semester, the darker at the end. Although an effort was made to repeat the same steps in the second capture, the software interpreted them in vastly different ways. The space itself had changed over this time: furniture had moved, installations had popped up, the lighting differed.

This version of the model contains two variations of the atrium space: one photogrammetric model generated at the beginning of the semester and one from the end. A slow change in the background reveals one while concealing the other. I can imagine other ways this oscillation between old and new could play out and be refined as more scans are layered into the model. For now, though, the two models remain static; it's the context around them that obscures and highlights them.
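The oscillation itself is simple: the background brightness cycles slowly between black and white, so the darker capture reads clearly against a light background and dissolves into a dark one, while the lighter capture does the opposite. The cycle can be sketched as a triangle wave over time (function name and period are hypothetical):

```python
def background_brightness(t, period=60.0):
    """Oscillate background brightness between 0.0 (black) and 1.0 (white)
    as a triangle wave with the given period in seconds (illustrative values)."""
    phase = (t % period) / period            # 0..1 through one full cycle
    return 2 * phase if phase < 0.5 else 2 * (1 - phase)

# A dark model is most visible when the background is brightest, and vice versa.
print(background_brightness(0.0))    # 0.0, fully black
print(background_brightness(30.0))   # 1.0, fully white
print(background_brightness(45.0))   # 0.5, halfway back to black
```

In Unity, the equivalent would be updating the camera's background color each frame from a value like this, or from an animation curve.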

The background fades from black to white and back, slowly revealing the various captures. Here, the most recent capture is rendered dark.
One of the three curated sound pieces with a pop-up title card.
Another title card from a second curated sound piece.

The models in their current state are perhaps unnecessarily dense, which makes navigating them a bit stuttery on a lower-powered computer. The next level of refinement would require a higher level of specificity: scope, detail, model dimensions, and point-cloud density. And maybe it isn't necessary to model the entire space with each update; I can imagine fragments being captured and overlaid into the model with each new installment, a localized detail of where a conversation or performance took place along with its audio-artifact counterpart.

The third curated spot, audible from a certain corner of the model.
