Tiarnan Mathers & Trushita Yadav
Arch 702B Spring 2023
Professor Sandra Manninger
MS in Architecture, Computational Technologies
New York Institute of Technology

Ai{n}u Interaction
How can our Sony Aibo (Ai{n}u) interact with people throughout the exhibition?
The goal of our project is to allow Ai{n}u to explore sections of the museum containing obstacles and interact with visitors.

Ai{n}u Explores
Ai{n}u has a built-in “patrol mode” in which it patrols a designated area in search of specific people. Once Ai{n}u finds such a person, it takes a picture of them. Ai{n}u can also generate a plan of the space it explores.
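Under the hood, these behaviors can be driven over Sony’s aibo Web API (part of the aibo Developer Program). The sketch below is a minimal illustration, not our final implementation: it assumes an access token and device ID have already been issued, and the capability names find_person and take_picture should be verified against the current API documentation.

    import os
    import time
    import requests

    BASE = "https://aibo.aibo.com/v1"
    TOKEN = os.environ["AIBO_ACCESS_TOKEN"]  # issued by the aibo Developer Program
    DEVICE = os.environ["AIBO_DEVICE_ID"]
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    def execute(capability, arguments=None):
        """Start a capability on the device, then poll until it settles."""
        r = requests.post(
            f"{BASE}/devices/{DEVICE}/capabilities/{capability}/execution",
            headers=HEADERS,
            json={"arguments": arguments or {}},
        )
        r.raise_for_status()
        execution_id = r.json()["executionId"]
        while True:  # the API runs actions asynchronously, so poll for a result
            s = requests.get(f"{BASE}/executions/{execution_id}", headers=HEADERS)
            s.raise_for_status()
            result = s.json()
            if result["status"] in ("SUCCEEDED", "FAILED"):
                return result
            time.sleep(2)

    # One patrol step: look for a person, then photograph whoever was found.
    if execute("find_person")["status"] == "SUCCEEDED":
        execute("take_picture")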



Displaying Ai{n}u’s Images
We anticipate having a screen in the Venice Biennale exhibition dedicated to displaying the results of Ai{n}u’s exploration.

Ai Emotion
In Ai{n}u’s live camera feed, emojis show viewers how Ai{n}u feels about a person or place.
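A minimal sketch of that overlay, assuming an emotion label is already available for each frame: the emotion-to-emoji mapping and the frame source below are placeholders, and Pillow needs an emoji-capable font such as Noto Color Emoji.

    from PIL import Image, ImageDraw, ImageFont

    EMOJI_FOR_EMOTION = {  # assumed mapping; tune to Ai{n}u's actual states
        "love": "❤",
        "happy": "😊",
        "curious": "🤔",
    }

    def overlay_emotion(frame, emotion):
        """Stamp the emoji for `emotion` onto the top-left of a video frame."""
        emoji = EMOJI_FOR_EMOTION.get(emotion, "🐶")
        draw = ImageDraw.Draw(frame)
        # Noto Color Emoji is a bitmap font shipped at a single 109 px size.
        font = ImageFont.truetype("NotoColorEmoji.ttf", 109)
        draw.text((20, 20), emoji, font=font, embedded_color=True)
        return frame

    frame = Image.open("live_frame.jpg")  # placeholder for a live-feed frame grab
    overlay_emotion(frame, "love").save("live_frame_annotated.jpg")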


Midjourney Visualizations
Midjourney is capable of creating images from mere emojis (such as the image below).

Midjourney Outputs
Theoretically, Midjourney can blend images of people with images generated from emojis that reflect Ai{n}u’s emotions in the live camera feed. The resulting images can then be projected on a screen. The following “equation” is a sample of the Midjourney sequence.

The first image is that of a person generated by Midjourney (a placeholder for an image taken by Ai{n}u). The second image is a sample of what Midjourney visualized from the ❤ emoji read off the live camera feed. These two images were blended together to generate the final image:
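Midjourney has no public API, so for now the blend is driven through Discord by hand. The helper below only assembles the prompt string to paste into the /imagine command: supplying two image URLs as image prompts blends them, and --iw (image weight) biases the result toward the images. The URLs here are placeholders.

    def blend_prompt(person_url, emoji_render_url, image_weight=1.5):
        """Build a Midjourney image-prompt that blends Ai{n}u's photo of a
        person with the rendering of the emoji read off the live feed."""
        return f"/imagine prompt: {person_url} {emoji_render_url} --iw {image_weight}"

    print(blend_prompt(
        "https://example.com/ainu_person.jpg",         # photo taken by Ai{n}u
        "https://example.com/heart_emoji_render.png",  # Midjourney's ❤ render
    ))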

Manipulating a Space
Manipulating People
Final Outputs




Next Steps
The next step of our project is to automate this process in an application so that viewers at the Venice Biennale can see their images in real time.