Assignment IV | Final Project | Carlie Yutong ZHANG
The story of SELF CHURCH:
If I fall into the infinite darkness of self-hypnosis
If I am imprisoned in the curse from myself
If I am driven by my own order
Will I fly through the singularity,
and get to the horizon on the other side?
Will you enter my self-church uninvited?
The dogma will then be broken
Will I be roused up?
Assignment III | Video Installation | Carlie Yutong ZHANG
ORBIT TRAINING is a real-time interactive installation designed for future life in outer space: a training that helps people better understand how orbits work, so that they can keep themselves in the orbit of those they care about. The installation creates an illusion by capturing a person’s real shadow cast by a light, then regenerating it as a new shadow that constantly moves from right to left. The closer people get to the wall, the faster the shadow moves along its invisible orbit.
An object in motion will stay in motion unless something pushes or pulls on it. This statement is Newton’s first law of motion. Without gravity, an Earth-orbiting satellite would fly off into space along a straight line. With gravity, it is pulled back toward Earth. An object’s momentum and the force of gravity have to be balanced for an orbit to happen. If the object’s forward momentum is too great, it will speed past and never enter orbit. If its momentum is too small, the object will be pulled down and crash. In a way, all human beings have a gravity of their own that we never notice; we, too, adjust our speed to stay in orbit around each other. In ORBIT TRAINING, the audience becomes an Earth without even noticing it at first. The moon changes its shape because it is a shadow, and changes its speed because of the audience’s presence in the space.
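The closer-means-faster mapping described above can be sketched as a small function. This is a hypothetical reconstruction, not the installation’s actual code: the distance range, speed range, and function name are all illustrative, assuming some depth sensor reports how far the visitor stands from the wall.

```python
def shadow_speed(distance_m, min_d=0.5, max_d=4.0,
                 min_speed=20.0, max_speed=300.0):
    """Map distance to the wall (meters) to shadow speed (pixels/second).

    Closer means faster: the distance is clamped to [min_d, max_d] and
    mapped linearly, inverted so that min_d yields max_speed.
    """
    d = max(min_d, min(max_d, distance_m))
    t = (d - min_d) / (max_d - min_d)   # 0.0 at closest, 1.0 at farthest
    return max_speed - t * (max_speed - min_speed)
```

Per frame, the shadow’s x position would then decrease by `shadow_speed(d) * dt` to produce the right-to-left orbit.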
Our team: Manning Qu, Jingru Yin.
Escape the No.4 is an AR/VR room-escape experience. It will be set up in a corner of a classroom with a basic desk and a few lockers. The backstory takes place in a creepy mental hospital with an ulterior secret that no one knows. The experience is solo: one audience member at a time. The player takes the role of someone who has been framed as a psychopath and brought here; they need to find clues with AR (we’ll provide Google Cardboard) in the physical space and escape within a short time (approximately 5 minutes). The clues are target pictures hidden in ordinary objects, which our app uses to trigger images, animations, and videos. The experience will also include some performance to bring the audience into the context.
full documentation coming…
The player will be asked to read a brief intro before entering the space. A performer dressed as a doctor invites the ‘patient’ into the space, which is an office of the mental hospital. The performer leaves after delivering tools and the first clue in character. The player then has 5 minutes to achieve the goal of escaping. The clues will be under a glass, on a chart that needs to be filled in following instructions, inside a locker, and hidden in a pill bottle and hanging pictures. While finding each clue, the player is also exposed to the story through supporting props. Finally, the player finds the tool to cut the bar bracelet, which declares victory. Players are welcome to RSVP and come back again to finish the journey if they haven’t managed it on the first try.
[initial documentation for prototype – full documentation coming]
I wanted to explore whether or not embodied violence would resensitize or desensitize a user to violent actions. This was inspired by viewing a VR exhibit at the Whitney Museum (http://www.newyorker.com/culture/cultural-comment/confronting-the-shocking-virtual-reality-artwork-at-the-whitney-biennial). I was intrigued by the multitude of reactions to the piece, ranging from nervous laughter to absolute disgust. I found myself staying through the entire duration of the piece. Even though it looked real, I knew that it was not. This led me to wonder what the piece would be like if the user could interact with and affect the scene.
In my Embodied Violence project, the user may only continue a violent scene if they act out the same gesture as the violence on the screen. So if Robert De Niro is fighting someone in Raging Bull, the video pauses right before a punch is thrown and only continues if the user watching the film also throws a punch. To detect the gestures, I used a Kinect and programmed simple gestures for the user to enact.
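One simple way to program such a gesture gate, sketched here as an assumption rather than the project’s actual code, is to track the wrist joint from the Kinect skeleton and treat a large frame-to-frame displacement as a punch. The class name, threshold, and coordinates below are all illustrative.

```python
import math

class PunchDetector:
    """Flags a punch when a tracked wrist moves fast enough in one frame."""

    def __init__(self, threshold=0.4):
        self.threshold = threshold  # meters per frame, tuned by hand
        self.prev = None            # previous wrist position (x, y, z)

    def update(self, wrist_xyz):
        """Feed one wrist position per frame; returns True on a punch."""
        if self.prev is None:
            self.prev = wrist_xyz
            return False
        dist = math.dist(self.prev, wrist_xyz)
        self.prev = wrist_xyz
        return dist > self.threshold

# Example: the clip stays paused until the viewer punches.
detector = PunchDetector()
video_paused = True
for frame in [(0.0, 1.0, 2.0), (0.05, 1.0, 2.0), (0.6, 1.1, 2.0)]:
    if video_paused and detector.update(frame):
        video_paused = False  # viewer threw a punch: let the scene continue
```

A velocity threshold like this is crude (a fast wave would also trigger it), but it matches the "simple gestures" approach described above.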
I found four scenes of violence from Raging Bull, Fight Club, Drive, and American History X and edited them together in that order. I chose these films because I wanted acts of violence that were repetitive and also fairly simple to gesture (punches and kicks). The order was set because the violence depicted increases film by film. You can view the video here: https://drive.google.com/file/d/0B5fYaAwZcIehbm85UXNZMUhqNmc/view?usp=sharing
Note that in the last film (American History X), Edward Norton’s character commits a heinous act of violence against another person. Although I would hope that no one would actually continue the film after it pauses at this point, I knew that unfortunately would not be the case. So instead of continuing the film and showing the violence, I switch on the infrared feed of the Kinect camera and display that instead. The users view themselves committing the violence, robbing them of the gratification of seeing the aftermath.
I found this project a bit polarizing to work on. I believe that I am desensitized to violence in works of fiction, so I wanted to kind of fix that. And it kind of worked… at the start. The embodied act of punching and seeing the result of that punch on film was very effective. It got even more effective as the violence increased, making me stop and think about whether or not I should continue kicking this fictional character on the floor.
But after throwing hundreds of punches to debug the code, and seeing the same actors get pummeled over and over again, I found myself back at square one… and I can’t imagine what Edward Norton feels like.
My final project for Readymades was a piece called “MY PHONE YOUR PHONE”. This is an interactive installation that requires two users to simultaneously connect to a local web server on their phones and navigate to a web page. This page offers a few variations on simplistic animations that feature the words: My Phone Your Phone Mine Ours.
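A local server for a piece like this can be sketched with the standard library alone. This is a hypothetical minimal version, assuming both phones join the same Wi-Fi network and open the server’s address; the HTML, handler names, and port are illustrative, not the project’s actual files.

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

# Placeholder page; the real piece served animated variations of these words.
PAGE = b"""<html><body>
<p>My Phone Your Phone Mine Ours</p>
</body></html>"""

class PhoneHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every visitor gets the same page, so two phones show it in sync.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        pass  # keep the console quiet during the installation

def make_server(port=0):
    """Bind on all interfaces so phones on the same network can reach it."""
    return HTTPServer(("0.0.0.0", port), PhoneHandler)
```

With `port=0` the OS picks a free port; in a gallery you would pin a fixed port and post the URL for visitors.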
Captive Audience is a final project by Rita Cheng. This was a continuation of the Emotional Readymade assignment; the idea was to create an object that expresses gratitude towards anybody who speaks into it. In summary, it’s a clapping machine.
Everyone knows these days the only way to achieve greatness is by manipulating the news media. Ahead of the Times (ATs) represents a new line of sneakers that will have you balling like the pros. These shoes keep you above it all while misinforming the sheeple trying to hold you back. ATs work by projecting a jumble of today’s current headlines behind you as you walk, creating a “laser-like” effect that will shock and awe. Of course, ATs also make it easy to keep up on the haters. Just take a seat, kick your feet up, and scroll through headlines at your leisure. Available exclusively at itpreadymades.wordpress.com
Part of the idea in making the projector shoe was to build something where the projection is in the shoe. Everything that we looked at had the electronics on the outside and we wanted to take this farther and embed everything inside, so that it’s one “cohesive” object. To get there, our prototype has taken quite the beating but it works.
We decided on a pair of size 12 men’s hiking sneakers (so everything could fit inside, as well as a foot!), a Raspberry Pi 3 (the Pi Zeros were still on their way at this point), and a laser beam projector. Along with those parts, we needed space for a power cord, an HDMI cord, and a LiPo battery and charger.
Modifying the shoe meant stripping the padding on the inside and making holes in the heel for the projector lens to show through. To make the holes, Satbir got pretty close with the drill press and one of John’s personal specialty bits. Securing the shoe was a two-person job: Angela helped clamp the shoe down, then the bit was used to make a hole, and the process was repeated for the second shoe.
From here it was a matter of figuring out how all the parts would fit into the shoe. The projector had a specific location: the back of the shoe, in the heel. We decided it would be okay if the Raspberry Pi 3 sat on the outside of the shoe; that wouldn’t be the final look, since we were getting Pi Zeros the following week. The battery was placed behind the projector in a small pocket that was available. The battery charger was pinned to the outside of the shoe so the wires could fit. For the next version, a pocket would be made to house the electronics inside the shoe. A button was glued into the front of the shoe, near where the ball of the foot would be; pressing it triggers the next image.
For the second version we used the Pi Zeros, which really gave us what we wanted: a computer small enough to fit into the shoe without hurting any foot (too much).
For this version, two pockets were sewn up so that the Raspberry Pi, battery pack and charger, and some of the wires (coiled up) could fit into the side of the shoe where the instep goes, freeing up space and making walking a bit more comfortable. The wires were exchanged for shorter ones, and cuts were added to the shoes so the wires could poke out around the ankle. These modifications left more room for walking.
The software for this worked primarily by creating a folder full of images and launching a command-line app called “feh” to run a GPIO-triggered slideshow. The first step was installing feh. It is billed as a lightweight image viewer for Linux systems, so it seemed appropriate for running a slideshow on a low-power device like the Pi Zero. Here is a good tutorial on a similar use case. Next we needed some way to trigger the next slide via GPIO input. pikeyd came up as an easy way to do this: it is a keyboard daemon for Raspberry Pi that simulates a specific key press when it receives input on a specific GPIO pin. Once we configured pikeyd, each time we connected GPIO 3 to ground it would simulate a Right-arrow key press to advance the feh slide. At this point we could get the shoes working with a premade set of slides. The next challenge was updating the slides automatically. We used NewsAPI to pull headlines from English-language sources and a Python library called Pillow to generate images of the text on a black background. Old headline images were cleared out and replaced by updated headlines each time the Python script was run.
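The headline-to-image step above can be sketched as follows. This is a minimal reconstruction, assuming a list of headline strings has already been fetched from NewsAPI; the function name, output folder, image size, and default font are illustrative. feh would then be pointed at the output folder as its slideshow source.

```python
import os
from PIL import Image, ImageDraw  # Pillow, as mentioned above

def render_headlines(headlines, out_dir="slides", size=(640, 360)):
    """Write one white-on-black PNG per headline, clearing old slides first."""
    os.makedirs(out_dir, exist_ok=True)
    for old in os.listdir(out_dir):          # drop stale headline images
        os.remove(os.path.join(out_dir, old))
    for i, text in enumerate(headlines):
        img = Image.new("RGB", size, "black")
        draw = ImageDraw.Draw(img)
        # Default bitmap font; a real version would load a large TTF
        # and wrap long headlines across several lines.
        draw.text((20, size[1] // 2), text, fill="white")
        img.save(os.path.join(out_dir, "slide_%03d.png" % i))
```

Running this from cron (or at boot) would keep the folder fresh, so each walk projects that day’s headlines.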
We’d like to develop an entire shoe with this technology integrated from the beginning. A lot of the difficulty we had with miniaturization had to do with adapters and cable length, which could easily be eliminated with custom hardware. In addition, a purpose-built laser projector could dissipate heat and share power management with the computer more efficiently. Designing our own shoe would also let us make something that’s actually comfortable and more durable than a modified shoe could be. We also plan to experiment with different types of projections.