Good Boy, Sammy (final)

Good Boy, Sammy from Angela at ITP on Vimeo.

Good Boy, Sammy is an interactive hologram experience, where the user can interact with a holographic dog (holo-dog) and get him to do tricks on command.

BACKSTORY
From 1998-2015, Sammy was a toy poodle who lived an enchanted life of travel, beef spareribs and a king-sized bed. In his 17 years, he learned quite the array of tricks, most of which were captured eternally on mp4 in a 34-second video taken back in 2009 on a Sony Bloggie.

This project brings Sammy (back) to life, giving the user the chance to experience or re-live the splendor of this tiny, magical animal. I mean, how many poodles do you know who can yodel?

THE PROCESS
Video
Creating Good Boy, Sammy began by diving deep into the archives of recorded footage of Sammy (and believe me, there was a lot) to find this particular video, a montage of his tricks. I knew it would be the most comprehensive video that existed, and luckily I found it. However, the footage did not contain all of his tricks, the camera was unsteady, and, being from 2009, the quality was a bit messy, so I had some work to do.

http://www.angelaitp.com/wp-content/uploads/2016/04/colorchange.mp4

I brought the video into After Effects and rotoscoped Sammy, removing the background. I then created keyframes to center Sammy and stabilize the footage, making it seem as if it were filmed from one still location.
Next I divided up the footage into the different tricks and states of Sammy:

1) at rest/stay (base)
2) sit
3) down
4) speak/yodel
5) give me 5/$5

Programming Speech Recognition
A big part of the project is voice recognition. I looked at several options for coding this:

Using ISADORA & Syphon
p5.js JavaScript library
Annyang!
PocketSphinx.js
HTML5 Speech Recognition
Using MAX MSP
op.recognize

After testing some of these tools, I pushed forward with p5.js, as it offered the most dynamic way to import, load, and call videos in a JavaScript sketch.

I prototyped the code without the videos, to have the voice recognition react with color-changing visuals. The commands:
sit, lie down, speak, yodel, 5, five dollars

 

http://angelaitp.com/PROJECTS/GoodBoySammyShapes/
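For reference, here is a minimal sketch of how that color prototype could look, assuming the p5.speech library (p5.SpeechRec); the specific colors are just illustrative stand-ins for the trick videos:

// Sketch of the no-video prototype: speech recognition drives a color change.
// Assumes p5.js and the p5.speech library (p5.SpeechRec) are loaded in index.html.
let speechRec;
let bgColor;

function setup() {
  createCanvas(640, 480);
  bgColor = color(200); // neutral "base" color

  speechRec = new p5.SpeechRec('en-US', gotSpeech); // callback fires on each result
  speechRec.continuous = true;      // keep listening
  speechRec.interimResults = false; // only act on final results
  speechRec.start();
}

function gotSpeech() {
  if (!speechRec.resultValue) return;
  const heard = speechRec.resultString.toLowerCase();

  // Map each command to a color (placeholder for the trick videos)
  if (heard.indexOf('sit') !== -1) {
    bgColor = color(255, 0, 0);
  } else if (heard.indexOf('lie down') !== -1) {
    bgColor = color(0, 255, 0);
  } else if (heard.indexOf('speak') !== -1 || heard.indexOf('yodel') !== -1) {
    bgColor = color(0, 0, 255);
  } else if (heard.indexOf('five') !== -1 || heard.indexOf('dollars') !== -1) {
    bgColor = color(255, 255, 0);
  }
}

function draw() {
  background(bgColor);
}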
With this working, it was time to pull the videos in and make the sketch interactive.
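A rough sketch of that step (the file names here are hypothetical placeholders): inside setup(), each trick clip is loaded with createVideo() and hidden until its command is heard, with only the base footage looping.

let base, sit, liedown, speak, yodel, five; // p5 video elements for each trick

function setup() {
  createCanvas(640, 480);

  // File names are hypothetical placeholders for the trick clips
  base = createVideo('base.mp4');       // at rest / stay loop
  sit = createVideo('sit.mp4');
  liedown = createVideo('liedown.mp4');
  speak = createVideo('speak.mp4');
  yodel = createVideo('yodel.mp4');
  five = createVideo('five.mp4');

  // Keep everything hidden except the looping base footage
  [sit, liedown, speak, yodel, five].forEach(function (v) { v.hide(); });
  base.loop();
  base.show();
}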

After much testing and prodding I was able to get a working sketch with videos, but the videos would not appear, even though the sketch behaved as if they were drawing to the canvas. I consulted p5.js all-star Marc Abbey for help. We discovered that the videos were too large (I hadn’t resized them all yet). He encouraged me to keep working on the code, and he also had a great idea to add “or” statements (||) to the voice recognition so it also catches words it commonly mishears. For example, when the word “sit” is said, the recognition often hears “set”, so he added that in. I will do this for more words.
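A small example of what those “or” statements look like, using the same mostrecentword variable as the code excerpt later in this post:

// Accept a common mis-hearing with an "or" ( || ) check:
// "sit" often comes back from the recognizer as "set".
if (mostrecentword.indexOf("sit") !== -1 || mostrecentword.indexOf("set") !== -1) {
  sit.show();
  sit.loop();
  base.hide();
}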

I got it to a point where the videos transition based on command, but they continue to loop behind each other: when one replaces another, the previous one is still running. If you say “speak,” he will still speak while he is giving five or sitting. It also gets stuck at “yodel” sometimes. I tested and tested with play(), hide(), stop(), whiles, ifs, etc., but could not get the videos to behave how I needed them to. After exhausting my knowledge of JavaScript, I reached back out to Marc, who dove into the voice recognition library and showed me how to incorporate the “._onended” code and turn it into a function that hides and controls the videos. We also used the pause() function, which worked much better than stop() had.
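A rough sketch of that approach, assuming the public onended() callback on p5 video elements (the helper names here are mine, not from the actual sketch):

// Sketch of the fix: play a trick clip once, then fall back to the base loop.
// playTrick()/backToBase() are hypothetical helper names; onended(), play(),
// pause(), loop(), show(), and hide() are p5 media element methods.
function playTrick(trick) {
  base.pause();
  base.hide();
  trick.show();
  trick.play();                // play once rather than loop()
  trick.onended(backToBase);   // fires when the clip finishes
}

function backToBase(clip) {
  clip.pause();                // pause() worked much better than stop() here
  clip.hide();
  base.show();
  base.loop();
}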

And finally it was time to put all of it together!

 


PUTTING IT ALL TOGETHER

The set-up for Good Boy, Sammy included:

projector
metal dog cage with bed
acrylic sheet
hologram film/sheet
mac mini
microphone

I uploaded the sketch onto a mac mini and set the mac mini under the front of the bed inside the cage. I used an external microphone to pick up the sound close to where the user would interact with the hologram. I laser cut the acrylic sheet to fit snugly inside the center of the cage, and mounted the hologram film onto the acrylic sheet. I set up the projector for rear projection.


Initially I used Syphon through Isadora, but the sketch froze too often. So, at the suggestion of my teacher Gabe Barcia Colombo, I put the browser in full screen for the best results. In order to restart the sketch while still in full-screen projection, and to be able to click “allow” for microphone usage in the browser, I taped a mouse to the floor to fix its location for accurate clicking and used a wireless keyboard to refresh the browser.

And just like that, Sammy was reborn as a holo-dog.

Good Boy, Sammy – In Action
Even in his rebirth, Sammy still finds a way to talk back.

http://www.angelaitp.com/wp-content/uploads/2016/04/InClass.mov

Video in Japanese

Good Boy, Sammy (いい子、サミーくん) from Angela at ITP on Vimeo.


Trat – Bluetooth and Sound

After almost everything got erased, I fortunately found a backup of the animation I had saved, but I had to recreate everything else, such as the videos, the Max patch, and the Arduino code. That experience taught me a big lesson to always back up my work, but also that it takes less time to do something the second time around.

Here is the result:

 

One of the main improvements from last week was creating a wireless connection between the laser break-beam sensor that lives in the trash can and the Max patch that runs on the computer. At first I tried to use a RedBear board, but I could not connect it to the Arduino. Then I switched to a Bluefruit to send the output (whether or not the beam was broken) to the computer and from there to the Max patch.

I used Adafruit’s guidelines to hook up the BlueFruit:

Adafruit’s Bluefruit wiring

 

My wiring (Bluefruit and laser break-beam)

I did not add anything special to the Arduino code, but I did do the following steps:

  1. Pair the computer with the EZ-Link.
  2. Change the Arduino port to “/dev/cu.AdafruitEZ-Link4c98-SPP”.
  3. Upload the Arduino code (after uploading the code through the Bluefruit, I could not open the Arduino Serial Monitor).
  4. Close Arduino and open Max.

Note: while the Arduino code is being uploaded, both the red and blue lights on the Bluefruit should flicker!

Arduino code:

// Laser break-beam sensor input and indicator LED pins
int laser = 2;
int led = 3;

int button = 0;   // state reported to Max over serial (1 = triggered)
int count = 0;    // loop counter used as a simple cool-down timer

void setup() {
  Serial.begin(9600);

  pinMode(laser, INPUT_PULLUP);
  pinMode(led, OUTPUT);
  Serial.println(button);   // report the starting state
}

void loop() {
  int val = digitalRead(laser);

  // Sensor triggered: turn the LED on and report 1 to Max
  if (val == 1 && button == 0) {
    digitalWrite(led, HIGH);
    button = 1;
    count = -1;             // restart the cool-down timer
    Serial.println(button);
  }
  // Sensor back at rest after the cool-down (~5 s at 5 ms per loop): reset and report 0
  else if (val == 0 && count >= 1000 && button == 1) {
    digitalWrite(led, LOW);
    button = 0;
    count = 0;
    Serial.println(button);
  }

  count++;
  delay(5);

  // Serial.println(count);
}

Good Boy, Sammy (In Development II)

Good Boy, Sammy has come a long way this week, with a ready-to-go prototype. #holoSam

Good Boy, Sammy: Working Prototype from Angela at ITP on Vimeo.

Video:
I finally finished rotoscoping the low-quality, high-motion, unsteady 2009 video of Sammy doing tricks. After rotoscoping, I pulled the footage back into After Effects as PNG files, not video, and centered the image frame by frame, so the visual of Sammy would be stable and centered.

I then cut the footage into smaller segments based on tricks/commands, starting with the default base state, which the footage loops back to when no commands are given.

Code:
It took some sleuthing to find the best way to use voice recognition in a sketch or patch. I explored Annyang and built an HTML page with JavaScript, trying several different versions of the sketch in HTML. I was able to load a video, and it was here that I learned that my videos were too large, so I decided to test with both smaller versions and the originals. However, I could not get enough control over the HTML and the embedded JavaScript at this point, so I decided to look elsewhere during this testing phase.

I discovered that p5.js had a new voice recognition library, which I was able to plug into a sketch and build around. To prototype the code without videos, I had the voice recognition react with color-changing visuals. The commands:
sit, lie down, speak, yodel, 5, five dollars

http://angelaitp.com/PROJECTS/GoodBoySammyShapes

After much testing and prodding I was able to get a working sketch with videos, but the videos would not appear, even though the sketch behaved as if they were drawing to the canvas. So I went to Marc Abbey for a help/code-sultation. We discovered that the videos were too large (I hadn’t resized them all yet). He encouraged me to keep working on the code, and he also had a great idea to add “or” statements (||) to the voice recognition so it also catches words it commonly mishears. For example, when the word “sit” is said, the recognition often hears “set”, so he added that in. I will do this for more words.

I got it to a point where the videos transition based on command, but they continue to loop behind each other: when one replaces another, the previous one is still running. If you say “speak,” he will still speak while he is giving five or sitting. It also gets stuck at “yodel” sometimes.

Code Excerpt:

if (mostrecentword.indexOf("dollars") !== -1 || mostrecentword.indexOf("five") !== -1) {
  if (currentSeconds - seconds <= 3) {
    // within ~3 seconds of the command: show the "give me five" clip, hide the rest
    five.show();
    five.loop();
    sit.hide();
    liedown.hide();
    yodel.hide();
    speak.hide();
    base.hide();
  } else if (currentSeconds - seconds > 3) {
    // after ~3 seconds: hide everything and return to the base loop
    five.hide();
    sit.hide();
    liedown.hide();
    yodel.hide();
    speak.hide();
    base.loop();
    base.show();
  }
}
http://angelaitp.com/PROJECTS/GoodBoySammyTest/

I will play around more with this to make it more seamless.
TO DO:
When a command is made, the video should play only once, then loop back to the “base” footage.

 

Projection Mapping & Sound:
I used Syphoner as a Syphon server/client and brought the sketch into Isadora for projection mapping. I ran into trouble getting the video to play out of the projector without freezing; for some reason it freezes, and there does not seem to be a way to code for Syphon within p5. Gabe suggested that I just play the sketch full screen in my browser in presentation mode and adjust the physical projector. This worked.

The next issue I need to tackle is the sound and mic. If the mic is on the computer, it is too far away from Sammy, so I connected an external microphone to the computer and slipped it into the cage underneath the bed. It picked up the sound much better; however, no sound came out of the computer speakers. This needs to get resolved.

Another to-do is to figure out how to hook up the mac mini.

 

Trat (Trash-Rat): Update

 

This week started with buying the perfect huge trash can in Brooklyn and carrying it all the way to ITP.


 

Animation and Projection:
I projected the animation onto the trash can and the white wall. My main problem was a white square that revealed the projection’s frame. That led me to try two approaches to hide it.

 

The first thing I tried was to invert the animation. So instead of having a white background, I had white rats.

 

 

Even though it solved the white-square problem, I felt that this new aesthetic looked too computer-generated. Therefore, I chose the second solution: I created a spotlight illusion, as if the projection were a street lamp shining on the trash can.

 

Interaction (sensor testing and Max patch):

I used a laser break-beam sensor, which works well but has a very small sensing field: only if you throw garbage right into the middle of the trash can does it activate the running rats.

The Max patch, with Syphon objects that enable the connection between Max and MadMapper:

 

Teat this Trash!

 

 

 

Next steps:
– Fix the Max patch and the Arduino code (make them more efficient).
– Expand the sensor’s sensing field either by using mirrors or using a proximity sensor.
– Make it wireless by using a Bluetooth dongle and a battery as the power supply.
– Add sound to the animation.
– Maybe paint the trash can white.
– Improve the resolution of the projection.