MISTKONECT : MIT Media Lab Design Innovation Workshop 2014

A low-cost holographic mist-screen interface that enhances visual display by embedding virtual objects in the real environment and enables gesture interaction using Kinect.


 

DESIGN PROCESS

IDEATION
Multiple brainstorming sessions were held over a span of 24 hours to come up with several ideas. Groups were then formed based on each member's liking for a particular idea.


 

Vision
Have you ever wondered what it would be like to have a virtual presence of the person you are talking to, the way Sirius Black talked with Harry Potter in the fireplace of the Gryffindor common room using the Floo Network? Or to control an interface in thin air like Tony Stark? Well, if you consider that magic, then this is surely magical! The project aims at creating a 3D projection of the person you are talking to (over a long distance) on a mist screen.

Why a mist screen?
Mist has always been related metaphorically to magic; it evokes the mystical and is often used to describe things that we do not understand or find curious. Since the track is called "Magical Interface", we found it highly pertinent to go ahead with the project "MistKonect", which uses a Kinect to project the image of the person on the other end of the line onto a mist screen. We can also make the screen interactive, detecting our gestures or letting us write on it.

Use Cases :
1) Fun, interactive communication: Imagine that rather than having a Skype chat on a 2D screen, one could actually see a projection of the person right in front of them, interact with them in real time using gestures, and even touch them!
2) A great learning tool: Children could learn basic shapes, alphabets, and numbers, or paint on an invisible mist screen. Would it not be the coolest blackboard to have?!

DESIGN ALTERNATIVES and PROTOTYPING
The major challenge was to build the mist screen, so we started by exploring various design alternatives for it.

Attempt 1 : Ultrasonic fog machine
Ultrasonic Foggers – Working
A piezoelectric transducer (resonating frequency 1.6 MHz) produces high-energy vibrations that turn the water into fog. The transducer creates high-frequency oscillations on the water surface, generating high-pressure compression waves that release vapor molecules into the air. The water particles in the fog are less than 5 microns in size. Ultrasonic foggers cannot be run dry; they need a sufficient amount of water to function.


The water needs to be deionized or distilled. A built-in sensor detects the presence of water and activates the transducer plate, which vibrates and breaks the water into droplets that vaporize into fog particles. Unlike thermal or heat-based foggers, the fog generated by an ultrasonic fogger is cold and wet. These foggers are small, cost-effective devices powered by an external AC/DC adapter.

Failure of Fog Machine

The fog machine we received was small, and the fog it produced was very low in intensity; it was not possible to project an image onto it.

Alternative Ideas – HACKING our way! 

We got down to some brainstorming and "jugaad" (an Indian word for a hack)! The following ideas came up for how we could possibly make an interactive 3D projection:


1) Steam from boiling water: Even though it was unrestrained, it showed good projection for laser light. When we tried projecting an image onto it, the results were not so good.

2) Water with glycerin: Professional fog machines used at parties use a glycerin–water mixture in the right proportion.
Very thick smoke: 30% glycerin | 70% water
Medium thick smoke: 20% glycerin | 80% water
Less thick smoke: 15% glycerin | 85% water
We made a solution with 30% glycerin and 70% water, expecting the fog to be denser, but the results were still not sufficient to produce the required fog.

3) Water with milk: Milk is added to a hookah to make its smoke denser. So at 1 am, we procured milk and checked whether the steam produced was dense enough. Fail again.

4) Water curtain: Since we failed to produce fog, we tried to create a water curtain instead.
We experimented with a water hose with a horizontal slit and passed water through it from a normal tap. We realized that with better water pressure we might get a proper curtain. The next day we got a water pump, but its pressure was so dismal that we had to drop the idea.

5) Water curtain on glass: Since the water pump failed, we thought we might try having the water flow over a glass plate for the required effect. This was okay, but not that great, since the water clung to the glass plate and tended to form streams very quickly as it flowed.

6) Water sprinkler: We dabbled with this idea since we read that Disney uses water sprinklers to create screens. We had a major resource crunch, since this would require an outdoor, dark environment.

7) Dry ice in boiling water: Procuring dry ice was a very difficult task in Mumbai, but we finally managed to get it, which gave us back our hope. When the dry ice was put in water, the effect was so good that we could practically think of creating a fog screen. To increase the intensity we boiled the water. Voila! The result was amazing; all we had to do now was create a laminar flow.

8) A fog machine designed for less than $2: We sat down to design our own fog machine. The prototype was drawn on paper, as shown in the pictures.


 

TECHNOLOGY : CODING KINECT 


To make it look like Tony Stark's interface from Iron Man, we had to add a Kinect to our arsenal. To create a wow factor, we had to incorporate custom gestures into our project. Thankfully, a recently updated Kinect Toolbox library came to our rescue. A gesture is represented as a union of defined states, and each state is described in terms of the relative positions of the skeleton joints. For example, in the zoom-in action that we implemented, the initial state is when both hand (palm) joints are close together in front of the abdomen, and the final state is when they are well separated from each other. The initial state translates to the following skeleton description –

  • The Z-coordinates of the right and left hand joints are less than the Z-coordinate of the abdomen joint (i.e. the hands are in front of the abdomen)
  • The Y-coordinates of the right and left hand joints are less than the Y-coordinate of the head joint but greater than the Y-coordinate of the hip joint (i.e. they are positioned between the head and the hip)
  • The X-coordinates of the right and left hand joints are within a given threshold of each other and lie between the X-coordinates of the two elbow joints (i.e. the hands are held close to each other, between the two elbows)

The description of the final state is similar: the Z- and Y-coordinate conditions are the same as above, and only the X-coordinate condition changes, to denote that the hand joints are now well separated, out to the right and to the left respectively. Taken together, this defines a gesture in which both palms start close to each other and are then moved apart in opposite directions. The GestureFactory of Kinect Toolbox executes at regular intervals and checks for the presence of any gesture defined in its list.
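As a rough illustration, a state check of this kind might look like the following C# sketch against the Kinect for Windows SDK skeleton stream. The class name ZoomInGesture, the threshold value, and the use of the Spine joint as the "abdomen" are assumptions made for this sketch, not the exact code from the workshop:

```csharp
using System;
using Microsoft.Kinect;

// Illustrative check for the "hands close together in front of the abdomen" state.
// Threshold and joint choices are assumptions; real values were tuned by hand.
static class ZoomInGesture
{
    const float HandsTogetherThreshold = 0.15f; // metres

    public static bool IsInitialState(Skeleton skeleton)
    {
        SkeletonPoint left  = skeleton.Joints[JointType.HandLeft].Position;
        SkeletonPoint right = skeleton.Joints[JointType.HandRight].Position;
        SkeletonPoint spine = skeleton.Joints[JointType.Spine].Position;   // "abdomen"
        SkeletonPoint head  = skeleton.Joints[JointType.Head].Position;
        SkeletonPoint hip   = skeleton.Joints[JointType.HipCenter].Position;

        // Hands in front of the abdomen (closer to the sensor than the spine joint)
        bool inFrontOfAbdomen  = left.Z < spine.Z && right.Z < spine.Z;
        // Hands vertically between head and hip
        bool betweenHeadAndHip = left.Y < head.Y && right.Y < head.Y
                              && left.Y > hip.Y  && right.Y > hip.Y;
        // Hands horizontally close to each other
        bool handsTogether     = Math.Abs(left.X - right.X) < HandsTogetherThreshold;

        return inFrontOfAbdomen && betweenHeadAndHip && handsTogether;
    }
}
```

The final state would use the same Z and Y conditions but require the hands to be far apart on the X axis instead.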

To connect these gestures with the actual manipulation of the image and the displayed objects, we worked on image manipulation in C#. To enlarge an image, we created a new BitmapImage object and copied the original image data onto it, then manipulated the width and height of the new BitmapImage using simple mathematical operations. This new image was connected to the completion of the gesture-recognition event of the Kinect, i.e. when the GestureFactory recognizes the zoom-in event, it triggers a function which displays the new BitmapImage in the placeholder of the original image. However, there was one issue with this approach: human gestures are continuous, not impulse-like, whereas the image change the computer makes is instantaneous. To give it a human-like feel, we used the animation support in C#, which lets the change in the image (zooming in or out) take place over a period of time, giving it a continuous feeling.
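A minimal sketch of the animated zoom is shown below. It assumes a WPF Image element and, for brevity, animates the element's size directly rather than swapping in a resized BitmapImage as described above; the method name, scale factor, and duration are illustrative:

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media.Animation;

static class ZoomAnimations
{
    // Hypothetical handler: when the zoom-in gesture is recognized, grow the Image
    // element to 1.5x its current size over half a second instead of jumping instantly.
    public static void ZoomIn(Image displayImage)
    {
        var duration = new Duration(TimeSpan.FromMilliseconds(500));

        var widthAnimation = new DoubleAnimation(
            displayImage.ActualWidth, displayImage.ActualWidth * 1.5, duration);
        var heightAnimation = new DoubleAnimation(
            displayImage.ActualHeight, displayImage.ActualHeight * 1.5, duration);

        displayImage.BeginAnimation(FrameworkElement.WidthProperty, widthAnimation);
        displayImage.BeginAnimation(FrameworkElement.HeightProperty, heightAnimation);
    }
}
```

A zoom-out handler would be the mirror image, animating the size back down over the same duration.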

Similar to this zoom-in option, we also deployed zoom-out, image-change, and swipe options. Image change was triggered by a push gesture of the right hand, whereas the swipe option was indicated by a wave gesture of the right hand.

On the D (demo) day, these gesture controls did amaze the audience; however, we noticed that the gesture recognition was not as sound when multiple skeletons were detected. Resolving multiple skeletons and incorporating more generic gestures is our work ahead with respect to the Kinect coding.

 

 
