

Skinput uses a bio-acoustic sensing array coupled with a wrist-mounted pico-projector to turn your skin into a touch-screen.

Confused? Don’t be. It’s amazingly simple.

Researchers at Carnegie Mellon University, working with Microsoft Research, have come up with a way to use the skin of your arm (or any other part of your body) as both a display and an input device, without implanting anything weird into you. It consists of two parts: a tiny projector that beams the image onto your skin, and an acoustic sensing array that listens to your arm. Tapping the projected “buttons” causes ripples to run through your skin and bones.

These waves change depending on where you tap, as they run through bone, soft tissues and the like. Special software analyzes these waves, and uses the information to work out exactly where you touched, just as if you were tapping an iPhone screen. Specific locations can be mapped to certain functions: in the video you see somebody playing Tetris by tapping their fingers.
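If you're curious how the software side of that might look, here's a toy Python sketch of the classification step. Everything in it is an assumption for illustration: the feature vectors, the location labels, the linear SVM, and the action mapping are placeholders, not the researchers' actual pipeline.

```python
# A minimal sketch of the tap-location classification idea, NOT the actual Skinput pipeline.
# Assumes each tap has already been reduced to a feature vector (e.g. per-sensor
# amplitudes and frequency-band energies from the armband's acoustic sensors).
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: rows are feature vectors, labels are tap locations.
X_train = np.array([
    [0.9, 0.2, 0.1, 0.4],   # tap near the wrist
    [0.3, 0.8, 0.2, 0.5],   # tap mid-forearm
    [0.1, 0.3, 0.9, 0.6],   # tap near the elbow
])
y_train = ["wrist", "forearm", "elbow"]

classifier = SVC(kernel="linear")
classifier.fit(X_train, y_train)

def locate_tap(features):
    """Map one tap's acoustic features to the most likely skin location."""
    return classifier.predict([features])[0]

# Each location can then be bound to a function, e.g. a game control or an MP3 button.
ACTIONS = {"wrist": "rotate piece", "forearm": "move left", "elbow": "drop piece"}
print(ACTIONS[locate_tap([0.85, 0.25, 0.15, 0.45])])
```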

Both sensor and projector can be put into the same armband, but the display isn’t strictly necessary: another use is to tap the fingertips to control an MP3 player, a task simple enough to rely on the user’s memory.

Bodily interface (Image: ACM)

Various tap-based interfaces are possible, and the thing that impresses us about all of them is the simplicity for the user. We worry a little though. We already mistake people muttering into their Bluetooth headsets for crazy people who talk to themselves. Now we have to distinguish joggers skipping tracks on their iPods from drug-fried nut-jobs who twitch and scratch at imaginary insects crawling over their flesh.

Thanks, researchers.

“Project Gustav is a realistic painting application that enables artists to become immersed in the digital painting experience.”  Gustav combines a natural user interface with natural media simulation and brush modeling to deliver an easy-to-use, intuitive, and flexible tool for experienced artists and novices alike.

Project Gustav enables lifelike digital painting with a brush …

Govindaraju, a senior scientist on the Applications Incubation team within Microsoft Research’s eXtreme Computing Group, is understandably proud of a project that replicates the real-life experience of painting on a digital canvas.

“Project Gustav uses an elegant natural media-simulation algorithm to mix, smear, and allow users to interact with paint on the canvas,” he says. “A novel, 3-D, deformable brush model takes advantage of the physical input parameters—such as area, pressure, and orientation—offered by recent stylus- and touch-input hardware.”

New, high-powered GPUs provide the horsepower that makes these simulations possible.
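Without pretending to reproduce any of that, here's a toy Python sketch of how stylus parameters like pressure and tilt can shape a brush dab. The formulas and constants are invented for illustration; this is not Gustav's deformable-brush model.

```python
# Toy illustration of pressure/tilt-driven brush "stamps"; NOT Gustav's algorithm,
# just a sketch of how stylus input parameters can change the mark that gets made.
import numpy as np

CANVAS = np.zeros((200, 200))          # grayscale canvas, 0 = blank

def stamp(x, y, pressure, tilt_deg, base_radius=6.0):
    """Deposit one dab: harder pressure -> bigger, denser dab; tilt stretches it."""
    radius = base_radius * (0.5 + pressure)          # pressure in [0, 1]
    stretch = 1.0 + tilt_deg / 90.0                  # tilt elongates the dab along x
    ys, xs = np.ogrid[:CANVAS.shape[0], :CANVAS.shape[1]]
    dist = ((xs - x) / stretch) ** 2 + (ys - y) ** 2
    dab = np.clip(1.0 - dist / radius ** 2, 0.0, 1.0) * pressure
    np.maximum(CANVAS, dab, out=CANVAS)              # composite the dab onto the canvas

# A short stroke: position, pressure and tilt sampled from a hypothetical stylus.
for px, py, p, tilt in [(40, 100, 0.3, 10), (60, 102, 0.7, 25), (80, 105, 1.0, 40)]:
    stamp(px, py, p, tilt)
```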

… or, with a fingertip.

“The team at Microsoft has developed new algorithms for simulation of art media. These algorithms are carefully designed to leverage the power of recent graphics processors to deliver a new level of realism in digital painting,” Govindaraju says. “In addition, the team has developed new techniques for simulating 3-D brush dynamics and modeling the subtle interactions between the canvas and the bristles of a brush.”
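At its simplest, that interaction means paint moves both ways: the brush deposits pigment and picks some up from the canvas. Here's a toy version of that exchange in Python, with made-up pickup and deposit rates and none of the GPU-accelerated sophistication Govindaraju describes.

```python
# Toy "smear" step: the brush picks up a little canvas color and deposits a little
# of its own. This bidirectional transfer is the core idea behind mixing and smearing;
# the rates and radius below are placeholders, not values from Project Gustav.
import numpy as np

def smear(canvas, brush_color, x, y, radius=5, pickup=0.3, deposit=0.5):
    """Blend brush paint with the paint already on the canvas around (x, y)."""
    region = canvas[y - radius:y + radius, x - radius:x + radius]
    picked_up = region.mean(axis=(0, 1))                   # paint the brush picks up
    new_brush_color = (1 - pickup) * brush_color + pickup * picked_up
    region[:] = (1 - deposit) * region + deposit * new_brush_color
    return new_brush_color                                 # the now-dirtied brush

canvas = np.ones((100, 100, 3))                            # white RGB canvas
brush = np.array([0.8, 0.1, 0.1])                          # reddish paint
brush = smear(canvas, brush, 50, 50)                       # brush comes back slightly lighter
```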

The work on Project Gustav removes many of the user-interface hurdles posed by previous painting software, letting users concentrate on the creative process rather than on computer distractions.

“An artist who usually shuns digital painting programs will have no problem getting right into Project Gustav. Gustav lets you focus on the task of painting or sketching, rather than on wrangling with a complex UI, as in many digital-media programs. Load it up on a notebook computer and take it with you for sketching wherever you are.

“A novice interested in painting as a hobby can get going right away and obtain an almost real-life experience without purchasing art materials. Children can draw all they want and still get the feel of real-world drawing and doodling—without all the cleanup.”

With its natural interaction metaphors, support for advanced input devices, and realistic modeling of paint media and painting tools, the team concludes, “you can really lose yourself in the program.”

“One artist got so immersed in the demo that he tried to smear the chalk on the screen with a licked finger, forgetting that it was not a real canvas.”

‘SixthSense’ is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information.


We’ve evolved over millions of years to sense the world around us. When we encounter something, someone or some place, we use our five natural senses to perceive information about it; that information helps us make decisions and choose the right actions to take. But arguably the most useful information that can help us make the right decision is not naturally perceivable with our five senses: the data, information and knowledge that mankind has accumulated about everything, which is increasingly available online. Although the miniaturization of computing devices allows us to carry computers in our pockets, keeping us continually connected to the digital world, there is no link between our digital devices and our interactions with the physical world. Information is traditionally confined to paper or, digitally, to a screen. SixthSense bridges this gap, bringing intangible digital information out into the tangible world and allowing us to interact with it via natural hand gestures. ‘SixthSense’ frees information from its confines by seamlessly integrating it with reality, thus making the entire world your computer.

The SixthSense prototype comprises a pocket projector, a mirror and a camera, coupled in a pendant-like wearable device. Both the projector and the camera are connected to a mobile computing device in the user’s pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user’s hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) on the tips of the user’s fingers using simple computer-vision techniques. The movements and arrangements of these fiducials are interpreted as gestures that act as interaction instructions for the projected application interfaces. The maximum number of tracked fingers is constrained only by the number of unique fiducials, so SixthSense also supports multi-touch and multi-user interaction.
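As a rough illustration of that fiducial-tracking step, here's a bare-bones OpenCV sketch that finds colored fingertip markers in a camera frame. The marker names, HSV color ranges and camera index are placeholder assumptions; SixthSense's actual tracking code is not reproduced here.

```python
# Bare-bones color-marker ("fiducial") tracking sketch with OpenCV; the HSV ranges,
# marker colors and camera index are illustrative assumptions, not SixthSense's code.
import cv2
import numpy as np

# One HSV range per fingertip marker color (values here are made up for the example).
MARKERS = {
    "index_right": ((40, 80, 80), (80, 255, 255)),    # green tape
    "thumb_right": ((100, 80, 80), (130, 255, 255)),  # blue tape
}

def track_markers(frame_bgr):
    """Return {marker_name: (x, y)} for every colored marker visible in the frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    positions = {}
    for name, (lo, hi) in MARKERS.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        moments = cv2.moments(mask)
        if moments["m00"] > 0:                         # marker found in this frame
            positions[name] = (int(moments["m10"] / moments["m00"]),
                               int(moments["m01"] / moments["m00"]))
    return positions

cap = cv2.VideoCapture(0)                              # camera worn on the pendant
ok, frame = cap.read()
if ok:
    print(track_markers(frame))                        # feed these positions into gesture logic
cap.release()
```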

The SixthSense prototype implements several applications that demonstrate the usefulness, viability and flexibility of the system. The map application lets the user navigate a map displayed on a nearby surface using hand gestures similar to those of multi-touch systems, zooming in, zooming out or panning with intuitive hand movements. The drawing application lets the user draw on any surface by tracking the movements of the index fingertip. SixthSense also recognizes the user’s freehand gestures (postures): for example, it implements a gestural camera that takes a photo of the scene the user is looking at when it detects the ‘framing’ gesture, and the user can then stop at any surface or wall and flick through the photos he or she has taken. The user can also draw icons or symbols in the air with the index finger, and the system recognizes these symbols as interaction instructions: drawing a magnifying-glass symbol takes the user to the map application, while drawing an ‘@’ symbol lets the user check mail. SixthSense also augments the physical objects the user is interacting with by projecting additional information onto them; a newspaper, for example, can show live video news, and dynamic information can be projected onto a regular piece of paper. Drawing a circle on the wrist projects an analog watch there.
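Once gestures are recognized, the mapping to applications can be as simple as a lookup table. The sketch below just mirrors the examples above; the gesture names are invented, and the recognizer that produces them is assumed to exist elsewhere.

```python
# Toy dispatcher from recognized gestures/symbols to applications. The gesture names
# are hypothetical labels a recognizer might emit; this is not SixthSense's actual code.
GESTURE_ACTIONS = {
    "framing": "take_photo",          # user frames the scene with both hands
    "magnifying_glass": "open_map",   # symbol drawn in the air with the index finger
    "at_symbol": "check_mail",
    "circle_on_wrist": "project_watch",
}

def dispatch(gesture_name):
    """Map one recognized gesture to the application it should launch."""
    return GESTURE_ACTIONS.get(gesture_name, "ignore")   # unknown gestures are ignored

print(dispatch("magnifying_glass"))   # -> open_map
```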

The current prototype system costs approximately $350 to build.


My First Post!

Hello, everybody. This is the first post on my new blog, and I am excited…

A few days ago a friend of mine and I had a conversation about contributing to the technical community and how neither of us has contributed much to the general knowledge base in the last several years. As a result of that conversation, I’ve decided to start a blog of the random technical activities I perform, both to catalog them and to help those who might encounter problems similar to mine.

In the future I will use this blog to talk about stuff that interests me.

Thanks for making this extremely pleasant; I hope the rest of my experience continues to be as simple and intuitive.

Thanks for coming!

Happy New Year!!