Some years ago, Oblong amazed the world with its Minority Report-style interactions – in fact, Oblong's founder John Underkoffler consulted on the movie's interaction and digital technology (have you seen his TED talk?).
Not much later, Microsoft unveiled Project Natal, received with a mix of skepticism and amazement by the public:
that was, until its commercial product arrived: the Kinect.
From the beginning the Kinect was a game changer – not only because of the technology it featured, its possibilities and its remarkably low price, but above all because of the unexpected movement it generated:
Coders and creatives gathered to break it open and access its data; prizes were awarded to the first to hack the Kinect; whole 'hacker' and creative-computing communities dived into its box until they got the hang of it – and with that came a million new applications (far more, and across a far broader spectrum, than Microsoft could ever have conceived).
Without delay, developers started to release and open-source applications and unexpected uses of this technology, hacking away at the skeleton data (with projects such as OSCeleton).
Only one year later, the Art&&Code conference gathered dozens of creative coders, designers and developers to show off their accomplishments and exchange knowledge and ideas on Kinect-based art and interaction.
After that, the market for depth cameras and gestural hardware devices just kept growing – Asus and Panasonic, among others, got into the game.
Asus presented a smaller depth-sensing camera (the Xtion), and Panasonic put out models that could cope much better with direct sunlight (a limitation of many other depth cameras, whose IR-based sensing is washed out by the IR present in sunlight).
With each release, more and more novel uses of these devices appeared: new ways of making movies, gaming interfaces, and the rise of gestural interaction beyond the multitouch screen.
In the past months new game changers have appeared. Leap Motion, while still only in the hands of a few developers, is already amazing and impressing audiences with its hand and finger tracking abilities. We're looking forward to it.
Intel joined the game last year, pioneering sensor fusion as a developer package by offering a multimodal sensing kit and SDK.
Intel's Perceptual Computing group launched a camera kit allowing near-range interaction and hand and finger detection. In our view, though, the main feature is the wide range of sensing capabilities offered by the SDK: camera-based sensing (hand and finger tracking as mentioned, face recognition and feature tracking, and 2D and 3D object recognition), speech recognition and eye tracking! (Have you seen our hand and finger tracking audio synth demo?)
Even more remarkably, while still in its beta version, Intel opened the SDK to the developer and creative-computing worlds, featuring demos and example projects for all the major creative coding frameworks (openFrameworks, Processing, Unity and Cinder), complemented by the Creative Interactive Gesture Camera Developer Kit, a Creative Labs-branded camera featuring 720p RGB and QVGA depth resolutions plus dual-array microphones.
In the meantime, PrimeSense (in case you don't know – they're the inventors of the Kinect's depth-sensing technology) did not stand still and has already announced Capri, a very small version of their depth-sensing technology ready to be embedded in portable devices.
The future? We have no idea! But we're excited about what our interactions with digital systems might become, and we keep looking out for novelties – such as Microsoft's SoundWave, a technology that can capture your gestures without using a camera: