Monday, April 26, 2010
YDreams has been working on natural user interfaces (NUI) for around 9 years. We have primarily used web cameras for the detection of people and objects, as they are non-intrusive and provide a very rich, multi-purpose sensor signal.
In the beginning, each new application was a copy-paste of source code from previous projects, plus all the customization code and a few new features. As you can imagine, maintenance was a nightmare. Very early on, we felt the need for a reusable development framework: it would let us develop applications in less time, make them more robust, and leave us time to keep innovating. As a result, we have successfully deployed hundreds of applications for our customers, which you can find in museums, stores, events and movie theaters.
The resulting framework also lets us share knowledge across the development process. We have great people who are very good at different areas of expertise, but we cannot have all of them working on every project; their knowledge is added to the framework so that others can use it. It includes multi-threading, image processing, tracking, real-time 3D graphics and physics, artificial intelligence, FreeFrame plugins, Flash integration into 3D graphics, USB, FireWire and IP (Ethernet) video cameras, Microsoft Surface support, and more.
The platform is used in almost all our interactive applications and was recently used on the robots we created for Santander. It allowed us to simulate their behaviors in 3D even before we had the physical robots: the exact same code that ran in the simulator is now running on the robots themselves.
We recently partnered with Canesta and added support for their time-of-flight depth-sensing video cameras. These cameras measure, for every pixel, the distance from the camera to the real object it sees.
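To illustrate what a per-pixel depth image gives you: combined with the camera's intrinsic parameters, each depth pixel can be back-projected into a 3D point in camera space. This is a minimal sketch of that standard pinhole-camera back-projection, not code from our framework; the function name and the example intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (in meters) into 3D
    camera-space points using a pinhole camera model.

    depth: (h, w) array of distances along the optical axis.
    fx, fy: focal lengths in pixels; cx, cy: principal point.
    Returns an (h, w, 3) array of (x, y, z) points.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Tiny example: a 2x2 depth map where every pixel is 1 m away.
pts = depth_to_points(np.ones((2, 2)), fx=100.0, fy=100.0, cx=0.5, cy=0.5)
```

The resulting point cloud is what makes true 3D interaction possible: instead of a flat silhouette, the application knows where each part of the user's body is in space.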
We already had interactive applications combining the real and the virtual in real time. Now the virtual content is also registered in 3D, fully meeting Azuma's definition of augmented reality.
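One well-known benefit of 3D registration with real depth data is correct occlusion: a virtual object can pass behind a real one. A common way to achieve this is a per-pixel depth test between the real depth map and the rendered virtual depth buffer. The sketch below shows that general technique; it is not taken from our framework, and all names in it are illustrative.

```python
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Composite a rendered virtual layer over a real camera image
    with per-pixel occlusion.

    A virtual pixel is shown only where it is closer to the camera
    than the real scene; use np.inf in virt_depth where there is no
    virtual content. Depths are in the same units (e.g. meters).
    """
    mask = virt_depth < real_depth   # virtual surface in front of real
    out = real_rgb.copy()
    out[mask] = virt_rgb[mask]
    return out

# Example: a 1x2 image. Pixel 0: real wall at 2 m, virtual object at
# 1 m (virtual wins). Pixel 1: real hand at 0.5 m occludes the virtual
# object at 1 m (real wins).
real_rgb = np.zeros((1, 2, 3), dtype=np.uint8)          # black camera image
virt_rgb = np.full((1, 2, 3), 255, dtype=np.uint8)      # white virtual layer
real_d = np.array([[2.0, 0.5]])
virt_d = np.array([[1.0, 1.0]])
out = composite(real_rgb, real_d, virt_rgb, virt_d)
```

Without per-pixel real depth, this kind of occlusion is impossible and virtual objects always float in front of the user, breaking the illusion.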
This is just one more feature among the many in the framework. Creating the demo was very simple. Its objective was to show the capabilities of the camera and to run our first usability tests: we wanted to know how users would perceive the “invisible” 3D objects and what types of interaction are possible. It has been a big success with users.
Visit our booth at ARE2010 to try this demo yourself and get more details on our framework.