What kind of apps could be created for a smartphone that was optimized to detect the three-dimensional space around it? That’s what Google wants to find out.
The Advanced Technology and Projects group at Google issued an invitation on Thursday for developers’ proposals for applications built for this experimental Project Tango device. The company said it has 200 prototype dev kits, to be distributed by the middle of next month, for “partners who will push the technology forward and build great user experiences on top of this platform.”
The Tango phone uses custom hardware and software that take a quarter of a million measurements of 3D space every second. A specialized depth sensor, motion-tracking camera, and vision-processing system complement the standard gyroscope. Some have compared the prototype to a portable version of Microsoft’s popular Kinect controller, which can read hand gestures in space.
In a posting on the Project Tango blog, the company noted that humans rely on visual cues every day to navigate the world. “The goal of Project Tango,” it said, “is to give devices a human-scale understanding of space and motion.”
Google said its Tango team has spent the past year working with universities, research labs, and industrial partners in nine countries to gather research in robotics and computer vision, all for the purpose of “concentrating that technology into a unique mobile phone.”
The Android-based prototype phone has a five-inch display, and its sensors and software update its position and orientation in real time to build a single 3D model of the surrounding space. Development APIs expose positioning, orientation, and depth data, and apps can be written in Java, C, and C++, or with the Unity game engine.
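To give a sense of what consuming that positioning data might look like, here is a minimal, self-contained Java sketch. It is purely illustrative: the `Pose` class, `PoseListener` callback, and the simulated update loop are hypothetical stand-ins, not part of any actual Tango SDK.

```java
import java.util.ArrayList;
import java.util.List;

public class PoseDemo {
    // Hypothetical 6-degree-of-freedom pose: position plus orientation quaternion.
    static class Pose {
        final double x, y, z;          // position in meters
        final double qx, qy, qz, qw;   // orientation quaternion
        Pose(double x, double y, double z,
             double qx, double qy, double qz, double qw) {
            this.x = x; this.y = y; this.z = z;
            this.qx = qx; this.qy = qy; this.qz = qz; this.qw = qw;
        }
    }

    // Hypothetical callback an app might register for pose updates.
    interface PoseListener { void onPoseAvailable(Pose pose); }

    // Straight-line distance between the first and last recorded poses.
    static double distanceTraveled(List<Pose> path) {
        Pose a = path.get(0), b = path.get(path.size() - 1);
        double dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        List<Pose> path = new ArrayList<>();
        PoseListener listener = path::add;
        // Simulate the device moving forward 10 cm per update along the x axis.
        for (int i = 0; i < 5; i++) {
            listener.onPoseAvailable(new Pose(0.1 * i, 0, 0, 0, 0, 0, 1));
        }
        System.out.printf("distance traveled: %.1f m%n", distanceTraveled(path));
    }
}
```

A real device would fire such callbacks hundreds of times per second from its sensor fusion pipeline; an app would accumulate the poses into a trajectory or feed them to a 3D model of the room.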
“These experimental devices,” Google noted on the blog, “are intended only for the adventurous and are not a final shipping product.”
Possible use cases suggested by the company include knowing the exact dimensions of a home before going furniture shopping, or directions that continue inside a large building, all the way to a specific office.
Visually impaired users might use the device to help navigate on foot, and shoppers could be led to the exact shelf where a specific product sits in a super-size store. There are also, of course, countless gaming and simulation possibilities.
Avi Greengart, an analyst with industry research firm Current Analysis, told us that if Google does not intend to release a line of such devices, this kind of technology could be sold as an add-on toolset. On the other hand, he said, if it proves popular, some or all of the technology could become part of a smartphone’s standard equipment.
While the Google use cases all seemed possible, Greengart discounted the idea that this Kinect-like capability could be used to allow a more complete navigation of a smartphone via in-the-air gestures. He cited issues with current gestural technology, such as Leap Motion’s, including gesturing in a confined space, arm fatigue and a lack of standardized motions.