If you can’t imagine working on your computer without your mouse and keyboard, you might be in for a surprise. Major technology leaders and gutsy startups are exploring the possibilities of turning the hands and limbs of computer users into digital controllers. Simply put, this new type of interface uses the operator’s gestures to manage computers (and other devices) rather than mouse clicks, keystrokes or taps on a display screen. Where is this headed, and who will benefit the most? Initial response suggests that gesture-control technology is attracting interest across many business sectors.
If you think about it, a version of this technology has been with us since Nintendo launched the Wii, whose games track the large body movements used in dance, sports and exercise programs. Medical science has been quick to adopt similar applications. Microsoft’s Kinect, which tracks the movement of an entire body, has been around for about six months, and planned updates will allow users to pinpoint specific areas like the joints or the head and neck. Kinect already lets surgeons view medical imagery in 3D while operating, without touching anything. Since this technology began to appear in 2010, companies like Google and Apple have heated up the race to bring gesture-control applications to the personal computer.
Industry experts believe that this will not only make it easier for computer users to perform various tasks, but will also enable them to create accurate 3D models. In addition to doctors and physical therapists, professionals in many other fields stand to benefit from this technology. If these applications become the wave of the future, the shift will affect the manufacture of televisions, computers, hand-held devices and other types of computer hardware. The larger companies have not unveiled their strategies, but experts believe that some well-known names have already secured patents on gesture-based inventions and are looking at ways to link them to voice-activation technology. Google recently provided a glimpse of one of its prototypes: specially modified glasses that combine voice recognition, gestures and sensors to give users information about their surroundings.
An up-and-comer in the business, Leap Motion recently unveiled its hand-gesture technology. Seeking a way to create and manipulate 3D models without the complications of a mouse or touch screen, Leap’s chief technology officer built tools that allow users to draw a 3D image accurate to within a fraction of a millimeter. The product that Leap Motion introduced at the end of May was the result of four years of research and prototype development. Skeptics who wondered how the Leap Motion team managed to deliver such high fidelity were told that this new method of motion sensing uses infrared LEDs and cameras to track the user’s fingers. Further technical detail won’t be available until Leap’s many patent applications are finalized.
It might be a little while before Leap Motion’s gesture-based controls are embedded in computers, laptops, tablets and smart phones, but a free-standing version will be available in the coming months. At the end of May, the company started taking orders for the device ($69.99), which aside from being inexpensive is also fast and small enough to fit easily in the palm of the hand. It uses just 1 to 2 percent of a CPU’s capacity and is compatible with any machine that has touch drivers (touch screens or track pads).
Watch for this product and others like it. Not all our interactions with computers require gesture-aided technology, but new interfaces like this often spur creativity throughout the technology sector.