We strain our eyes, cramp our necks, and destroy our hands trying to interact with computers on their terms. At the extreme, we strap on devices and weigh ourselves down with cables trying to re-create a sense of place inside the machine, while cutting ourselves off from the world and people around us.
The alternative is to make the real environment responsive to our actions. When computers possess the same perceptual competencies that we use to communicate and interact, remarkable things happen: computers disappear into the environment, and the things we do naturally become the interface. Interactions with the environment that were previously meaningful only to us become meaningful to the environment as well.
My doctoral work has advanced the state of the art in perceptual technology, giving computer interfaces deeper perceptual sophistication. It has included significant work on computer vision technologies and on a large number of human-computer interface systems that use them, as well as work with speech perception, wearable technology, and other sensing modalities.
The Dyna system, the practical embodiment of my thesis work, combines sophisticated models of the dynamics and kinematics of the human body with real-time statistical vision techniques to create a system that can recover from ambiguities in the observations that cause similar systems to fail, making perceptual interfaces applicable to a wider audience. At the same time, the intimate combination of observations and context creates new representations that provide a powerful advantage over the ad hoc feature sets previously employed in understanding human motion.
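To illustrate the core idea of fusing a predictive dynamic model with noisy observations, consider a deliberately minimal sketch: a linear-Gaussian (Kalman-filter-style) tracker for a single point feature. This is only an illustration of the principle under a strong simplifying assumption; the actual system uses far richer nonlinear models of full-body dynamics and kinematics, and the constants below (noise covariances, time step) are hypothetical.

```python
import numpy as np

# Hypothetical 1-D illustration: a feature moving with roughly constant
# velocity, tracked from noisy position observations. The dynamic model
# predicts forward in time; each observation then refines the prediction.
dt = 1.0
F = np.array([[1.0, dt],        # state transition over [position, velocity]
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])      # we observe position only
Q = np.eye(2) * 1e-3            # process noise covariance (assumed)
R = np.array([[0.5]])           # observation noise covariance (assumed)

x = np.array([0.0, 1.0])        # initial state: position 0, velocity 1
P = np.eye(2)                   # initial state covariance

def step(x, P, z):
    # Predict from the dynamic model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Fuse the prediction with the noisy observation z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ (z - H @ x_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# When an observation is missing or ambiguous, the dynamic prediction
# alone carries the track instead of letting it fail outright.
for z in [np.array([1.1]), np.array([1.9]), None, np.array([4.2])]:
    if z is None:
        x, P = F @ x, F @ P @ F.T + Q   # no measurement: coast on dynamics
    else:
        x, P = step(x, P, z)
```

The point of the sketch is the division of labor: the dynamic model supplies context that constrains what the next observation can plausibly mean, so a dropped or ambiguous measurement degrades the estimate gracefully rather than breaking the tracker.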