[Books | Journals | Conferences | Talks]
The Confluence of Vision and Control (with D. Kriegman and A.S. Morse, Eds.). Springer-Verlag, New York, 1998.
Task-Directed Sensor Fusion and Planning. Kluwer Academic Publishers, Boston, 1990.
Fast and Globally Convergent Pose Estimation From Video Images (with C.P. Lu and E. Mjolsness). IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(6): pp. 610-622, 2000. (334K pdf)
What Tasks Can Be Performed with an Uncalibrated Stereo Vision System? (with J. Hespanha, Z. Dodds, and A.S. Morse) The International Journal of Computer Vision, 35(1): pp. 65-85, Nov. 1999. (329K pdf)
Incremental Focus of Attention for Robust Vision-Based Tracking (with K. Toyama), The International Journal of Computer Vision, 35(1): pp. 45-63, Nov. 1999. (424K pdf)
Efficient Region Tracking With Parametric Models of Geometry and Illumination (with P. Belhumeur), IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(10), pp. 1125-1139, 1998. (1.7M pdf)
The XVision System: A General-Purpose Substrate for Portable Real-Time Vision Applications (with K. Toyama). Computer Vision and Image Understanding, 69(1), pp. 23-37, 1998. (574K pdf)
A Modular System for Robust Hand-Eye Coordination Using Feedback from Stereo Vision. IEEE Transactions on Robotics and Automation. 13(4) pp. 582-595, 1997. (435K pdf)
A Tutorial Introduction to Visual Servo Control (with S. Hutchinson and P. Corke). IEEE Transactions on Robotics and Automation, 12(5) pp. 651-670, 1996. (2.1M pdf)
Online Computation of Exterior Orientation with Application to Hand-Eye Calibration (with C.P. Lu and E. J. Mjolsness). Mathematical and Computer Modelling, 24(5), pp. 121-143, 1996.
Robot Feedback Control Based on Stereo Vision: Towards Calibration-Free Hand-Eye Coordination (with W. Chang and A.S. Morse). IEEE Control Systems Magazine, 15(1), pp. 30-39, 1995. (1M compressed postscript)
Task-Directed Computation of Qualitative Decisions from Sensor Data. IEEE Transactions on Robotics and Automation, 10(4), pp. 415-429, 1994.
Real-Time Vision-Based Robot Localization (with S. Atiya). IEEE Transactions on Robotics and Automation, 9(6), pp. 785-800, 1993.
Computational Methods for Task-Directed Sensor Data Fusion and Sensor Planning (with M. Mintz). International Journal of Robotics Research, 10(4), pp. 285-313, 1991.
Robust Linear Rules for Nonlinear Systems. In J.K. Aggarwal, editor, Multisensor Fusion for Computer Vision, Springer-Verlag, 1993.
Automatic Sensor Search and Positioning for Geometric Tasks (with M. Mintz). In S. Chen, editor, Recent Advances in Spatial Reasoning, Ablex, 1990.
Selected Conference Papers:
Toward Domain-Independent Navigation: Dynamic Vision and Control (with D. Kriegman, O. Ben-Shahar, and A. Georghiades). To appear in the Proceedings of the 1998 IEEE Conference on Decision and Control (CDC'98).
Joint Probabilistic Techniques for Tracking Multi-Part Objects (with C. Rasmussen). In CVPR'98
Dynamic Sensor Planning in Visual Servoing (with E. Marchand). In the proceedings of the 1998 IEEE International Conference on Robotics and Automation.
What Can be Done With an Uncalibrated Stereo System? (with J. Hespanha and Z. Dodds). In the proceedings of the 1998 IEEE International Conference on Robotics and Automation.
Task Re-Encoding in Vision-Based Control Systems (with W-C. Chang, J. P. Hespanha and A.S. Morse). In the Proceedings of the 1997 IEEE Conference on Decision and Control.
If At First You Don't Succeed .... (with K. Toyama). In the Proceedings of the AAAI Conference on Artificial Intelligence, pp. 3-9, 1997.
A Color Interest Operator for Landmark-based Navigation (with Z. Dodds). Proceedings of the AAAI Conference on Artificial Intelligence, pp. 655-660, 1997.
Image-based Prediction of Landmark Features for Mobile Navigation (with D. Kriegman, E. Yeh and C. Rasmussen). In the Proceedings of the International Conference on Robotics and Automation, pp. 1040-1046, IEEE Computer Society Press, 1997.
Modeling and Control for Mobile Manipulation in Everyday Environments (with W. Feiten, B. Magnussen, J. Bauer and K. Toyama). In the proceedings of the 1997 ISRR. (910K compressed postscript)
Preliminary Results on Grasping With Vision and Touch (with J. Son, R. Howe, and J. Wang). In the 1996 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '96), Nov. 1996. (533K compressed postscript)
Robot Navigation Using Image Sequences (with C. Rasmussen). Proceedings of the AAAI Conference on Artificial Intelligence, pp. 938-943, 1996. (1.0M compressed postscript)
Incremental Focus of Attention for Robust Visual Tracking (with K. Toyama). Proceedings of the 1996 IEEE Conference on Computer Vision and Pattern Recognition, pp. 189-195, 1996. (1.6M compressed postscript).
Real-Time Tracking of Image Regions with Changes in Geometry and Illumination, (with P. Belhumeur) Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 403-410, 1996.
XVision: Combining Image Warping and Geometric Constraints for Fast Visual Tracking (with K. Toyama). Proceedings of the Fourth European Conference on Computer Vision, pp. 507-517, 1996.
SERVOMATIC: A Modular System for Robust Positioning Using Stereo Visual Servoing (with K. Toyama and J. Wang). Proceedings of the International Conference on Robotics and Automation, pp. 2636-2643, 1996. (875K compressed postscript)
A Calibration-Free, Self-Adjusting Stereo Visual Control System (with W.C. Chang and A.S. Morse). Proceedings of the 13th IFAC World Congress, pp. 343-348, 1996. (117K compressed postscript)
A "Robust" Convergent Visual Servoing System (with D. Kim, A. Rizzi, D. Koditschek). In Proceedings of the International Conference on Intelligent Robots and Systems, Vol. I, pp. 348-353, 1995.
The "XVision" System: A General Purpose Substrate for Real-Time Vision-Based Robotics. In Proceedings of the Workshop on Vision for Robotics, pp. 56-63, 1995.
Calibration-Free Visual Control Using Projective Invariance. In Proceedings of the International Conference on Computer Vision, pp. 1009-1015, 1995. (1.3M compressed postscript)
Feature-Based Visual Servoing and its Application to Telerobotics (with G. Grunwald and G. Hirzinger). In Proceedings of the 1994 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 164-171. IEEE Computer Society Press, Sept. 1994.
Real-Time Feature Tracking and Projective Invariance as a Basis for Hand-Eye Coordination. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 533-539. IEEE Computer Society Press, June 1994.
Gregory D. Hager and Kentaro Toyama
In this article, we describe XVision, a modular, portable framework for visual tracking. XVision is designed as a programming environment for real-time vision that provides high performance on standard workstations outfitted with a simple digitizer. XVision consists of a small set of image-level tracking primitives and a framework for combining those primitives into complex tracking systems. Efficiency and robustness are achieved by propagating geometric and temporal constraints down to the feature-detection level, where image warping and specialized image processing are combined to perform feature detection quickly and robustly.
Over the past several years, we have used XVision to construct several vision-based hand-eye and mobile robotic systems. We describe some of the lessons learned from these experiences, and illustrate how useful, robust tracking systems can be built by combining a few basic primitives with appropriate task-specific constraints.
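The compositional idea in the abstract can be caricatured in a few lines. The class names and the constraint-propagation scheme below are illustrative assumptions in Python, not XVision's actual C++ interface: a composite tracker pushes a geometric constraint (here, that two edges meet at a corner) back down to its image-level primitives.

```python
import numpy as np

class EdgeTracker:
    """Image-level primitive (hypothetical): tracks a feature position by
    accumulating locally measured offsets. In a real system the offset would
    come from warping the image to a canonical frame and running 1-D edge
    detection there."""
    def __init__(self, position):
        self.position = np.asarray(position, dtype=float)

    def update(self, measured_offset):
        self.position = self.position + measured_offset
        return self.position

class CornerTracker:
    """Composite tracker (hypothetical): constrains two edge trackers to
    agree on a common corner point."""
    def __init__(self, edge1, edge2):
        self.edges = (edge1, edge2)

    def update(self, offsets):
        positions = [e.update(o) for e, o in zip(self.edges, offsets)]
        # Geometric constraint propagated downward: both primitives are
        # snapped to the mean of their estimates so the corner stays coherent.
        corner = np.mean(positions, axis=0)
        for e in self.edges:
            e.position = corner.copy()
        return corner
```

The point of the sketch is the direction of information flow: the composite imposes structure on the primitives rather than merely averaging their outputs after the fact.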
Gregory D. Hager
These control laws have been integrated into a system that performs tracking and control on a single processor at real-time rates. Experiments with this system have shown that it is extremely accurate, and that it is insensitive to camera calibration error. The system has been applied to a number of example problems, showing that modular, high precision, vision-based motion control is easily achieved with off-the-shelf hardware.
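Control laws of the kind described above are commonly written in the image-based visual servoing form v = -λ L⁺ (s - s*), where L stacks the interaction matrices of the tracked point features. The sketch below is a generic textbook formulation, not the paper's exact modular controller; the function names and gain are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction (image Jacobian) matrix of a normalized image point
    (x, y) at depth Z, relating camera velocity screw to feature motion."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def servo_velocity(features, goals, depths, gain=0.5):
    """Camera velocity screw v = -gain * pinv(L) @ (s - s*) driving the
    observed features toward their goal positions."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(goals)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

Because the error is measured in the image, moderate errors in the depth estimates and camera parameters only perturb the convergence path, not the fixed point, which is one way to understand the insensitivity to calibration error reported above.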
S. Hutchinson, Gregory D. Hager and P. Corke
G. Hager, W-C. Chang and A.S. Morse.
G. Hager
In this article, recent work on projective geometry as applied to vision is used to extend this paradigm in two ways. First, it is shown how results from projective geometry can be used to perform online calibration. Second, results on projective invariance are used to define setpoints for visual control that are independent of viewing location. These ideas are illustrated through a number of examples and have been tested on an implemented system.
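The viewpoint independence exploited here rests on projective invariants such as the cross ratio, which any projective transformation preserves. A minimal numerical check (illustrative code, not taken from the paper), using scalar parameters of four collinear points and a 1-D projective map:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points, given as scalar parameters
    along the line; invariant under projective transformations."""
    return ((a - c) * (b - d)) / ((b - c) * (a - d))

def homography_1d(t, h):
    """Apply a 1-D projective map t -> (h00*t + h01) / (h10*t + h11)."""
    return (h[0, 0] * t + h[0, 1]) / (h[1, 0] * t + h[1, 1])
```

A setpoint expressed in terms of such invariants keeps the same value from every viewing location, which is what allows visual goals to be defined without knowing where the cameras are.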
G.D. Hager and P.N. Belhumeur
K. Toyama and G.D. Hager
Implemented IFA systems are extremely robust to most common types of temporary visual disturbance. They resist minor visual perturbations and recover quickly after full occlusions, illumination changes, major distractions, and target disappearances. Analysis of the algorithm's recovery times is supported by simulation results and experiments on real data. In particular, examples show that recovery time after lost tracking depends primarily on the number of objects visually similar to the target in the field of view.
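The layered recover-and-refine behavior can be sketched as a simple controller that climbs toward precise tracking on success and falls back toward broad search on failure. The layer interface and switching rule below are simplifying assumptions; the published IFA framework distinguishes searchers from trackers with richer semantics.

```python
class IFAController:
    """Sketch of incremental focus of attention (hypothetical interface).

    layers[0] is the broadest, cheapest searcher; layers[-1] is the most
    precise tracker. Each layer is a callable taking the current frame and
    returning True on success."""
    def __init__(self, layers):
        self.layers = layers
        self.level = len(layers) - 1  # start at full-precision tracking

    def step(self, frame):
        ok = self.layers[self.level](frame)
        if ok and self.level < len(self.layers) - 1:
            self.level += 1   # regain precision one layer at a time
        elif not ok and self.level > 0:
            self.level -= 1   # fall back to a broader search layer
        return self.level
```

Under this scheme, recovery time after a disturbance is governed by how long the search layers take to re-acquire the target, consistent with the observation above that recovery depends on the number of target look-alikes in view.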
J. Hespanha, Z. Dodds, G.D. Hager and A.S. Morse