Zachary Pezzementi :: Research

Projects | Publications | Code

My research focuses on novel applications of automated sensing to both fully automated and human-cooperative robotic systems. Recent projects work toward modeling, analyzing, and providing guidance for robotic surgery, manipulation, and exploration tasks, using vision and touch sensing. My thesis work focuses on object recognition using array-type tactile force sensors.


Human Detection and Tracking in Agriculture

Building on the tractor automation work below, we developed a large-scale dataset and benchmark for evaluating person detection in off-road environments, with a specific focus on agriculture. The dataset includes nearly 100k labeled frames of stereo video with GPS localization. We evaluated several leading person detection approaches and presented one of our own as well.

See the NREC project page for details.
Human Detection Examples

Orchard Tractor Automation

We developed a tractor system for autonomous operation in orange orchards, capable of performing tasks such as mowing and spraying under only remote operator supervision. The system is equipped with stereo cameras to detect obstacles in the vehicle's path and with additional cameras that provide context to the remote supervisor when an obstacle is detected. It has logged over a thousand kilometers of autonomous operation in a working orchard.

See the NREC project page for details.
Orchard Tractor

Manipulating and Perceiving Simultaneously

The goal of this project is to develop a system, consisting of a robotic hand equipped with tactile sensors, capable of autonomously exploring an environment and identifying previously encountered objects, manipulating unknown objects as necessary. Exploring an unknown object using solely haptic information requires advancing the state of the art in both object recognition and manipulation, as well as applying simultaneous localization and mapping techniques to the haptic domain. Our approach focuses first on adapting feature-based object recognition methods from computer vision to haptic object recognition.
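As a rough illustration of what a feature-based haptic recognition pipeline can look like, the sketch below uses a generic bag-of-features scheme: local descriptors extracted from tactile sensor array readings are quantized against a codebook, and objects are matched by comparing codeword histograms. This is a minimal, generic sketch under those assumptions, not the project's actual code, and all function names are hypothetical.

```python
import numpy as np

def quantize(descriptors, codebook):
    """Assign each local tactile descriptor to its nearest codeword."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

def bag_of_features(descriptors, codebook):
    """Histogram of codeword assignments, normalized to unit sum."""
    words = quantize(descriptors, codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def classify(query_hist, model_hists, labels):
    """Nearest-neighbor match against histograms of known objects."""
    dists = [np.linalg.norm(query_hist - h) for h in model_hists]
    return labels[int(np.argmin(dists))]
```

In practice the descriptors would come from tactile array images gathered during exploration, and the codebook from clustering descriptors across training objects.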

Schunk Anthropomorphic Hand

Visual Tracking of Articulated Objects

Many objects encountered in the real world can be described as kinematic chains of parts with roughly uniform appearance characteristics. We developed a GPU-accelerated method for tracking such objects in single- or multi-channel (e.g., stereo) video streams across diverse domains. In brief, the method models the appearance of each object part, renders a 3D model of the target object's geometry from each view, and measures the consistency of the resulting image with an appearance class probability map derived from the video images. It has been demonstrated in both surgical and generic settings.
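The consistency measurement above can be sketched as a per-pixel log-likelihood: for each pixel, look up the probability that its observed appearance belongs to the part class the renderer placed there, and sum the logs. This is a simplified sketch of that scoring idea only (no rendering or GPU code), with hypothetical names and array conventions.

```python
import numpy as np

def consistency_score(rendered_labels, class_prob_maps, eps=1e-9):
    """Log-likelihood of a rendered part-label image under per-pixel
    appearance class probabilities.

    rendered_labels: (H, W) int array, part class assigned to each pixel
                     by rendering the 3D model in the current pose.
    class_prob_maps: (C, H, W) array, P(class c | pixel appearance),
                     derived from the video image.
    """
    h, w = rendered_labels.shape
    # Pick, at each pixel, the probability of the class the render predicts.
    probs = class_prob_maps[rendered_labels,
                            np.arange(h)[:, None],
                            np.arange(w)[None, :]]
    return np.log(probs + eps).sum()
```

A pose optimizer would then adjust the kinematic chain's joint parameters to maximize this score across all camera views.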

A collage of visual tracking images

Virtual Fixtures for Human-Machine Cooperative Manipulation

We suggest that dynamics beyond the first order are important in a number of tasks in both open and minimally invasive surgery. In response, we have designed guidance virtual fixtures that depend not only on the tool's position but also on its velocity. These fixtures are intended to provide guidance for replicating motions, such as those of an expert surgeon demonstrating a procedure to a novice.

For more information, see the Human Machine Collaborative Systems overview.
A collage of virtual fixturing images

Surgical Modeling

We are interested in modeling and understanding the underlying structures in surgical motions. We would like to eventually use this understanding to create benchmarks for surgical skill evaluation, to develop methods for better surgical training, and to automate the documentation of surgeries for libraries.

See the Surgical Modeling project website for more details.
A suturing sample image

Retinal OCT Registration

Optical coherence tomography (OCT) is a non-invasive imaging modality analogous to ultrasound, but using light instead of sound. Registering pre-operative OCT images to the more familiar intra-operative fundus images enables precise localization of pathologies that would otherwise be invisible.
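Given matched landmark pairs between the two modalities, one standard way to register the images is a least-squares 2D similarity transform (scale, rotation, translation), e.g., via Umeyama's method. The sketch below illustrates that generic fitting step only; it is not our registration pipeline, and the function name is hypothetical.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform mapping 2D correspondences
    src -> dst, so that dst ~= scale * (R @ src) + t (Umeyama's method)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - mu_s, dst - mu_d
    cov = d.T @ s / len(src)                 # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    sign = np.sign(np.linalg.det(U @ Vt))    # guard against reflections
    D = np.diag([1.0, sign])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / s.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```

With the transform in hand, OCT-derived annotations can be mapped onto the fundus view (robust variants, e.g., RANSAC over the correspondences, are typically used in practice).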

An OCT montage

Peer-Reviewed Publications

The following pertains to all of the IEEE publications below:

© IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.




The following code was developed over the course of my Ph.D. research and seemed potentially useful to a wider audience, so I have released it under the GPLv3. Please refer to each package's documentation for detailed information.


This page first went online January 2007. Last updated 11/30/17. Copyright Zachary Pezzementi