Fall 2005

September 30, 2005

We address the problem of designing algorithms that self-stabilize while at the same time tolerating a fraction of permanent Byzantine failures. Byzantine nodes pose a special challenge: malicious nodes actively try to hamper stabilization while the system recovers from a corrupted transient state.

We present a distributed “Pulse Synchronization” protocol that aims to invoke regular and tightly synchronized pulses in such an environment, without assuming any external timing reference. The system may start in an arbitrary state, one in which the communication network may behave arbitrarily and any number of nodes may behave arbitrarily. The pulse synchronization algorithm eventually converges once the communication network resumes delivering messages within some bounded time d, provided the number of permanent Byzantine nodes, f, satisfies n >= 3f+1 for a network of size n. The attained pulse synchronization tightness is 3d, with a convergence time of O(f) communication rounds.
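The abstract states the resilience bound rather than the mechanism; as a quick numeric illustration, here is a minimal Python sketch (the helper names are ours, not the authors'):

```python
# Resilience condition from the abstract: convergence requires n >= 3f + 1,
# where n is the network size and f the number of permanent Byzantine nodes.

def max_byzantine(n: int) -> int:
    """Largest f tolerable under the n >= 3f + 1 bound."""
    return (n - 1) // 3

def may_converge(n: int, f: int) -> bool:
    """True if the stated precondition for convergence holds."""
    return n >= 3 * f + 1

for n in (4, 7, 10):
    print(n, max_byzantine(n), may_converge(n, max_byzantine(n)))
```

For example, a 10-node network tolerates at most f = 3 Byzantine nodes; once the precondition holds, the pulses of correct nodes come to lie within 3d of one another.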

In the talk we present a Clock Synchronization algorithm that takes advantage of the Pulse Synchronization protocol. We will also discuss a general scheme to convert a Byzantine algorithm into a self-stabilizing Byzantine algorithm, using the Pulse Synchronization protocol.

Distinguished Lecturer

October 7, 2005

Algorithmic methods from computer vision and machine learning are dramatically changing the practice of health care and the exploration of fundamental issues in neuroscience. By coupling knowledge of tissue response, atlases of normal anatomy, and statistical models of shape variation, these methods are used to build detailed, patient-specific reconstructions of neuroanatomical structure from MRI imagery. Such structural models can be automatically augmented with information about function (using fMRI), and about connectivity (using DT-MRI) to create detailed models of a patient’s brain. These models are routinely used for surgical planning – how to reach the target tumor with minimal damage to nearby critical structures; and for surgical navigation – guiding the surgeon to the target site rapidly and safely.

By combining these techniques with statistical models of population variation, the same methods can be used to investigate basic neuroscience questions: how do the shapes of subcortical structures differ between normal subjects and patients with a specific disease (such as schizophrenia or Alzheimer’s); how do these shapes change with development in children, or with the administration of pharmaceuticals; and how do physiological properties (such as the local structure of fiber orientation in white matter tracts) differ between populations? These computational methods provide a toolkit for exploring the structure and connectivity of neuroanatomical structures, in both normal subjects and diseased patients.

Distinguished Lecturer

October 20, 2005

The proliferation of networked embedded devices such as wireless sensors ushers in an entirely new class of computing platforms, and we need new ways to organize and program them. Unlike existing platforms, systems such as sensor networks are decentralized, embedded in the physical world, and interact with people. Computing, energy, and bandwidth resources are constrained and must be negotiated. Uncertainty, both within the system and about the environment, is a given. Many tasks require collaboration among devices, and the entire network may have to be regarded as a single processor.

We argue that the existing node-centric programming of embedded devices is inadequate and unable to scale up. We need new service architectures, inter-operation protocols, and programming models that are resource-aware and resource-efficient across heterogeneous devices ranging from extremely limited sensor motes to more powerful servers. I will supplement these discussions with concrete examples drawn from our own work and the work of others.

Speaker Biography: Feng Zhao is a Senior Researcher at Microsoft, where he manages the Networked Embedded Computing Group. He received his PhD in Electrical Engineering and Computer Science from MIT and has taught at Stanford University and Ohio State University. Dr. Zhao was a Principal Scientist at Xerox PARC and founded PARC’s sensor network research effort. He serves as the founding Editor-in-Chief of ACM Transactions on Sensor Networks, and has authored or co-authored more than 100 technical papers and books, including a recent Morgan Kaufmann book, Wireless Sensor Networks: An Information Processing Approach. He has received a number of awards, and his work has been featured in news media such as BBC World News, BusinessWeek, and Technology Review.

October 21, 2005

As powerful and affordable computers and sensors become virtually omnipresent, the prospects for building highly intelligent and convenient computing systems have never been better. Vision holds great promise for building advanced human-computer interaction systems. We investigate various techniques for integrating passive vision into different interaction environments.

First, we propose a novel approach to integrating visual tracking into a haptic system. Traditional haptic environments require the user to be attached to the haptic device at all times, even when force feedback is not being rendered. We design and implement an augmented reality system called VisHap that uses visual tracking to seamlessly integrate force feedback with tactile feedback and generate a “complete” haptic experience. The VisHap framework allows the user to interact naturally with combinations of virtual and real objects, thereby combining active and passive haptics. The framework’s flexibility and extensibility are promising: it supports many interaction modes and allows further integration with other augmented reality systems.

Second, we propose a new methodology for vision-based human-computer interaction called the Visual Interaction Cues (VICs) paradigm. VICs is based on the concept of sharing perceptual space between the user and the computer. Each interaction component is represented as a localized region in the image(s). We propose to model gestures based on the streams of extracted visual cues in the local space, thus avoiding the problem of globally tracking the user(s). Efficient algorithms are proposed to capture hand shape and motion. We investigate different learning and modeling techniques including neural networks, Hidden Markov Models and Bayesian classifiers to recognize postures and dynamic gestures.
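To make the paradigm concrete, here is a minimal Python sketch of a VIC-style component that watches only its own localized image region for a visual cue, rather than tracking the user globally. The region name and the crude color cue are illustrative assumptions of ours, not the authors' implementation:

```python
import numpy as np

class InteractionComponent:
    """One interface element, represented as a localized region in the image."""

    def __init__(self, name, x0, y0, x1, y1):
        self.name = name
        self.box = (x0, y0, x1, y1)   # the component's local image region

    def cue(self, frame: np.ndarray) -> float:
        """Fraction of 'skin-like' pixels inside this component's region
        (a deliberately crude BGR color test, for illustration only)."""
        x0, y0, x1, y1 = self.box
        patch = frame[y0:y1, x0:x1]
        skin = (patch[..., 2] > 95) & (patch[..., 2] > patch[..., 0])
        return float(skin.mean())

    def triggered(self, frame, threshold=0.3) -> bool:
        return self.cue(frame) > threshold

# Toy usage: simulate a hand entering a button's region.
button = InteractionComponent("ok-button", 100, 100, 140, 140)
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:140, 100:140] = (80, 120, 200)   # hand-colored pixels over the button
print(button.triggered(frame))              # True
```

The point of the design is that each component inspects only its own patch of the shared perceptual space, which sidesteps global user tracking entirely.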

Since gestures are in essence a language, with individual low-level gestures analogous to words in a conventional language, a high-level gesture language model is essential for robust and efficient recognition of continuous gestures. To that end, we have constructed a high-level language model that integrates a set of low-level gestures into a single, coherent probabilistic framework. In the language model, every low-level gesture is called a gesture word, and a composite gesture is a sequence of gesture words that are contextually and temporally constrained. We train the model via supervised or unsupervised learning techniques, and a greedy inference algorithm allows efficient online processing of continuous gestures. We designed a large-scale gesture experiment involving sixteen subjects and fourteen gestures. The experiment shows the robustness and efficacy of our system in modeling a relatively large gesture vocabulary across many users. Most users also found our gesture system comparable to, or more natural and comfortable than, a traditional mouse-based interface.
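To illustrate the flavor of greedy online decoding over gesture words, here is a minimal Python sketch; the gesture-word names, transition table, and scoring are hypothetical stand-ins, and the authors' actual probabilistic model is richer:

```python
from collections import defaultdict

# Hypothetical bigram model P(next_word | current_word) over gesture words.
transitions = defaultdict(dict)
transitions["point"]["select"] = 0.7
transitions["point"]["drag"] = 0.3
transitions["select"]["release"] = 1.0

def greedy_decode(observations, emission):
    """Greedily commit to the best gesture word at each step, weighting
    per-frame emission scores by the transition from the previous word."""
    prev, path = None, []
    for obs in observations:
        scores = {}
        for word, e_score in emission(obs).items():
            t = transitions[prev].get(word, 1e-6) if prev else 1.0
            scores[word] = e_score * t
        prev = max(scores, key=scores.get)   # greedy: no backtracking
        path.append(prev)
    return path

def toy_emission(obs):
    # Stand-in for a per-frame classifier: word -> likelihood.
    return obs

print(greedy_decode(
    [{"point": 0.9, "select": 0.1},
     {"point": 0.4, "select": 0.6},
     {"release": 0.8, "drag": 0.2}],
    toy_emission))   # -> ['point', 'select', 'release']
```

Because each step commits immediately instead of searching over whole sequences, this style of inference supports the online processing of continuous gesture streams that the abstract describes.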

Distinguished Lecturer

November 1, 2005

During the past five years we have seen a wealth of new cryptographic constructions based on an algebraic tool called “pairings.” These constructions give new key-management mechanisms that are often simpler than what is possible with traditional cryptography. They also lead to public-key cryptographic primitives better suited to bandwidth-constrained environments. This talk will survey some of these new constructions and pose a few open problems in this area. More specifically, we will present a recent broadcast encryption system and a short digital signature. The talk will be self-contained.
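For background not spelled out in the abstract: a pairing is a bilinear map between groups, and short signatures of the kind mentioned are typically built in the spirit of the Boneh-Lynn-Shacham (BLS) scheme. A sketch of the defining identity and the signature check, under those assumptions:

```latex
% Bilinear pairing e : G_1 \times G_2 \to G_T on groups of prime order q:
\[
  e(aP,\, bQ) \;=\; e(P, Q)^{ab} \qquad \text{for all } a, b \in \mathbb{Z}_q .
\]
% BLS-style short signature with secret key x and public key X = xG,
% where H hashes messages into the group:
%   Sign:    \sigma = x\,H(m)
%   Verify:  accept iff  e(\sigma, G) = e(H(m), X)
```

Since the signature is a single group element, it can be markedly shorter than a DSA-style signature at comparable security, which is what makes pairings attractive for bandwidth-constrained settings.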

Distinguished Lecturer

November 11, 2005

Point-based methods have a long history in graphics for rendering, but their use in modeling and simulation is more recent. Shape representations based on sampled points faithfully reflect several 3-D acquisition technologies, and point-based techniques can provide a flexible representation of geometry in situations where forming and maintaining a complete mesh is cumbersome or complex. This talk will briefly describe some current work in simulating wide-area contacts, large-scale deformation and fracture, collision detection, and qualitative shape analysis of point-sampled data, all carried out using meshless methods. The irregular and dynamic sampling these applications require creates new challenges and leads to methods with a distinctly more combinatorial and topological character.
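As one concrete flavor of a meshless operation on point-sampled data, here is a minimal sketch (our illustration, not the speaker's algorithms) that estimates a surface normal at each sample from its k nearest neighbors via PCA, with no mesh connectivity required:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points: np.ndarray, k: int = 8) -> np.ndarray:
    """points: (n, 3) surface samples; returns (n, 3) unit normal estimates."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)            # indices of k nearest samples
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs                    # 3x3 local covariance
        w, v = np.linalg.eigh(cov)             # eigenvalues in ascending order
        normals[i] = v[:, 0]                   # smallest-variance direction
    return normals

# Toy usage: noisy samples of the plane z = 0 should give normals near (0, 0, +-1).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       rng.normal(0, 0.01, 200)])
print(estimate_normals(pts)[:3])
```

The dependence on a spatial neighbor query rather than mesh connectivity is exactly what makes such methods robust to the irregular and dynamic sampling the talk highlights.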

Speaker Biography: Leonidas Guibas obtained his Ph.D. from Stanford in 1976, under the supervision of Donald Knuth. His main subsequent employers were Xerox PARC, MIT, and DEC/SRC. He has been at Stanford since 1984 as Professor of Computer Science, where he heads the Geometric Computation group within the Graphics Laboratory. He is also part of the AI Laboratory and the Bio-X Program. Professor Guibas’ interests span computational geometry, geometric modeling, computer graphics, computer vision, robotics, ad hoc communication and sensor networks, and discrete algorithms — all areas in which he has published and lectured extensively. At Stanford he has developed new courses in algorithms and data structures, geometric modeling, geometric algorithms, sensor networks, and biocomputation. Professor Guibas is an ACM Fellow.

Distinguished Lecturer

November 18, 2005

Wireless sensor networks (WSNs), composed of large numbers of small devices that self-organize, are being investigated for a wide variety of applications. Two key advantages of these networks over more traditional sensor networks are that they can be deployed dynamically and quickly, and that they can provide fine-grained sensing. Emergency response to natural or man-made disasters, detection and tracking, and fine-grained sensing of the environment are key examples of applications that can benefit from such WSNs. Current research on these systems is widespread. However, many of the proposed solutions are developed under simplifying assumptions about wireless communication and the environment, even though the realities of wireless communication and environmental sensing are well known, and many of the solutions are evaluated only by simulation. In this talk I describe a fully implemented system consisting of a suite of more than 30 synthesized protocols. The system supports a power-aware surveillance and tracking application running on 203 motes, evaluated in a realistic, large-area environment. Technical details and evaluations are presented for power management, dynamic group management, and various system implementation issues. Several illustrations of how real-world environments render some previous solutions unusable will also be given.

Speaker Biography: Professor John A. Stankovic is the BP America Professor in the Computer Science Department at the University of Virginia. He recently served as Chair of the department, completing two terms (8 years). He is a Fellow of both the IEEE and the ACM. He also won the IEEE Real-Time Systems Technical Committee’s Award for Outstanding Technical Contributions and Leadership. Professor Stankovic served on the Board of Directors of the Computing Research Association for 9 years. Before joining the University of Virginia, he taught at the University of Massachusetts, where he won an outstanding scholar award. He has also held visiting positions in the Computer Science Department at Carnegie Mellon University, at INRIA in France, and at the Scuola Superiore S. Anna in Pisa, Italy. He was Editor-in-Chief of the IEEE Transactions on Parallel and Distributed Systems and is a founder and co-Editor-in-Chief of the Real-Time Systems Journal. He was General Chair for ACM SenSys 2004 and will serve as General Chair for ACM/IEEE Information Processing in Sensor Networks (IPSN) 2006. His research interests are in distributed computing, real-time systems, operating systems, and wireless sensor networks. Professor Stankovic received his PhD from Brown University.