Fall 2010

September 9, 2010

Data privacy is a ubiquitous concern. It is an issue confronted by nearly every organization, from health care providers and the payment card industry to web commerce sites. Protecting data storage servers by securing the network perimeter is becoming increasingly difficult given the number of available attack vectors and the trend toward distributed data storage. Consequently, many enterprises are looking to enforce access control through encryption. Encrypting data reduces the problem of data privacy from protecting all stored data to protecting small secret keys. While current encryption systems provide a powerful security tool, they have fundamental limitations for realistic sharing of private data. In particular, there is an inherent gap between how we want to share data and our ability to express access policies in current encryption systems.

In this talk I will present a new concept called “functional encryption” that puts forth a new vision for how encryption systems should work. In functional encryption, a data provider directly expresses his data sharing policy during the encryption procedure itself. Likewise, a recipient will be able to decrypt and access data if and only if she possesses matching secret key credentials. By allowing a provider to encrypt directly, and eliminating the need to locate individual recipients, we can build much simpler systems. I will describe the challenges in realizing functional encryption systems as well as the techniques I have developed to overcome them. In addition, I will discuss work in bringing these methods to practice.
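To make the decrypt-iff-credentials-match contract concrete, here is a minimal and deliberately insecure Python sketch of the interface a functional (attribute-based) encryption scheme aims to expose; the attribute names, policy, and "encryption" are illustrative placeholders, not a real cryptographic construction.

```python
# Toy sketch of the *interface* a functional (attribute-based) encryption
# scheme exposes. The "encryption" here is a plain placeholder -- it provides
# no security and only illustrates policy-vs-credential matching.

def encrypt(message, policy):
    """Attach an access policy (a predicate over attribute sets) to the data."""
    return {"payload": message, "policy": policy}

def keygen(attributes):
    """Issue a secret key bound to a set of credential attributes."""
    return {"attributes": frozenset(attributes)}

def decrypt(ciphertext, key):
    """Recover the data iff the key's attributes satisfy the ciphertext policy."""
    if ciphertext["policy"](key["attributes"]):
        return ciphertext["payload"]
    raise PermissionError("credentials do not satisfy the access policy")

# Example policy: "cardiologists at hospital A, or any auditor" (made up).
policy = lambda attrs: ({"cardiology", "hospital-A"} <= attrs) or ("auditor" in attrs)
ct = encrypt("patient record", policy)

print(decrypt(ct, keygen({"cardiology", "hospital-A"})))   # succeeds
try:
    decrypt(ct, keygen({"radiology", "hospital-B"}))       # fails
except PermissionError as e:
    print("denied:", e)
```

The point of the sketch is that the data provider states the policy once, at encryption time, without ever enumerating or locating individual recipients.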

Speaker Biography: Brent Waters is an Assistant Professor at the University of Texas at Austin. Dr. Waters’ research interests are in the areas of computer security and applied cryptography. His work has focused on Identity-Based Cryptography, security of broadcast systems, and authentication of remote systems. He has authored award-winning and invited papers. He both publishes in and has served on the program committees of the top technical security venues (CRYPTO, Eurocrypt, the ACM Conference on Computer and Communications Security (CCS), and the IEEE Conference on Security and Privacy). Dr. Waters has been an invited speaker in industry and at research universities, including MIT, CMU, and Stanford. He was the keynote speaker on functional encryption at the 2008 NIST workshop on Identity-Based Encryption. Dr. Waters is a National Academy of Sciences Kavli Fellow and a recipient of the NSF CAREER award and a Sloan Research Fellowship.

September 14, 2010

Bringing together health scientists, computer scientists, software engineers, and client-centered business process analysis and re-engineering is both a challenge and an opportunity. MDLogix (Medical Decision Logic, Inc.) has made progress in this direction, with significant support from the Small Business Innovation Research (SBIR) program. We have built enterprise-scalable web applications for clinical research. One of the main applications, the Protocol Schema and Subject Calendar module, uses logic programming (Constraint Handling Rules) as a key component. We will describe the structure and operation of this module. The current web technology can also serve as a platform for research and innovation in many areas; we will present several examples.
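As a rough illustration of the kind of temporal reasoning a protocol calendar module must perform, the following Python sketch derives target dates and allowable windows for protocol visits from nominal day offsets. The protocol values and function names are invented for illustration; this is not the MDLogix Constraint Handling Rules implementation.

```python
from datetime import date, timedelta

# Toy protocol definition: each visit has a nominal day offset from enrollment
# and an allowable window (earliest/latest deviation in days). These numbers
# are made up for illustration; they are not an MDLogix schema.
PROTOCOL = [
    ("screening", 0, 0, 0),
    ("baseline",  7, -1, 1),
    ("week-4",   28, -3, 3),
    ("week-12",  84, -7, 7),
]

def schedule(enrollment, completed=None):
    """Propose target dates and windows for remaining visits.

    `completed` maps visit name -> actual date; here later windows stay
    anchored to enrollment, a simple stand-in for constraint propagation.
    """
    completed = completed or {}
    plan = []
    for name, offset, lo, hi in PROTOCOL:
        if name in completed:
            continue
        target = enrollment + timedelta(days=offset)
        window = (target + timedelta(days=lo), target + timedelta(days=hi))
        plan.append((name, target, window))
    return plan

for name, target, (earliest, latest) in schedule(date(2010, 9, 1)):
    print(f"{name:10s} target {target}  window {earliest} .. {latest}")
```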

Speaker Biography: Allen Y. Tien, MD, MHS: My strategic approach to contributing to health and care is to enhance and expand the development and application of health science informatics and information technologies. Our software implementation approach is highly client-centered in order to assure a high level of usability and utility. To lead and contribute to these areas of research and development, I have a unique combination of training, knowledge, and expertise that spans biostatistics, psychiatric epidemiology, clinical research, developmental neuroscience, public mental health research, computer science, and software engineering. I have been developing health information technology systems since 1987. The integrative and transformative potential of my contributions stems from an interdisciplinary conceptual framework I have developed over the past 20 years. In the first decade (1986-1997) I worked in two distinct areas of health research: 1) public mental health epidemiology, prevention, and services, and 2) clinical neuroscience. During that period (at Johns Hopkins), I worked on developing a multi-level integrative model for teaching students about the range of etiologic factors and interacting developmental processes for mental disorders. Following this logic, I worked to bring measures of cognitive function into community epidemiology and prevention research. In the following decade (1998-present), I made a major transition from academic health research and teaching to entrepreneurship, founding Medical Decision Logic, Inc. (“MDLogix”) with long-term visionary involvement in software purpose, architecture development, interface design, and evaluation. At the same time, I continue intellectual interaction and contributions as an Adjunct Associate Professor in the Division of Health Science Informatics (DHSI) at the Johns Hopkins School of Medicine; I participate in ongoing informatics and multi-level scientific conceptualization, grant preparation, and technology considerations with the Johns Hopkins Quality and Safety Research Group (QSRG) and with other MDLogix clients; and I provide ongoing service to the NIH community through peer review panels. In several important ways, the work carried out at MDLogix provides a unique foundation. These are: 1) tackling some difficult problems with prior Small Business Innovation Research (SBIR) support; 2) learning about the barriers to creating successful technology-based products for health research and practice; 3) engagement as a partner and vendor with leading academic medical centers; 4) creation of a state-of-the-art product system (MDLogix CRMS) and health science-based web technology development platform (MDLogix HSPF); and 5) collaborations and partnerships for business growth, sustainability, and innovation.

Anocha Yimsirawattana, PhD: I have worked in the information technology field since 1990, starting as a programmer and systems analyst. I was a faculty member in departments of computer science, mathematics, and electrical engineering in Bangkok, Thailand, and then in 2004 received my PhD in Computer Science with a dissertation on quantum computing. I joined MDLogix in 2005, working as a software architect and engineer. My main focus at MDLogix has been the architecture and implementation of the Protocol Schema and Subject Calendar (PSSC) module in the MDLogix Clinical Research Management System (CRMS). PSSC uses a Domain Specific Language (DSL) approach to define a temporal language for clinical research protocols, and a Constraint Handling Rules (CHR) engine to provide users with adaptive scheduling capabilities. I am interested in parallel/distributed computing, specifically languages and tools that allow developers to easily analyze, design, and implement parallel/distributed applications.


Distinguished Lecturer

September 28, 2010

How do you design user interfaces for an illiterate migrant worker? Can you keep five rural schoolchildren from fighting over one PC? What value is computing technology to a farmer earning a dollar a day? These kinds of questions are asked by the technical side of a multidisciplinary field called “information and communication technology for development” (ICT4D), in the expectation that computing and communication technologies can contribute to the socio-economic development of the world’s poorest communities.

In this talk, I’ll discuss the potential, as well as the limitations, of computer science as a research field to contribute to global development. The context will be MultiPoint, in which multiple mice plugged into a single PC allow multiple children to interact, thus reducing the per-child cost of PCs in schools.

Speaker Biography: Kentaro Toyama (http://www.kentarotoyama.org) is a visiting researcher in the School of Information at the University of California, Berkeley. Until 2009, Kentaro was assistant managing director of Microsoft Research India, which he co-founded in 2005. At MSR India, he started the Technology for Emerging Markets research group, which conducts interdisciplinary research to understand how the world’s poorest communities interact with electronic technology and to invent new ways for technology to support their socio-economic development. In 2006, he co-founded the IEEE/ACM International Conference on Information and Communication Technologies and Development (ICTD) to provide a global platform for rigorous academic research in this field, and he remains active on its advisory board. Prior to his time in India, Kentaro did computer vision and multimedia research at Microsoft Research in Redmond, WA, USA and Cambridge, UK, and taught mathematics at Ashesi University in Accra, Ghana. Kentaro graduated from Yale with a PhD in Computer Science and from Harvard with a bachelor’s degree in Physics.

Distinguished Lecturer

October 7, 2010

As a computer vision researcher, I believe that the advanced technologies of image motion analysis have great opportunities to help the rapid advancement of biological discovery and its transition into new clinical therapies. In collaboration with biomedical engineers, my group has been developing a system for analyzing time-lapse microscopy image sequences, typically from a phase-contrast or differential interference contrast (DIC) microscope, that can precisely and individually track a large number of cells while they undergo migration (translocation), mitosis (division), and apoptosis (death), and can construct complete cell lineages (mother-daughter relations) of the whole cell population. Such a capability for high-throughput spatiotemporal analysis of cell behaviors allows for “engineering individual cells” – directing the migration and proliferation of tissue cells in real time in Tissue Engineering, where cells are seeded and cultured with hormones to induce tissue growth.

The low signal-to-noise ratio of microscopy images, high and varying densities of cell cultures, topological complexities of cell shapes, and occurrences of cell divisions, touching and overlapping pose significant challenges to existing image-based tracking techniques. I will present the challenges, results, and excitement of the new application area of motion image analysis.
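As a side note on the bookkeeping behind the lineage construction described above, the toy Python sketch below shows one way to represent cell tracks and mother-daughter relations. It is only an illustrative data structure, not the group's tracking algorithm, which must associate detections across frames under exactly the low-SNR, high-density conditions just listed.

```python
# Toy illustration of the bookkeeping behind cell lineage construction
# (mother-daughter relations). This is not the authors' tracking algorithm;
# real systems must associate detections across frames under low SNR, dense
# cultures, and touching/overlapping cells.

class Cell:
    def __init__(self, cell_id, birth_frame, parent=None):
        self.id = cell_id
        self.birth_frame = birth_frame
        self.parent = parent            # None for cells present at frame 0
        self.children = []              # filled in when a mitosis is observed
        self.track = []                 # per-frame (x, y) centroids

def record_division(mother, frame, next_id):
    """A mitosis event ends the mother's track and starts two daughters."""
    daughters = [Cell(next_id, frame, mother), Cell(next_id + 1, frame, mother)]
    mother.children.extend(daughters)
    return daughters

# Example: one founder cell divides at frame 10; one daughter divides again.
founder = Cell(0, 0)
d1, d2 = record_division(founder, 10, next_id=1)
g1, g2 = record_division(d1, 25, next_id=3)

def print_lineage(cell, depth=0):
    print("  " * depth + f"cell {cell.id} (born frame {cell.birth_frame})")
    for child in cell.children:
        print_lineage(child, depth + 1)

print_lineage(founder)
```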

Speaker Biography: Takeo Kanade is the U. A. and Helen Whitaker University Professor of Computer Science and Robotics and the director of the Quality of Life Technology Engineering Research Center at Carnegie Mellon University. He is also the director of the Digital Human Research Center in Tokyo, which he founded in 2001. He received his doctoral degree in Electrical Engineering from Kyoto University, Japan, in 1974. After holding a faculty position in the Department of Information Science, Kyoto University, he joined Carnegie Mellon University in 1980, where he was the Director of the Robotics Institute from 1992 to 2001. Dr. Kanade works in multiple areas of robotics: computer vision, multimedia, manipulators, autonomous mobile robots, medical robotics, and sensors. He has written more than 350 technical papers and reports in these areas, and holds more than 30 patents. He has been the principal investigator of more than a dozen major vision and robotics projects at Carnegie Mellon. Dr. Kanade has been elected to the National Academy of Engineering and the American Academy of Arts and Sciences, and is a Fellow of the IEEE, a Fellow of the ACM, and a Founding Fellow of the American Association of Artificial Intelligence (AAAI). The awards he has received include the Franklin Institute Bower Prize, Okawa Award, C&C Award, Joseph Engelberger Award, IEEE Robotics and Automation Society Pioneer Award, and IEEE PAMI Azriel Rosenfeld Lifetime Accomplishment Award.

Student Seminar

October 14, 2010

Small animal research allows detailed study of biological processes, disease progression and response to therapy with the potential to provide a natural bridge to the clinical environment. The Small Animal Radiation Research Platform (SARRP) is a novel and complete system capable of delivering multidirectional (focal), kilo-voltage radiation fields to targets in small animals under robotic control using cone-beam CT (CBCT) image guidance.

This talk provides a complete overview of the SARRP and expands on the calibration and radiation delivery capabilities of the system. A novel technique for calibrating the treatment beam is presented, which employs an x-ray camera whose precise position need not be known. Different radiation delivery procedures enable the system to irradiate through a series of points representative of a complex shape. For the first time, the particularly interesting case of shell dose irradiation is addressed. The goal of this peripheral dose distribution is to deliver a high dose of radiation to the shape’s surface with minimal dose to its interior. This is achieved geometrically by creating a spherical shell through intersecting cylinders in the SARRP configuration. The ability to deliver a dose shell enables mechanistic research into how a tumor interacts with its microenvironment to sustain its growth and lead to its resistance or recurrence.
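One plausible way to see how a union of narrow cylindrical beams can concentrate dose on a spherical shell while sparing the core is the toy voxel calculation below, which assumes beam axes tangent to the target sphere. The geometry and numbers are illustrative assumptions, not the SARRP treatment-planning method.

```python
import numpy as np

rng = np.random.default_rng(0)
R, beam_radius, n_beams = 1.0, 0.12, 200   # assumed geometry, not SARRP parameters

# Voxel grid over [-1.5, 1.5]^3
ax = np.linspace(-1.5, 1.5, 41)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
dose = np.zeros(len(pts))

for _ in range(n_beams):
    n = rng.normal(size=3); n /= np.linalg.norm(n)                      # point R*n on the sphere
    t = rng.normal(size=3); t -= t.dot(n) * n; t /= np.linalg.norm(t)   # tangent direction
    a = R * n                                                           # beam axis: a + s*t
    v = pts - a
    dist = np.linalg.norm(v - np.outer(v.dot(t), t), axis=1)            # distance to the axis
    dose += (dist <= beam_radius)                                       # unit dose inside the beam

r = np.linalg.norm(pts, axis=1)
shell = np.abs(r - R) < 0.1
core = r < 0.5
print("mean dose on shell:", dose[shell].mean())
print("mean dose in core: ", dose[core].mean())   # ~0: tangent beams never cross the core
```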

Distinguished Lecturer

October 26, 2010

Two decades of transistor scaling in density, speed, and energy (a.k.a. Moore’s Law) have enabled microprocessor architects to deliver a 1000-fold performance improvement. This dramatic improvement has enabled computing as we know it today – tiny, powerful, inexpensive, and therefore ubiquitous. Current projections suggest continued scaling in density, but only diminishing improvements in transistor speed and energy. In this era of energy-constrained performance, the computing industry is engaged in a rapid, broad-based shift toward increasing parallelism (multicore), from the largest data centers to small mobile devices.

In the new technology scaling landscape, more narrowly specialized designs (heterogeneity) are increasingly attractive, but computer architects have lacked a paradigm for dealing with heterogeneity systematically. We believe it is time to move beyond the general-purpose architecture paradigm and the 90/10 optimization that has served us well for 25 years, and to replace it with a new paradigm, “10×10”, which divides workloads into clusters, enabling systematic exploitation of specialization in the architecture, implementation, and software. We call this new paradigm “10×10” because it divides the workloads and optimizes for 10 different 10% cases rather than a monolithic 90/10 split. The 10×10 approach can enable a 10-fold or greater improvement in energy efficiency and performance compared to conventional general-purpose approaches. In addition, 10×10 has the potential to bring discipline to the increasing heterogeneity in computing systems. We will also outline a few critical challenges for future computing systems in the new technology scaling landscape, including software and applications.
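The workload-clustering step at the heart of the 10×10 idea can be illustrated with a toy sketch: cluster synthetic workload profiles into 10 groups, each a candidate for a specialized engine. The features and data here are made up; this is not the analysis behind the actual proposal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "workload profiles": each row is a workload described by made-up
# features (e.g., fraction of time in integer, FP, memory, branch, SIMD work).
# A real 10x10 study would use measured characteristics of real applications.
profiles = rng.dirichlet(alpha=np.ones(5), size=500)

def kmeans(data, k, iters=50):
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(profiles, k=10)

# Each cluster is a candidate for a specialized "micro-engine"; the point of
# 10x10 is that each captures roughly 10% of the workload mix rather than
# optimizing one design for a monolithic 90% case.
for j in range(10):
    share = np.mean(labels == j)
    print(f"cluster {j}: {share:5.1%} of workloads, profile {np.round(centers[j], 2)}")
```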

Speaker Biography: Dr. Andrew A. Chien is a former Vice President of Research at Intel Corporation. He served as a Vice President of Intel Labs and Director of Intel Research / Future Technologies Research, where he led a “bold, edgy” research agenda in disruptive technologies. He also led Intel’s external research programs and Higher Education activities. Chien launched imaginative new efforts in robotics, wireless power, sensing and perception, nucleic acid sequencing, networking, cloud, and ethnography. Working with external partners, Chien was instrumental in the creation of the Universal Parallel Computing Research Centers (UPCRC) focused on parallel software, the Open Cirrus Consortium focused on Cloud computing, and Intel’s Exascale Research program. For more than 20 years, Chien has been a global research and education leader, and an active researcher in parallel computing, computer architecture, programming languages, networking, clusters, grids, and cloud computing. Chien’s previous positions include the Science Applications International Corporation Endowed Chair Professor in the Department of Computer Science and Engineering, and founding Director of the Center for Networked Systems, at the University of California at San Diego. While at UCSD, he also founded Entropia, a widely known Internet Grid computing startup. From 1990 to 1998, Chien was a Professor of Computer Science at the University of Illinois at Urbana-Champaign with joint appointments at the National Center for Supercomputing Applications (NCSA), where he was a research leader for parallel computing software and hardware and developed the well-known Fast Messages, HPVM, and Windows NT Supercluster systems. Dr. Chien is a Fellow of the American Association for the Advancement of Science (AAAS), a Fellow of the Association for Computing Machinery (ACM), and a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and has published over 130 technical papers. Chien currently serves on the Board of Directors of the Computing Research Association (CRA), the Advisory Board of the National Science Foundation’s Computing and Information Science and Engineering (CISE) Directorate, and the Editorial Board of the Communications of the Association for Computing Machinery (CACM). Chien received his Bachelor’s degree in electrical engineering and his Master’s and Ph.D. in computer science from the Massachusetts Institute of Technology.

November 2, 2010

This talk will describe a new architecture that supports high-performance access to medical images across multiple enterprises. The architecture supports 3D viewing of large (> 1000 slice) CT and MRI studies from remote locations (> 1000 miles) over broadband networks in under 1 second. The talk will also describe a novel storage architecture that supports hundreds of petabytes distributed across multiple data centers.
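A rough back-of-envelope calculation (with assumed slice dimensions, not figures from the talk) shows why the sub-second target cannot be met by naively shipping the whole study to the client, and hence why server-side rendering or progressive streaming is implied:

```python
# Back-of-envelope sizing (assumed numbers, not measurements from the talk):
# a 1000-slice CT study at 512 x 512 voxels and 2 bytes/voxel.
slices, rows, cols, bytes_per_voxel = 1000, 512, 512, 2
study_bytes = slices * rows * cols * bytes_per_voxel
print(f"raw study size: ~{study_bytes / 1e9:.2f} GB")

for mbps in (20, 100, 1000):                  # typical broadband link speeds
    seconds = study_bytes * 8 / (mbps * 1e6)
    print(f"naive transfer at {mbps:4d} Mbps: ~{seconds:6.1f} s")

# Even at 1 Gbps the raw transfer takes seconds, so a sub-second interactive
# 3D view implies sending rendered views or progressively streamed subsets
# rather than shipping the whole study to the client.
```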

Speaker Biography: Dr. Philbin is a senior healthcare information technology executive with broad experience in both business and research. He has expertise in management, medical imaging informatics, agile project management, software development, digital image processing, and storage systems. Dr. Philbin is currently the Senior Director of Medical Imaging at Johns Hopkins Medicine. As such, he is responsible for all aspects of medical imaging informatics at both the School of Medicine and the Johns Hopkins Health System. He oversees a staff of over 50 professionals dedicated to the Department of Radiology and Enterprise Medical Imaging. He is also Co-Director of the Research Center for Biomedical and Imaging Informatics. Upon joining Hopkins in 2005 he led the effort to transform the Radiology Department into a filmless and paperless operation. That transformation was successfully completed in two years with an ROI of 45%. Medical images are available on 12,000 workstations across the campuses; 65% of them are available within 10 minutes of study completion and 98% within one hour. Dr. Philbin led the effort that created an enterprise medical image archive that will span all of Johns Hopkins Medicine, including 5 hospitals and 12 outpatient centers. At this point the archive is storing images from Radiology, Cardiology, Radiation Oncology, Vascular Surgery, and many other specialties. Dr. Philbin was also the founding CEO of two successful startups. The first, Signafy, was a spinout of NEC Research Laboratories that won the DVD copy protection trials in Hollywood. After Signafy won the trials, NEC reacquired the company. His second startup, Emphora, made storage caching and database acceleration software. Emphora was acquired by Storage Networks in 2001. Prior to his startup experience, Dr. Philbin worked for NEC’s research laboratory in Princeton, NJ. There he created the world’s first truly global cluster computer, with multiple nodes located in Tokyo, Japan; Princeton, New Jersey; and Bonn, Germany. This system was used for many scientific advances. Dr. Philbin has a B.A., M.S., M.Phil., and Ph.D. from Yale University. He is also a Certified Imaging Informatics Professional.

November 4, 2010

Biomolecular sequences evolve under processes that include substitutions, insertions and deletions (indels), as well as other events, such as duplications. The estimation of evolutionary history from sequences is then used to answer fundamental questions about biology, and also has applications in a wide range of biomedical research.

From a computational perspective, however, phylogenetic (evolutionary) tree estimation is enormously hard: all favored approaches are NP-hard, and even the best heuristics can take months or years on only moderately large datasets. Furthermore, while there are very good heuristics for estimating trees from sequences that are already placed in a multiple alignment (a step that is used when sequences evolve with indels), errors in alignment estimation produce errors in tree estimation, and the standard alignment estimation methods fail to produce highly accurate alignments on large highly divergent datasets. Thus, the estimation of highly accurate phylogenetic trees from large datasets of unaligned sequences is beyond the scope of standard methods.

In this talk, I will describe new algorithmic tools that my group has developed, which make it possible, for the first time, to obtain highly accurate estimates of trees from very large datasets, even when the sequences have evolved under high rates of substitution and indels. In particular, I will describe SATé (Liu et al. 2009, Science Vol 324, no. 5934). SATé simultaneously estimates a tree and an alignment; our study shows that SATé is very fast, and produces dramatically more accurate trees and alignments than competing methods, even on datasets with 1000 taxa and high rates of indels and substitutions. I will also describe our new method, DACTAL (not yet submitted). DACTAL stands for “Divide-and-Conquer Trees without Alignments”, and uses an iterative procedure combined with a novel divide-and-conquer strategy to estimate trees from unaligned sequences. Our study, using both real and simulated data, shows that DACTAL produces trees of higher accuracy than SATé, and does so without ever constructing an alignment on the entire set of sequences. Furthermore, DACTAL is extremely fast, producing in a few days highly accurate estimates on datasets that take many other methods years. Time permitting, I will show how DACTAL can be used to improve the speed and accuracy of other phylogeny reconstruction methods, in particular in the context of phylogenetic analyses of whole genomes.
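The iterate-between-alignment-and-tree idea can be sketched schematically as follows. Every helper here is a toy stand-in so the skeleton runs end to end (a real pipeline would call an external aligner and a maximum-likelihood tree estimator); this is not the SATé or DACTAL code.

```python
# Schematic of co-estimating an alignment and a tree by iteration, in the
# spirit of the approach described above. All helpers are toy stand-ins;
# this is not the SATe or DACTAL implementation.

def co_estimate(seqs, align, estimate_tree, score, decompose, max_iters=5):
    alignment = align(seqs)                       # initial alignment of everything
    tree = estimate_tree(alignment)
    best_score = score(tree, alignment)
    for _ in range(max_iters):
        # Use the current tree to break the data into smaller, more closely
        # related subsets, re-align, and re-estimate the tree.
        subsets = decompose(tree, seqs)
        alignment = align([s for subset in subsets for s in subset])
        tree = estimate_tree(alignment)
        new_score = score(tree, alignment)
        if new_score <= best_score:
            break                                 # no improvement; stop
        best_score = new_score
    return tree, alignment, best_score

# Minimal toy stand-ins so the skeleton runs end to end.
toy_seqs = ["ACCGT", "ACGT", "AGT", "ACT"]
align = lambda seqs: [s.ljust(max(map(len, seqs)), "-") for s in seqs]
estimate_tree = lambda aln: sorted(aln)                    # placeholder "tree"
score = lambda tree, aln: -sum(len(s) for s in aln)        # placeholder criterion
decompose = lambda tree, seqs: [seqs[:2], seqs[2:]]        # placeholder split

print(co_estimate(toy_seqs, align, estimate_tree, score, decompose))
```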

Speaker Biography: Tandy Warnow is David Bruton Jr. Centennial Professor of Computer Sciences at the University of Texas at Austin. Her research combines mathematics, computer science, and statistics to develop improved models and algorithms for reconstructing complex and large-scale evolutionary histories in both biology and historical linguistics. Tandy received her PhD in Mathematics at UC Berkeley under the direction of Gene Lawler, and did postdoctoral training with Simon Tavare and Michael Waterman at USC. She received the National Science Foundation Young Investigator Award in 1994, the David and Lucile Packard Foundation Award in Science and Engineering in 1996, a Radcliffe Institute Fellowship in 2006, and a Guggenheim Foundation Fellowship for 2011. Tandy is a member of five graduate programs at the University of Texas, including Computer Science; Ecology, Evolution, and Behavior; Molecular and Cellular Biology; Mathematics; and Computational and Applied Mathematics. Her current research focuses on phylogeny and alignment estimation for very large datasets (10,000 to 500,000 sequences), estimating species trees from collections of gene trees, and genome rearrangement phylogeny estimation.

Student Seminar

November 5, 2010

Statistical natural language processing can be difficult for morphologically rich languages. The observed vocabularies of such languages are very large, since each word may have been inflected for morphological properties like person, number, gender, tense, or others. This unfortunately masks important generalizations, leads to problems with data sparseness and makes it hard to generate correctly inflected text.

This thesis tackles the problem of inflectional morphology with a novel, unified statistical approach. We present a generative probability model that can be used to learn from plain text how the words of a language are inflected, given some minimal supervision. In other words, we discover the inflectional paradigms that are implicit, or hidden, in a large unannotated text corpus.

This model consists of several components: a hierarchical Dirichlet process clusters word tokens of the corpus into lexemes and their inflections, and graphical models over strings — a novel graphical-model variant — model the interactions of multiple morphologically related type spellings, using weighted finite-state transducers as potential functions.

We present the components of this model, from weighted finite-state transducers parameterized as log-linear models, to graphical models over multiple strings, to the final non-parametric model over a corpus, its lexemes, inflections, and paradigms. We show experimental results for several tasks along the way, including a lemmatization task in multiple languages and, to demonstrate that parts of our model are applicable outside of morphology as well, a transliteration task. Finally, we show that learning from large unannotated text corpora under our non-parametric model significantly improves the quality of predicted word inflections.
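As a toy stand-in for the potential functions described above, the following sketch scores candidate inflected forms for a lemma with a character-level log-linear model. The features and hand-set weights are invented for illustration and are far simpler than the weighted finite-state potentials used in the thesis.

```python
import math
from collections import Counter

# Toy stand-in for the kind of potential function described above: a
# log-linear model over character-level features of a (lemma, inflected form)
# pair. Real WFST potentials score alignments between the strings; here we
# use simple substring features purely for illustration.

def features(lemma, form):
    feats = Counter()
    feats[f"suffix2={form[-2:]}"] += 1
    feats[f"lemma_suffix2={lemma[-2:]}"] += 1
    feats["len_diff=%d" % (len(form) - len(lemma))] += 1
    feats["shares_prefix3"] += int(lemma[:3] == form[:3])
    return feats

def score(lemma, form, weights):
    return sum(weights.get(f, 0.0) * v for f, v in features(lemma, form).items())

# Hand-set weights (a trained model would learn these from data).
weights = {"suffix2=ed": 1.5, "shares_prefix3": 2.0, "len_diff=2": 0.5}

candidates = ["walked", "walkest", "ran"]
scores = {c: score("walk", c, weights) for c in candidates}
Z = sum(math.exp(s) for s in scores.values())
for c, s in scores.items():
    print(f"p({c!r} | lemma='walk') = {math.exp(s) / Z:.3f}")
```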

Distinguished Lecturer

November 11, 2010

Information about the syntax and semantics of terms in context is essential for reliable inference in a variety of document annotation and retrieval tasks. We hypothesize that we can derive most of the relevant information from explicit and implicit relationships between terms in the masses of Web content and user interactions with that content. We have developed graph-based algorithms that efficiently combine many small pieces of textual evidence to bootstrap broad-coverage syntactic and semantic classifiers from small sets of manually-annotated examples.
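As a generic illustration of the graph-based bootstrapping family (not the specific algorithms developed at Google), the sketch below propagates a few seed labels over a small term graph whose edge weights stand in for accumulated textual evidence:

```python
import numpy as np

# Generic graph-based label propagation over a small term graph: a few seed
# terms carry labels, edges encode textual evidence (co-occurrence, shared
# patterns), and labels diffuse to unlabeled nodes. This illustrates the
# general family of methods, not the algorithms described in the talk.

terms = ["paris", "london", "france", "england", "einstein", "curie"]
edges = [(0, 2, 1.0), (1, 3, 1.0), (0, 1, 0.5), (2, 3, 0.5),
         (4, 5, 1.0), (4, 2, 0.1)]
labels = {0: 0, 4: 1}          # seed labels: 0 = LOCATION, 1 = PERSON

n, k = len(terms), 2
W = np.zeros((n, n))
for i, j, w in edges:
    W[i, j] = W[j, i] = w

Y = np.zeros((n, k))
for node, lab in labels.items():
    Y[node, lab] = 1.0

F = Y.copy()
for _ in range(100):
    F = W @ F                                   # gather evidence from neighbors
    row_sums = F.sum(axis=1, keepdims=True)
    F = np.divide(F, row_sums, out=np.zeros_like(F), where=row_sums > 0)
    for node, lab in labels.items():            # clamp the labeled seeds
        F[node] = 0.0
        F[node, lab] = 1.0

for t, scores in zip(terms, F):
    print(f"{t:10s} LOCATION={scores[0]:.2f}  PERSON={scores[1]:.2f}")
```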

Speaker Biography: Fernando Pereira is research director at Google. His previous positions include chair of the Computer and Information Science department at the University of Pennsylvania, head of the Machine Learning and Information Retrieval department at AT&T Labs, and research and management positions at SRI International. He received a Ph.D. in Artificial Intelligence from the University of Edinburgh in 1982, and he has over 120 research publications on natural language processing, machine learning, speech recognition, bioinformatics, databases, and logic programming as well as several patents. He was elected Fellow of the American Association for Artificial Intelligence in 1991 for his contributions to computational linguistics and logic programming, and he was president of the Association for Computational Linguistics in 1993.

Student Seminar

November 15, 2010

Wireless sensor networks (WSNs) provide novel insights into our world by enabling data collection at unprecedented spatial and temporal scales. Over the past decade, the WSN community has significantly improved the success rate and the efficiency of WSN deployments through progress in networking primitives, operating systems, programming languages, and sensor mote hardware design. However, as WSN deployments grow in scale and are embedded in more places, their performance becomes increasingly susceptible to external interference as well as to poor radio coordination.

This thesis is a multi-targeted effort to study three types of radio interference in the setting of large-scale WSNs: intra-network, external, and protocol interference. The first part of the dissertation introduces the Typhoon and WRAP protocols to minimize interference from concurrent transmitters; Typhoon leverages channel diversity to improve data dissemination performance, and WRAP uses a token-passing mechanism to coordinate data collection traffic in a network. The dissertation then characterizes the external interference that 802.11 traffic imposes on 802.15.4 networks, and introduces BuzzBuzz, which uses multiple levels of redundancy to improve the 802.15.4 link packet reception ratio. The final part of the dissertation presents ViR, which multiplexes a single radio to satisfy requests from applications on the same node.
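To illustrate why token passing avoids collisions among coordinated transmitters, here is a toy round-robin simulation. It is a simplified illustration of the general mechanism only, not the WRAP protocol itself.

```python
import random

# Toy simulation of token-passing coordination: only the node holding the
# token may transmit, so intra-network collisions between coordinated nodes
# cannot occur. This illustrates the general idea, not the WRAP protocol.

random.seed(0)
NODES = 5
queues = {n: random.randint(0, 3) for n in range(NODES)}   # pending packets
token = 0
log = []

for slot in range(20):
    if queues[token] > 0:
        queues[token] -= 1
        log.append((slot, token, "transmit"))
    else:
        log.append((slot, token, "idle, pass token"))
    token = (token + 1) % NODES                 # hand the token to the next node

transmitters_per_slot = {}
for slot, node, action in log:
    if action == "transmit":
        transmitters_per_slot.setdefault(slot, []).append(node)

assert all(len(v) == 1 for v in transmitters_per_slot.values()), "collision!"
print("no slot had more than one transmitter")
for slot, node, action in log[:8]:
    print(f"slot {slot:2d}: node {node} {action}")
```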

Distinguished Lecturer

November 19, 2010

Creating and implementing a new programming language is an exercise in computational thinking. This talk looks at how computational thinking pervades the process of designing and implementing a programming language and how students can learn computational thinking by creating their own languages.

Speaker Biography: Alfred V. Aho is the Lawrence Gussman Professor of Computer Science at Columbia University. Prior to joining Columbia, Prof. Aho was the director of the Computing Sciences Research Center at Bell Labs, the research center that invented Unix, C, and C++. Prof. Aho is well known for his many papers and textbooks on algorithms, data structures, programming languages and compilers. He created the Unix programs egrep and fgrep and is a coauthor of the popular pattern-matching language AWK. Prof. Aho is a Fellow of the ACM, AAAS, Bell Labs, and IEEE. He has been awarded the IEEE John von Neumann Medal and is a member of the National Academy of Engineering and of the American Academy of Arts and Sciences.

November 23, 2010

One goal of Artificial Intelligence is to enable the creation of robust, fully autonomous agents that can coexist with us in the real world. Such agents will need to be able to learn, both in order to correct and circumvent their inevitable imperfections, and to keep up with a dynamically changing world. They will also need to be able to interact with one another, whether they share common goals, they pursue independent goals, or their goals are in direct conflict. This talk will present current research directions in machine learning, multiagent reasoning, and robotics, and will advocate their unification within concrete application domains. Ideally, new theoretical results in each separate area will inform practical implementations while innovations from concrete multiagent applications will drive new theoretical pursuits, and together these synergistic research approaches will lead us towards the goal of fully autonomous agents.

Speaker Biography: Dr. Peter Stone is an Alfred P. Sloan Research Fellow, Guggenheim Fellow, Fulbright Scholar, and Associate Professor in the Department of Computer Sciences at the University of Texas at Austin. He received his Ph.D. in Computer Science in 1998 from Carnegie Mellon University. From 1999 to 2002 he was a Senior Technical Staff Member in the Artificial Intelligence Principles Research Department at AT&T Labs – Research. Peter’s research interests include machine learning, multiagent systems, robotics, and e-commerce. In 2003, he won a CAREER award from the National Science Foundation for his research on learning agents in dynamic, collaborative, and adversarial multiagent environments. In 2004, he was named an ONR Young Investigator for his research on machine learning on physical robots. In 2007, he was awarded the prestigious IJCAI 2007 Computers and Thought award, given once every two years to the top AI researcher under the age of 35.

December 7, 2010

The usual problems in computer vision structure and motion estimation are large non-convex optimization problems, often involving large numbers of variables (in excess of a million). Through a variety of simple techniques, it is possible to find initial solutions that serve to initialize iterative algorithms with good results. Nevertheless, it is interesting to find algorithms that are provably optimal, returning a guaranteed global minimum. This talk gives a summary of work in this area, involving a variety of techniques, including L-infinity optimization, branch and bound, and convex verification techniques.
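A small 2D toy below illustrates the bisection pattern that underlies L-infinity optimization: for a fixed error threshold, feasibility is a convex check (here a linear program), so bisection over the threshold finds a guaranteed global minimum. The "lines" stand in for camera constraints; this is a sketch of the pattern under those assumptions, not the methods from the talk.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2D analogue of L-infinity optimization: find the point x minimizing the
# maximum distance to a set of lines (stand-ins for reprojection constraints).
# For a fixed threshold gamma, "every distance <= gamma" is a convex (here
# linear) feasibility problem, so bisection on gamma finds the global optimum.

lines = [((1.0, 0.0), 1.0),    # each line: unit normal a, offset b  (a . x = b)
         ((0.0, 1.0), 2.0),
         ((0.7071, 0.7071), 0.0)]

def feasible(gamma):
    """Is there a point within distance gamma of every line? (an LP feasibility check)"""
    A_ub, b_ub = [], []
    for (a1, a2), b in lines:
        A_ub += [[a1, a2], [-a1, -a2]]          # |a.x - b| <= gamma
        b_ub += [b + gamma, -b + gamma]
    res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
    return res.success

lo, hi = 0.0, 10.0
for _ in range(40):                              # bisection on the error threshold
    mid = 0.5 * (lo + hi)
    if feasible(mid):
        hi = mid
    else:
        lo = mid

print(f"globally optimal max distance ~ {hi:.4f}")
```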

Speaker Biography: Richard Hartley received the BSc degree from the Australian National University (ANU) in 1971, the MSc degree in computer science from Stanford University in 1972, and the PhD degree in mathematics from the University of Toronto, Canada, in 1976. He is currently a professor and member of the computer vision group in the Department of Information Engineering at ANU. He also belongs to the Vision Science Technology and Applications Program in National ICT Australia, a government-funded research institute. He did his PhD thesis in knot theory and worked in this area for several years before joining the General Electric (GE) Research and Development Center, where he worked from 1985 to 2001. During the period 1985-1988, he was involved in the design and implementation of computer-aided design tools for electronic design and created a very successful design system called the Parsifal Silicon Compiler, described in his book Digit Serial Computation. In 1991, he was awarded GE’s Dushman Award for this work. Around 1990, he developed an interest in computer vision, and in 2000, he coauthored (with Andrew Zisserman) a book on multiple-view geometry. He has authored more than 100 papers in knot theory, geometric voting theory, computational geometry, computer-aided design, and computer vision and holds 32 US patents. He is a member of the IEEE.