Spring 2017

Student

February 20, 2017

Scripting languages are immensely popular in many domains. They are characterized by a number of features that make it easy to develop small applications quickly – flexible data structures, simple syntax, and intuitive semantics. However, they are less attractive at scale: scripting languages are harder to debug, difficult to refactor, and suffer performance penalties. Many research projects have tackled the issues of safety and performance for existing scripting languages, with mixed results: the considerable flexibility offered by their semantics also makes them significantly harder to analyze and optimize.

Previous research from our lab has led to the design of a typed scripting language built specifically to be flexible without losing static analyzability. In this dissertation, we present a framework to exploit this analyzability, with the aim of producing a more efficient implementation.

Our approach centers around the concept of adaptive tags: specialized tags attached to values that represent how each value is used in the current program. Our framework abstractly tracks the flow of deep structural types in the program, and thus can efficiently tag them at runtime. Adaptive tags allow us to tackle key issues at the heart of the performance problems of scripting languages: the framework is capable of performing efficient dispatch in the presence of flexible structures.
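
As a conceptual analogy only (the dissertation's actual mechanism is more sophisticated than this), tag-based dispatch can be thought of as replacing a repeated deep structural check with a cheap comparison on a tag computed once. A minimal sketch in Python, with all names hypothetical:

    import math

    # Deep structural check: expensive if performed at every call site.
    def matches_point_shape(value):
        return isinstance(value, dict) and {"x", "y"} <= value.keys()

    TAG_POINT, TAG_OTHER = "point", "other"

    def tag_of(value):
        # Compute the tag once, when the value is created or first used.
        return TAG_POINT if matches_point_shape(value) else TAG_OTHER

    def norm(value, tag):
        # Dispatch on the cached tag: a cheap comparison, not a deep check.
        if tag is TAG_POINT:
            return math.hypot(value["x"], value["y"])
        raise TypeError("no matching case")

    p = {"x": 3.0, "y": 4.0}
    t = tag_of(p)        # tagged once...
    print(norm(p, t))    # ...dispatched cheaply thereafter: 5.0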

Speaker Biography: Pottayil Harisanker Menon is a Ph.D. candidate in Computer Science at the Johns Hopkins University. He is advised by Prof. Scott Smith and is a member of the Programming Languages Lab. Hari’s current research focuses on creating flexible languages and making them run fast. His general research interests include the design of programming languages, type systems and compilers.

Student

February 22, 2017

Image-based tracking of the C-arm remains a critical and challenging problem for many clinical applications: the device is widely used in computer-assisted procedures that rely on its tracking accuracy for further planning, registration, and reconstruction tasks. In this thesis, I present a variety of approaches to improve current C-arm tracking methods and devices for intra-operative procedures.

The first approach presents a novel two-dimensional fiducial comprising a set of coplanar conics, together with an improved single-image pose estimation algorithm that addresses segmentation errors using a mathematical equilibration approach. Simulation results show an improvement in the mean rotation and translation errors by factors of 4 and 1.75, respectively, as a result of using the proposed algorithm. Experiments using real data, obtained by imaging a simple, precisely machined model consisting of three coplanar ellipses, retrieve pose estimates that are in good agreement with those obtained by a ground-truth optical tracker. This two-dimensional fiducial can easily be placed under the patient, allowing a wide field of view for the motion of the C-arm.
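
As a flavor of the geometry involved, here is a minimal sketch of fitting a general conic to segmented boundary points by least squares. This is a standard building block for conic fiducials, not the thesis's equilibration algorithm, and the test ellipse is hypothetical:

    import numpy as np

    def fit_conic(points):
        # Least-squares fit of a general conic
        #   a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
        # to 2-D boundary points (an n x 2 array), e.g. pixels segmented
        # from one ellipse of the fiducial.
        x, y = points[:, 0], points[:, 1]
        D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
        # The coefficient vector is the right singular vector of D with the
        # smallest singular value (the approximate null space of D).
        _, _, Vt = np.linalg.svd(D)
        return Vt[-1]

    # Hypothetical test: noisy samples from the ellipse (x/3)^2 + (y/2)^2 = 1.
    t = np.linspace(0, 2 * np.pi, 200)
    pts = np.column_stack([3 * np.cos(t), 2 * np.sin(t)])
    pts += 0.01 * np.random.default_rng(0).standard_normal(pts.shape)
    print(fit_conic(pts))  # roughly proportional to [1/9, 0, 1/4, 0, 0, -1]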

The second approach applies learning-based techniques to two-view geometry. An algorithm is demonstrated that simultaneously tackles the matching and segmentation of features extracted from pairs of acquired images. The corrected features can then be used to recover the epipolar geometry, which can ultimately provide pose parameters using a one-dimensional fiducial. I formulate the problem of match refinement for epipolar geometry estimation in a reinforcement-learning framework. Experiments demonstrate the ability both to reject false matches and to fix small localization errors in the segmentation of true, noisy matches in a minimal number of steps.
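
The reinforcement-learning formulation itself is not reproduced here, but the standard robust pipeline it refines can be sketched with OpenCV: estimate the fundamental matrix from noisy matches with RANSAC, whose inlier mask plays a role loosely analogous to match rejection. The two-view data below is synthetic:

    import numpy as np
    import cv2

    # Synthetic two-view setup: random 3-D points seen by two cameras.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, (100, 3)) + [0.0, 0.0, 5.0]  # in front of both cameras
    K = np.array([[800.0, 0.0, 256.0], [0.0, 800.0, 256.0], [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])       # camera 1 at the origin
    R, _ = cv2.Rodrigues(np.array([[0.0], [0.3], [0.0]]))   # camera 2 rotated about y
    P2 = K @ np.hstack([R, np.array([[-0.5], [0.0], [0.0]])])

    def project(P, X):
        x = (P @ np.hstack([X, np.ones((len(X), 1))]).T).T
        return (x[:, :2] / x[:, 2:]).astype(np.float32)

    pts1, pts2 = project(P1, X), project(P2, X)
    pts2 += rng.normal(0.0, 0.5, pts2.shape).astype(np.float32)  # segmentation noise

    # Robust fundamental-matrix estimation; the mask flags the inlier matches.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    print(F)
    print(int(mask.sum()), "inlier matches kept")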

The third approach presents a feasibility study for an approach that entirely eliminates the use of tracking fiducials. It relies only on preoperative data to initialize a point-based model that is subsequently used to iteratively estimate the pose and the structure of the point-like intraoperative implant using three to six images simultaneously. This method is tested in the framework of prostate brachytherapy, in which preoperative data, including planned 3D locations for a large number of point-like implants called seeds, is usually available. Simultaneous pose estimation for the C-arm for each image and localization of the seeds is studied in a simulation environment. Results indicate mean reconstruction errors of less than 1.2 mm for noisy plans of 84 seeds or fewer, attained when the 3D mean error introduced to the plan by added Gaussian noise is less than 3.2 mm.

Speaker Biography: Maria S. Ayad received the B.Sc. degree in Electronics and Communications from the Faculty of Engineering, Cairo University, in 2001. She also earned a diploma in Networks from the Information Technology Institute in Cairo (iTi) in 2002 and an M.S.E. in Computer Science from Johns Hopkins University in 2009. She was inducted into the Upsilon Pi Epsilon (UPE) honor society in 2008. She received the Abel Wolman Fellowship from the Whiting School of Engineering in 2006 and a National Science Foundation Graduate Research Fellowship in 2008.

Her research focuses on pose estimation, reconstruction, and estimating structure from motion for image-guided medical procedures and computer-assisted surgery. Her 2009 paper won the Best Student Paper Award in the Visualization, Image-Guided Procedures, and Modeling track of the 2009 SPIE Medical Imaging conference.

She has been working as an electrical patent examiner at the United States Patent and Trademark Office since 2013.


March 2, 2017

Infrastructure-as-a-Service (IaaS) provides shared computing resources that users can access over the Internet, which has revolutionized the way that computing resources are utilized. Instead of buying and maintaining their own physical servers, users can lease virtualized compute and storage resources from shared cloud servers and pay only for the time that they use the leased resources. Yet as more and more users take advantage of this model, cloud providers face highly dynamic user demands for their resources, making it difficult for them to maintain consistent quality-of-service (QoS). We propose to use price incentives to manage these user demands. We investigate two types of pricing: spot pricing and volume discounts. Spot pricing creates an auction in which users can submit bids for spare cloud resources; however, these resources may be withdrawn at any time if users’ bids are too low and/or resources become unavailable due to other users’ demands. Volume discount pricing incentivizes users to submit longer-term jobs, which provide more stable resource utilization. We provide insights into these pricing schemes by quantifying user demands with different prices, and design optimal pricing and resource utilization strategies for both users and cloud providers.
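
As a toy illustration of the spot-pricing dynamic (a made-up model, not the talk's actual mechanism or prices), a job runs and pays the market price only while the user's bid meets that price, and loses its resources otherwise:

    import numpy as np

    def simulate_spot(bid, prices, hours_needed):
        # Toy spot-market model: the job runs (and pays the spot price) in
        # each hour where the bid meets the current price, and is withdrawn
        # otherwise. Returns (hours to finish, total cost, interruptions).
        done = cost = interruptions = 0
        running = False
        for t, p in enumerate(prices):
            if bid >= p:
                done += 1
                cost += p  # pay the market price, not the bid
                running = True
                if done == hours_needed:
                    return t + 1, cost, interruptions
            else:
                if running:
                    interruptions += 1
                running = False
        return None, cost, interruptions  # job never finished

    rng = np.random.default_rng(1)
    prices = np.clip(rng.normal(0.10, 0.04, 500), 0.01, None)  # $/hour
    for bid in (0.08, 0.12, 0.20):
        print(bid, simulate_spot(bid, prices, 100))

Higher bids finish sooner with fewer interruptions but at higher cost; quantifying that trade-off across users is the kind of question the optimal-pricing analysis addresses.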

Speaker Biography: Carlee Joe-Wong is an assistant professor in the ECE department at Carnegie Mellon University, working at CMU’s Silicon Valley Campus. She received her Ph.D. from Princeton University in 2016 and is primarily interested in incentives and resource allocation for computer and information networks. In 2013–2014, Carlee was the Director of Advanced Research at DataMi, a startup she co-founded from her data pricing research. She received the INFORMS ISS Design Science Award in 2014 and the Best Paper Award at IEEE INFOCOM 2012, and was a National Defense Science and Engineering Graduate Fellow (NDSEG) from 2011 to 2013.

Distinguished Lecturer

March 7, 2017

Advances in computer and information science and engineering are providing unprecedented opportunities for research and education. My talk will begin with an overview of CISE activities and programs at the National Science Foundation and include a discussion of current trends that are shaping the future of our discipline. I will also discuss the opportunities as well as the challenges that lie ahead for our community and for CISE.

Speaker Biography: Ben Shneiderman (http://www.cs.umd.edu/~ben) is a Distinguished University Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory (http://www.cs.umd.edu/hcil/), and a Member of the UM Institute for Advanced Computer Studies (UMIACS) at the University of Maryland. He is a Fellow of the AAAS, ACM, IEEE, and NAI, and a Member of the National Academy of Engineering, in recognition of his pioneering contributions to human-computer interaction and information visualization. His contributions include the direct manipulation concept, clickable highlighted web-links, touchscreen keyboards, dynamic query sliders for Spotfire, development of treemaps, novel network visualizations for NodeXL, and temporal event sequence analysis for electronic health records.

Ben is the co-author with Catherine Plaisant of Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed., 2016) http://www.awl.com/DTUI/. With Stu Card and Jock Mackinlay, he co-authored Readings in Information Visualization: Using Vision to Think (1999). His book Leonardo’s Laptop (MIT Press) won the IEEE book award for Distinguished Literary Contribution. He co-authored Analyzing Social Media Networks with NodeXL (www.codeplex.com/nodexl) (2010) with Derek Hansen and Marc Smith. Shneiderman’s latest book is The New ABCs of Research: Achieving Breakthrough Collaborations (Oxford, April 2016).

March 8, 2017

Rhythms guide our lives. Almost every biological process reflects a roughly 24-hour periodicity known as a circadian rhythm. Living against these body clocks can have severe consequences for physical and mental well-being, with increased risk for cardiovascular disease, cancer, obesity and mental illness. However, circadian disruptions are becoming increasingly widespread in our modern world. As such, there is an urgent need for novel technological solutions to address these issues. In this talk, I will introduce the notion of “Circadian Computing” – technologies that support our innate biological rhythms. Specifically, I will describe a number of my recent projects in this area. First, I will present novel sensing and data-driven methods that can be used to assess sleep and related circadian disruptions. Next, I will explain how we can model and predict alertness, a key circadian process for cognitive performance. Third, I will describe a smartphone-based tool for maintaining circadian stability in patients with bipolar disorder. To conclude, I will discuss a vision for how Circadian Computing can radically transform healthcare, including by augmenting performance, enabling preemptive care for mental health patients, and complementing current precision medicine initiatives.

Speaker Biography: Saeed Abdullah is a Ph.D. candidate in Information Science at Cornell University, advised by Tanzeem Choudhury. Abdullah works on developing novel data-driven technologies to improve health and well-being. His research is inherently interdisciplinary, and he has collaborated with psychologists, psychiatrists, and behavioral scientists. His work has introduced assessment and intervention tools across a number of health-related domains including sleep, cognitive performance, bipolar disorder, and schizophrenia. Saeed’s research has been recognized through several accolades, including winning the $100,000 Heritage Open mHealth Challenge, a Best Paper Award, and an Agile Research Project award from the Robert Wood Johnson Foundation.

Student

March 10, 2017

In the past few years, Deep Learning has become the method of choice for producing state-of-the-art results on machine learning problems involving images, text, and speech. The explosion of interest in these techniques has resulted in a large number of successful applications, but relatively few studies exploring the nature of and reason for that success.

This dissertation is an examination of the inductive biases that underpin the success of deep learning, focusing in particular on the success of Convolutional Neural Networks (CNNs) on image data. We show that CNNs rely on a type of spatial structure being present in the data, and then describe ways this type of structure can be quantified. We further demonstrate that a similar type of inductive bias can be explicitly introduced into a variety of other techniques, including non-connectionist ones. The result is both a better understanding of why deep learning works and a set of tools that can be used to improve the performance of a wide range of machine learning methods on these tasks.
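
One standard way to probe this dependence on spatial structure (whether the dissertation uses exactly this protocol is an assumption) is to apply a single fixed random permutation to the pixels of every image: all pixel values are preserved, so a fully connected model's view of the data is unchanged, but the local structure a convolution exploits is destroyed. A minimal sketch:

    import numpy as np

    def permute_pixels(images, seed=0):
        # Apply one fixed random permutation to the pixels of every image.
        # Pixel values are preserved, but the local spatial structure that a
        # CNN's convolutional inductive bias depends on is destroyed.
        n, h, w = images.shape
        perm = np.random.default_rng(seed).permutation(h * w)
        return images.reshape(n, h * w)[:, perm].reshape(n, h, w)

    # Train the same CNN on images and on permute_pixels(images): the drop in
    # accuracy is one way to quantify how much spatial structure the data has
    # and how much the architecture depends on it.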

Speaker Biography: Benjamin R. Mitchell received a B.A. in Computer Science from Swarthmore College in 2005, and an M.S.E. in Computer Science from the Johns Hopkins University in 2008. He received a certification from the JHU Preparing Future Faculty Teaching Academy in 2016.

He has worked as a Teaching Assistant and a Research Assistant from 2005 to 2008, and he has been an Instructor at the Johns Hopkins University since 2009. He has taught courses including Introductory Programming in Java, Intermediate Programming in C/C++, Artificial Intelligence, and Computer Ethics. In 2015, he received the Professor Joel Dean Award for Excellence in Teaching, and he was a finalist for the Whiting School of Engineering Excellence in Teaching Award in 2016.

In addition to the field of machine learning, he has peer-reviewed publications in fields including operating systems, mobile robotics, medical robotics, and semantic modeling.


March 16, 2017

There is an extensive literature in machine learning demonstrating an extraordinary ability to predict labels from an abundance of data, as in object and voice recognition. Multiple scientific domains are poised to go through a data revolution, in which the quantity and quality of data will increase dramatically over the next several years. One such area is neuroscience, where novel devices will collect data orders of magnitude larger than current measurement technologies allow. In addition to being a “big data” problem, this data is incredibly complex. Machine learning approaches can adapt to this complexity to give state-of-the-art predictions. However, in many neurological disorders we are most interested in methods that are not only good at prediction, but also interpretable, such that they can be used to design causal experiments and interventions.

Towards this end, I will discuss my work using machine learning to analyze local field potentials recorded from electrodes implanted at many sites of the brain concurrently. The machine learning techniques I developed learn predictive and interpretable features that can generate data-driven hypotheses. Specifically, I first use ideas from dimensionality reduction and factor analysis to map the collected high-dimensional signals to a low-dimensional feature space. Each feature is designed as a Gaussian Process with a novel kernel to capture multi-region spectral power and phase coherence, which have neural correlates. In addition, these interpretable features estimate directionality of information flow. By associating behavior outcomes with the learned features or brain networks, we can then generate a data-driven hypothesis about how the networks should be modulated in a causal experiment. Collaborators have developed optogenetic techniques to test these theories in a mouse model of depression, validating the machine learning approach. I will also discuss current efforts to incorporate additional information sources and apply these ideas to other data types.
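
The novel Gaussian Process kernel itself is not reproduced here, but the raw quantities it is designed to capture, per-region spectral power and cross-region coherence, can be sketched with standard tools on synthetic LFP-like signals:

    import numpy as np
    from scipy.signal import welch, coherence

    fs = 1000  # Hz; a stand-in for the LFP sampling rate
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    shared = np.sin(2 * np.pi * 6 * t)                  # a common 6 Hz rhythm
    lfp_a = shared + 0.5 * rng.standard_normal(t.size)  # electrode in "region A"
    lfp_b = shared + 0.5 * rng.standard_normal(t.size)  # electrode in "region B"

    f, Pxx = welch(lfp_a, fs=fs, nperseg=1024)             # per-region spectral power
    f, Cxy = coherence(lfp_a, lfp_b, fs=fs, nperseg=1024)  # cross-region coherence
    print("coherence peaks at", f[np.argmax(Cxy)], "Hz")   # near the shared 6 Hz rhythm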

Speaker Biography: David Carlson is currently a Postdoctoral Research Scientist at Duke University in the Department of Electrical and Computer Engineering and the Department of Psychiatry and Behavioral Sciences. From August 2015 to July 2016, he completed postdoctoral training in the Data Science Institute and the Department of Statistics at Columbia University focused on neural data science. He received his Ph.D., M.S., and B.S.E. in Electrical and Computer Engineering from Duke University in 2015, 2014, and 2010 respectively. He received the Charles R. Vail Memorial Outstanding Scholarship Award in 2013 and the Charles R. Vail Memorial Outstanding Graduate Teaching Award in 2014.


March 28, 2017

Robots hold promise in assisting people in a variety of domains including healthcare services, household chores, collaborative manufacturing, and educational learning. In supporting these activities, robots need to engage with humans in socially cooperative interactions in which they work together toward a common goal in a socially intuitive manner. Such interactions require robots to coordinate actions, predict task intent, direct attention, and convey relevant information to human partners. In this talk, I will present how techniques in human-computer interaction, artificial intelligence, and robotics can be applied in a principled manner to create and study socially cooperative interactions between humans and robots. I will demonstrate social, cognitive, and task benefits of effective human-robot teams in various application contexts. I will also describe my current research that focuses on building socially cooperative robots to facilitate behavioral intervention for children with autism spectrum disorders (ASD). I will discuss broader impacts of my research, as well as future directions of my research program to develop personalized social technologies.

Speaker Biography: Chien-Ming Huang is a Postdoctoral Associate in the Department of Computer Science at Yale University, leading the NSF Expedition project on Socially Assistive Robotics. Dr. Huang received his Ph.D. in Computer Science at the University of Wisconsin–Madison in 2015, his M.S. in Computer Science at the Georgia Institute of Technology in 2010, and his B.S. in Computer Science at National Chiao Tung University in Taiwan in 2006. Dr. Huang’s research has been published at selective conferences such as HRI (Human-Robot Interaction) and RSS (Robotics: Science and Systems). His research has also been awarded a Best Paper Runner-Up at RSS 2013 and has received media coverage from MIT Technology Review, Tech Insider, and Science Nation. In 2016, Dr. Huang was invited to give an RSS early career spotlight talk at AAAI.


March 30, 2017

Sleep, stress, and mental health are major health issues in modern society. Poor sleep habits and high stress, as well as reactions to stressors and sleep habits, can depend on many factors: internal factors include personality types and physiological factors, and external factors include behavioral, environmental, and social factors. What if 24/7 rich data from mobile devices could identify which factors influence your poor sleep or stress problems and provide personalized early warnings to help you change behaviors, before you slide from good health into a condition such as depression?

In my talk, I will present a series of studies and systems we have developed at MIT to investigate how to leverage multi-modal data from mobile/wearable devices to measure, understand and improve mental wellbeing.

First, I will talk about methodology and tools I developed for the SNAPSHOT study, which seeks to measure Sleep, Networks, Affect, Performance, Stress, and Health using Objective Techniques. To learn about behaviors and traits that impact health and wellbeing, we have measured over 200,000 hours of multi-sensor and smartphone use data as well as trait data such as personality from about 300 college students exposed to sleep deprivation and high stress.

Second, I will describe statistical analysis and machine learning models to characterize, model, and forecast mental wellbeing using the SNAPSHOT study data. I will discuss behavioral and physiological markers and models that may provide early detection of a changing mental health condition.
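
As a stand-in illustration of this kind of marker analysis (not the study's actual models, features, or data), a simple, interpretable cross-validated classifier over daily behavioral features might look like:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical stand-in for SNAPSHOT-style daily features (e.g., sleep
    # duration, sleep regularity, activity, light exposure, screen time).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((300, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(300) > 0).astype(int)

    # An interpretable baseline: cross-validated logistic regression, whose
    # coefficients give a crude ranking of which behaviors track the label
    # (here, a fabricated low/high next-day wellbeing label y).
    model = LogisticRegression()
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())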

Third, I will introduce recent projects that might help people to reflect on and change their behaviors for improving their wellbeing.

I will conclude my talk by presenting my research vision and future directions in measuring, understanding and improving mental wellbeing.

Speaker Biography: Akane Sano is a Research Scientist in the Affective Computing Group at the MIT Media Lab. Her research focuses on mobile health and affective computing. She has been working on measuring and understanding stress, sleep, mood, and performance from long-term ambulatory human data, and on designing intervention systems to help people be aware of their behaviors and improve their health conditions. She completed her PhD at the MIT Media Lab in 2015. Before she came to MIT, she worked for Sony Corporation as a researcher and software engineer on wearable computing, human-computer interaction, and personal health care. Recent awards include the Best Paper Award at the NIPS 2016 Workshop on Machine Learning for Health and the AAAI Spring Symposium Best Presentation Award.


Distinguished Lecturer

April 11, 2017

This talk will introduce a kinematic and dynamic framework for creating a representative model of an individual. Building on results from geometric robotics, a method for formulating a geometric dynamic identification model is derived. This method is validated on a robotic arm, and tested on healthy subjects and subjects with muscular dystrophy to determine its utility as a clinical tool. To capture the kinematics of the human body, we used visual observations from either motion capture or the Kinect camera. To obtain the dynamical parameters of the individual, we used a force plate and force sensors on a robot attached to the human hand. Work in progress uses ultrasound scanning and acoustic myography to estimate muscle strength. Our current representative kinematic and dynamic model outperformed conventional height/mass-scaled models. This allows for rapid, quantitative measurement of an individual, with minimal retraining required for clinicians. These tools are then used to develop a prescriptive model for designing assistive devices. This framework is then used to develop a novel system for human assistance; a prototype device is developed and tested. The prototype is lightweight, uses minimal energy, and can provide 82% augmentation for hammer-curl assistance.
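
The geometric formulation is not reproduced here, but the core idea behind dynamic identification, that rigid-body dynamics are linear in a set of inertial parameters, so identification reduces to least squares over a regressor matrix, can be sketched on a hypothetical one-joint example:

    import numpy as np

    # 1-DoF pendulum: tau = (m*l^2)*qdd + (m*g*l)*sin(q).
    # The dynamics are linear in theta = [m*l^2, m*g*l], so identification
    # reduces to least squares on a "regressor" matrix Y; full arm/human
    # models at scale have the same structure with many more parameters.
    rng = np.random.default_rng(0)
    q = rng.uniform(-np.pi, np.pi, 200)     # measured joint angles
    qdd = rng.uniform(-5.0, 5.0, 200)       # measured joint accelerations
    theta_true = np.array([0.8, 2.5])
    tau = qdd * theta_true[0] + np.sin(q) * theta_true[1]
    tau += 0.05 * rng.standard_normal(200)  # force-sensor noise

    Y = np.column_stack([qdd, np.sin(q)])   # regressor matrix
    theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
    print(theta_hat)  # approximately [0.8, 2.5]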

Speaker Biography: Ruzena Bajcsy (LF’08) received the Master’s and Ph.D. degrees in electrical engineering from Slovak Technical University, Bratislava, Slovak Republic, in 1957 and 1967, respectively, and the Ph.D. in computer science from Stanford University, Stanford, CA, in 1972. She is a Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley, and Director Emeritus of the Center for Information Technology Research in the Interest of Society (CITRIS). Prior to joining Berkeley, she headed the Computer and Information Science and Engineering Directorate at the National Science Foundation. Dr. Bajcsy is a member of the National Academy of Engineering and the Institute of Medicine of the National Academies, as well as a Fellow of the Association for Computing Machinery (ACM) and the American Association for Artificial Intelligence.


April 18, 2017

Speaker Biography: Dr. Chute is the Bloomberg Distinguished Professor of Health Informatics, Professor of Medicine, Public Health, and Nursing at Johns Hopkins University, and Chief Research Information Officer for Johns Hopkins Medicine. He received his undergraduate and medical training at Brown University, internal medicine residency at Dartmouth, and doctoral training in Epidemiology at Harvard. He is Board Certified in Internal Medicine and Clinical Informatics, and a Fellow of the American College of Physicians, the American College of Epidemiology, and the American College of Medical Informatics. His career has focused on how we can represent clinical information to support analyses and inferencing, including comparative effectiveness analyses, decision support, best evidence discovery, and translational research. He has had a deep interest in semantic consistency, harmonized information models, and ontology. His current research focuses on translating basic science information to clinical practice, and how we classify dysfunctional phenotypes (disease). He became founding Chair of Biomedical Informatics at Mayo in 1988, retiring from Mayo in 2014, where he remains an emeritus Professor of Biomedical Informatics. He has been PI on a large portfolio of research including the HHS/Office of the National Coordinator (ONC) SHARP (Strategic Health IT Advanced Research Projects) on Secondary EHR Data Use, the ONC Beacon Community (Co-PI), the LexGrid projects, Mayo’s CTSA Informatics, and several NIH grants including one of the eMERGE centers from NHGRI, which focus upon genome-wide association studies against shared phenotypes derived from electronic medical records. He has been active on many HIT standards efforts and currently chairs the World Health Organization (WHO) ICD-11 Revision.


CS 30th Anniversary:

April 20, 2017

The panelists will reminisce about 50 years of computing at Johns Hopkins University. Members of the Department of Computer Science from over the last 30 years will provide a historical overview of the computer science program and its evolution. They will explore research questions that were considered important 50 years ago and their relevance today, as well as the new questions that have emerged and what we can expect to be important 25 or 50 years from now.

Panelists: Mandell Bellmore, Jon Liebman, Rao Kosaraju, Ben Langmead, and Vladimir Braverman

Moderator: Russell Taylor


April 25, 2017

Relation extraction systems are the backbone of many end-user applications, including question answering and web search. They are also increasingly used in clinical text analysis with EHR data to advance goals in population health. Advances in machine learning have led to new neural models for learning effective representations directly from data. Yet for many tasks, years of research have created hand-engineered features that yield state-of-the-art performance. This is the case in relation extraction, in which a system consumes natural language and produces a structured, machine-readable representation of relationships between entities, such as extracting medication references from clinical notes.
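
As a minimal illustration of the hand-engineered-feature side of this contrast (a hypothetical feature set and example sentence, not Prof. Dredze's systems), a relation extractor's featurizer for a candidate entity pair might look like:

    def entity_pair_features(tokens, i, j):
        # Hand-engineered features for a candidate relation between entity
        # mentions at token positions i and j (i < j): bag-of-words between
        # the mentions, their distance, and immediate context tokens.
        feats = {"bow_between:" + w.lower(): 1 for w in tokens[i + 1:j]}
        feats["dist"] = j - i
        feats["left_context"] = tokens[i - 1].lower() if i > 0 else "<s>"
        feats["right_context"] = tokens[j + 1].lower() if j + 1 < len(tokens) else "</s>"
        return feats

    toks = "Patient was started on metformin for type 2 diabetes".split()
    print(entity_pair_features(toks, 4, 8))  # metformin ... diabetes

A neural model would instead learn its representation of the same entity pair directly from the text; comparing the two regimes is exactly the tension the abstract describes.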

Speaker Biography: Mark Dredze is an Assistant Research Professor in Computer Science at Johns Hopkins University and a research scientist at the Human Language Technology Center of Excellence. He is also affiliated with the Center for Language and Speech Processing and the Center for Population Health Information Technology, and holds a secondary appointment in the Department of Health Sciences Informatics in the School of Medicine. He obtained his PhD from the University of Pennsylvania in 2009. Prof. Dredze has wide-ranging research interests developing machine learning models for natural language processing (NLP) applications. Within machine learning, he develops new methods for graphical models, deep neural networks, topic models and online learning, and has worked in a variety of learning settings, such as semi-supervised learning, transfer learning, domain adaptation and large-scale learning. Within NLP he focuses on information extraction but has considered a wide range of NLP tasks, including syntax, semantics, sentiment and spoken language processing. Beyond his work in core areas of computer science, Prof. Dredze has pioneered new applications of these technologies in public health informatics, including work with social media data, biomedical articles and clinical texts. He has published widely in health journals including the Journal of the American Medical Association (JAMA), the American Journal of Preventive Medicine (AJPM), Vaccine, and the Journal of the American Medical Informatics Association (JAMIA). His work is regularly covered by major media outlets, including NPR, the New York Times and CNN.

Student

May 3, 2017

Numerical simulations present challenges as they reach exascale because they generate petabyte-scale data that cannot be saved without interrupting the simulation due to I/O constraints. Data scientists must be able to reduce, extract, and visualize the data while the simulation is running, which is essential for in-transit and post-hoc analysis. Next-generation supercomputing architectures include burst buffer technology, composed of SSDs, intended primarily for checkpointing the simulation in case a restart is required. In the case of turbulence simulations, this checkpoint provides an opportunity to perform analysis on the data without interrupting the simulation.

First, we present a method of extracting velocity data in high-vorticity regions. This method requires calculating the vorticity of the entire dataset and identifying regions where it exceeds a specified threshold. Next we create a 3D stencil from the values above the threshold and dilate the stencil. Finally we use the stencil to extract velocity data from the original dataset. The result is a dataset that is over an order of magnitude smaller and contains all the data required to study extreme events and visualize vorticity.
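
A minimal sketch of this pipeline using standard tools (NumPy gradients for the curl and SciPy binary dilation for the stencil; the grid layout and dilation margin are assumptions of the sketch, not the thesis's parameters):

    import numpy as np
    from scipy import ndimage

    def extract_high_vorticity(u, v, w, dx, threshold, dilate=3):
        # u, v, w: 3-D velocity components on a uniform grid with spacing dx,
        # assumed indexed [x, y, z]. Vorticity is the curl of velocity,
        # computed here with centered differences.
        du, dv, dw = np.gradient(u, dx), np.gradient(v, dx), np.gradient(w, dx)
        wx = dw[1] - dv[2]  # dW/dy - dV/dz
        wy = du[2] - dw[0]  # dU/dz - dW/dx
        wz = dv[0] - du[1]  # dV/dx - dU/dy
        mag = np.sqrt(wx**2 + wy**2 + wz**2)
        # 3-D stencil of above-threshold cells, dilated to keep a margin of
        # surrounding velocity data around each extreme event.
        stencil = ndimage.binary_dilation(mag > threshold, iterations=dilate)
        return u * stencil, v * stencil, w * stencil, stencil

    rng = np.random.default_rng(0)
    u, v, w = (rng.standard_normal((32, 32, 32)) for _ in range(3))
    uu, vv, ww, s = extract_high_vorticity(u, v, w, dx=0.1, threshold=40.0)
    print(f"fraction of grid retained: {s.mean():.3f}")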

The next extraction utilizes the zfp lossy compressor to compress the entire velocity dataset. The compressed representation is an order of magnitude smaller than the raw simulation data, providing the researcher with approximate data not captured by the velocity extraction. The error introduced is bounded, and the result is visually indistinguishable from the original dataset.
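
A sketch of this step, assuming the zfpy Python bindings for zfp and using its fixed-accuracy mode, which is what bounds the introduced error (the tolerance and data here are placeholders):

    import numpy as np
    import zfpy  # Python bindings for the zfp compressor (pip install zfpy)

    velocity = np.random.default_rng(0).standard_normal((64, 64, 64))  # stand-in field

    # Fixed-accuracy mode: zfp keeps the absolute error below the tolerance.
    compressed = zfpy.compress_numpy(velocity, tolerance=1e-3)
    restored = zfpy.decompress_numpy(compressed)

    print(f"compressed to {len(compressed) / velocity.nbytes:.2%} of raw size")
    print("error bounded:", np.max(np.abs(restored - velocity)) <= 1e-3)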

Finally, we present Myrcene, a modular distributed parallel extraction system. This system allows a data scientist to run the previously mentioned extraction algorithms on a distributed parallel cluster of burst buffer nodes. The extraction algorithms are built as modules for the system and run in parallel on the burst buffer nodes, while a feature extraction coordinator synchronizes the simulation with the extraction process. A data scientist only needs to write one module that performs the extraction or visualization on a single subset of the data, and the system will execute that module at scale on the burst buffers, managing all the communication, synchronization, and parallelism required to perform the analysis.

Speaker Biography: Stephen S. Hamilton is a Lieutenant Colonel in the US Army. In 2008, Stephen received a Master of Science in Software Engineering from Auburn University. He earned his Bachelor of Computer Science from West Point in 1998. He taught at West Point from 2008 to 2011, and was promoted to Assistant Professor in 2010. He is a member of Upsilon Pi Epsilon and Phi Kappa Phi. Stephen will join the Army Cyber Institute in West Point, NY as a Research Scientist in the summer of 2017.


ACM Annual Lecture in Memory of Nathan Krasnopoler

May 8, 2017

Over the last seven years, the CTSRD Project at SRI International and the University of Cambridge has been performing intensive hardware-software co-design to redesign core computer architecture around improved security. This talk will introduce Capability Hardware Enhanced RISC Instructions (CHERI), which extend a conventional RISC processor architecture with support for capabilities — a long-discussed but rarely deployed security approach focused on efficiently implementing the Principle of Least Privilege. CHERI is a hybrid capability architecture, in that it blends these historic ideas with contemporary hardware and software design, yielding vastly improved security with strong software compatibility yet acceptable performance overhead for fine-grained memory protection and mitigation — and orders-of-magnitude performance improvement for compartmentalised software designs. These techniques directly support vulnerability mitigation for the C and C++ programming languages, interfering with exploit techniques from buffer overflows to ROP and JOP, as well as protecting against future unknown attack techniques via scalable application-level privilege reduction. Prototyped via hardware-software co-design, and evaluated on FPGA with support from DARPA, the CHERI processor prototype is able to run adapted versions of the FreeBSD operating system (CheriBSD) and open-source application stack, and is targeted by an extended version of the Clang/LLVM compiler. This talk introduces the CHERI architecture and potential applications, and will also describe current research directions.
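
To make the capability idea concrete, here is a conceptual model in Python: a pointer that carries its own bounds and permissions, that can only be narrowed (never widened), and that traps on out-of-bounds access. This illustrates the principle of least privilege only; real CHERI capabilities are tagged hardware values enforced by the ISA, not library objects like this:

    class Capability:
        # Conceptual model of a CHERI-style capability: a pointer bundled
        # with bounds and permissions, checked on every use.
        def __init__(self, mem, base, length, perms=frozenset({"r", "w"})):
            self.mem, self.base, self.length, self.perms = mem, base, length, perms

        def restrict(self, offset, length, perms):
            # Monotonicity: a derived capability may only shrink its
            # bounds and permissions, never grow them.
            assert 0 <= offset and offset + length <= self.length, "bounds grow"
            assert perms <= self.perms, "permissions grow"
            return Capability(self.mem, self.base + offset, length, perms)

        def load(self, i):
            assert "r" in self.perms and 0 <= i < self.length, "capability fault"
            return self.mem[self.base + i]

    mem = bytearray(1024)
    root = Capability(mem, 0, 1024)
    buf = root.restrict(100, 16, frozenset({"r"}))  # least privilege: 16 bytes, read-only
    print(buf.load(0))   # in bounds: fine
    try:
        buf.load(16)     # one past the end
    except AssertionError as e:
        print("trap:", e)  # a capability fault, not a silent buffer overflow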

Speaker Biography: Dr Robert N. M. Watson is a University Senior Lecturer (Associate Professor) at the University of Cambridge Computer Laboratory, where he works across the areas of security, operating systems, and computer architecture. As Principal Investigator of the CTSRD project, he led work on the CHERI architecture from the “ISA up”, designing the hardware-software security model, and has led the CHERI software development team working on OS support, compiler support, and applications. He also has research interests in network-stack design, OS tracing and profiling tools, and capability-based operating systems, including the Capsicum security model now deployed in FreeBSD. In prior industrial research, he developed the MAC Framework employed for OS kernel access-control extensibility and sandboxing in FreeBSD, Mac OS X, iOS, and Junos. He is a co-author of The Design and Implementation of the FreeBSD Operating System (Second Edition).