Fall 2016

Student

September 9, 2016

Random Forests are a convenient option for non-parametric regression. I will discuss a novel approach to error estimation using Random Forests; the relation of Random Forest regression to kernel regression, which offers a principled approach to configuration-parameter selection that results in lower regression error; and algorithmic considerations that yield asymptotically faster training than the de facto standard R implementation.
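
To make the connection to kernel regression concrete, here is a minimal Python sketch (not the speaker's code; scikit-learn and the toy data are assumptions) of the standard "kernel view" of a Random Forest, in which the prediction at a query point is a weighted average of training responses with weights induced by shared leaf membership:

```python
# Illustrative sketch of the kernel view of Random Forest regression; this is not
# the speaker's implementation, and scikit-learn is assumed only for convenience.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def rf_kernel_predict(forest, X_train, y_train, X_query):
    """Average training targets using the forest-induced kernel weights."""
    train_leaves = forest.apply(X_train)   # (n_train, n_trees) leaf indices
    query_leaves = forest.apply(X_query)   # (n_query, n_trees) leaf indices
    preds = np.empty(len(X_query))
    for q, q_leaves in enumerate(query_leaves):
        # weight_i = mean over trees of 1{i shares a leaf with the query} / leaf size
        same_leaf = train_leaves == q_leaves            # (n_train, n_trees)
        leaf_sizes = same_leaf.sum(axis=0)              # training points per matching leaf
        weights = (same_leaf / np.maximum(leaf_sizes, 1)).mean(axis=1)
        preds[q] = weights @ y_train
    return preds

X_query = np.linspace(-3, 3, 5).reshape(-1, 1)
print(rf_kernel_predict(forest, X, y, X_query))
print(forest.predict(X_query))  # close, but not identical, because of bootstrap resampling
```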

Speaker Biography: Samuel Carliles is a graduate student in the Department of Computer Science. He has a BS and an MS in Computer Science from Johns Hopkins, and currently works as a Data Scientist at AppNexus, Inc.

Student

September 22, 2016

In recent years, advances in technology have enabled researchers to ask new questions predicated on the collection and analysis of big datasets that were previously too large to study. More specifically, many fundamental questions in neuroscience require studying brain tissue at a large scale to discover emergent properties of neural computation, consciousness, and etiologies of brain disorders. A major challenge is constructing larger, more detailed maps (e.g., structural wiring diagrams) of the brain, known as connectomes.

Although raw data exist, challenges remain in both algorithm development and scalable image analysis to enable access to the knowledge inside. This dissertation develops, combines and tests state-of-the-art algorithms to estimate graphs and glean other knowledge across the six orders of magnitude from millimeter-scale magnetic resonance imaging to nanometer-scale electron microscopy.

This work enables scientific discovery across the community and contributes to the tools and services offered by NeuroData and the Open Connectome Project. Contributions include creating, optimizing and evaluating the first known fully-automated brain graphs in electron microscopy data and magnetic resonance imaging data; pioneering approaches to generate knowledge from X-ray tomography imaging; and identifying and solving a variety of image analysis challenges associated with building graphs suitable for discovery. These methods were applied across diverse datasets to answer questions at scales not previously explored.

Speaker Biography: William Gray Roncal is a Project Manager in the Research and Exploratory Development Department at the Johns Hopkins University Applied Physics Laboratory (APL). In 2005, Will received a Master of Electrical Engineering from the University of Southern California. He earned his Bachelor of Electrical Engineering degree from Vanderbilt University in 2003. He is a member of the Society for Neuroscience, Eta Kappa Nu, and Tau Beta Pi.

Will applies algorithms to solve big data challenges at the intersection of multiple disciplines. Although he has experience in diverse environments ranging from undersea to outer space, he currently works in connectomics, an emerging discipline within neuroscience that seeks to create a high-resolution map of the brain.

October 4, 2016

In this talk, I will present the design of the GNU Name System (GNS), a fully decentralized and censorship-resistant name system. GNS uses cryptography to provide a privacy-enhancing alternative to DNS and existing public key infrastructures (such as X.509 certificate authorities), while giving users the desirable property of memorable names. The design of GNS incorporates the possibility of integration and coexistence with DNS.

GNS builds on ideas from the Simple Distributed Security Infrastructure (SDSI), addressing a central issue with the decentralized mapping of secure identifiers to memorable names: namely the impossibility of providing a global, secure and memorable mapping without a trusted authority. GNS uses the transitivity in the SDSI design to replace the trusted root with secure delegation of authority, thus making petnames useful to other users, while operating under the strong adversary model represented by state actors.
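
As a purely conceptual illustration of delegation-based naming, here is a toy Python sketch; the zone layout, record format, and names are hypothetical, and it omits the cryptography that GNS relies on:

```python
# Toy sketch of SDSI-style name resolution by delegation; the zone layout, record
# format and names here are hypothetical and omit everything cryptographic in GNS.
ZONES = {
    "local":      {"bob": ("delegate", "zone_bob")},
    "zone_bob":   {"www": ("value", "192.0.2.7"), "carol": ("delegate", "zone_carol")},
    "zone_carol": {"blog": ("value", "203.0.113.5")},
}

def resolve(name, start_zone="local"):
    """Resolve a dotted name right-to-left, following delegation records."""
    zone = start_zone
    for label in reversed(name.split(".")):
        kind, target = ZONES[zone][label]
        if kind == "value":
            return target            # terminal record: return the stored value
        zone = target                # delegation: continue in the delegated zone
    raise KeyError("name resolves to a zone, not a value")

print(resolve("www.bob"))          # 192.0.2.7, via the local petname "bob"
print(resolve("blog.carol.bob"))   # 203.0.113.5, two delegation hops away
```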

Speaker Biography: Christian Grothoff leads a research team at Inria, a French national institute for applied computer science and mathematics research. He maintains GNUnet, an experimental network designed to provide privacy and security without the need for trusted third parties. He earned his PhD in computer science from UCLA, an M.S. in computer science from Purdue University, and a Diplom in mathematics from the University of Wuppertal. He is also a freelance journalist reporting on technology and national security.

Student

October 4, 2016

Context-aware applications are programs that improve their performance by adapting to current conditions, such as the user’s behavior, networking conditions, and charging opportunities. In many cases, the user’s location is an excellent predictor of the context. Thus, by predicting the user’s future location, we can predict the future conditions.

In this talk, I will discuss the techniques that we developed to identify and predict the user’s location over the next 24 hours with a minimum median accuracy of 80%. I will start by describing the user study that we conducted and some salient conclusions from our analysis. These include our observation that cell phones sample the towers in their vicinity, which makes raw cell towers inappropriate for use as landmarks. Motivated by this observation, I will then present two techniques for processing the cell tower traces so that landmarks more closely correspond to locations, and cell tower transitions more closely correspond to user movement.

Next, I will present our prediction engine, which is based on simple sampling distributions of the form f(t, c), where t is the predicted tower and c is a set of conditions. The conditions that we considered include the time of day, the day of the week, the current regime, and the current tower. Our family of algorithms, called TomorrowToday, achieves 89% prediction precision across all prediction trials for predictions 30 minutes in the future. Precision decreases slowly for predictions further in the future and levels off for predictions approximately 4 hours in the future, at which point we achieve 80% prediction precision across all prediction trials up to 24 hours in the future. This represents a significant improvement over NextPlace, a well-cited prediction algorithm based on non-linear time series, which achieves approximately 80% prediction precision (self-reported) for predictions 30 minutes in the future. Moreover, unlike our predictors, which attempt every prediction trial, NextPlace attempts only 7% of the prediction trials on our data set.
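
For intuition, a minimal sketch of a sampling-distribution predictor of this general form is shown below; the class name and the condition encoding are hypothetical, and the actual TomorrowToday algorithms are more sophisticated:

```python
# Illustrative sketch only: an empirical conditional distribution f(t | c) over
# towers t given a condition tuple c, here c = (hour of day, day of week, current tower).
from collections import Counter, defaultdict

class TowerPredictor:
    def __init__(self):
        self.counts = defaultdict(Counter)   # condition tuple -> Counter over next towers

    def observe(self, hour, weekday, current_tower, next_tower):
        """Record one observed transition under the given conditions."""
        self.counts[(hour, weekday, current_tower)][next_tower] += 1

    def predict(self, hour, weekday, current_tower):
        """Return (most likely tower, empirical probability), or None if unseen."""
        dist = self.counts.get((hour, weekday, current_tower))
        if not dist:
            return None
        tower, count = dist.most_common(1)[0]
        return tower, count / sum(dist.values())

# Usage: replay a trace of (hour, weekday, tower, next_tower) observations,
# then query the conditions expected at the prediction horizon.
predictor = TowerPredictor()
predictor.observe(9, "Mon", "tower_A", "tower_B")
predictor.observe(9, "Mon", "tower_A", "tower_B")
predictor.observe(9, "Mon", "tower_A", "tower_C")
print(predictor.predict(9, "Mon", "tower_A"))   # ('tower_B', 0.666...)
```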

Speaker Biography: Neal is a PhD student at Johns Hopkins University, advised by Christian Grothoff. Neal’s main academic interests are in systems and security. While finishing his PhD, he worked part-time on GnuPG, a widely used encryption and data authentication program.

Distinguished Lecturer

November 8, 2016

This talk will explain how computer architects contribute to information technology that is transforming our world. It will present computer architecture basics and trends since the first microprocessor in the mid-1970s. It will then discuss how present challenges to Moore’s Law will open up new directions for computer systems, including architecture as infrastructure, energy first, impact of emerging technologies, and cross-layer opportunities. Reference: CCC “21st Century Computer Architecture.”

Speaker Biography: Mark D. Hill is the Gene M. Amdahl and John P. Morgridge Professor of Computer Sciences at the University of Wisconsin-Madison. Prof. Hill is a senior computer architect interested in parallel-computer system design, memory system design, and computer simulation. He developed the 3C cache miss taxonomy (compulsory, capacity, and conflict) and co-developed “sequential consistency for data-race-free programs,” which serves as a foundation of the C++ and Java memory models. He is a Fellow of the IEEE and the ACM, a co-inventor on 35 patents, and has taught more than 1,000 students, with 40 Ph.D. progeny so far. Hill has a PhD in computer science from the University of California, Berkeley and currently serves as Vice Chair of the Computing Community Consortium.

Student

November 9, 2016

Intraoperative 2D and 3D imaging using mobile C-arms combined with advanced image registration algorithms could overcome many of the limitations of conventional surgical navigation, streamline workflow, and enable novel applications in image-guided surgery.

This talk focuses on one particular premise in my PhD dissertation – demonstrating how to extend the utility of fluoroscopic intraoperative imaging systems (conventionally limited to providing visual feedback to the surgeon) to accurately guide and assess the delivery of various surgical devices. The solution involves a 3D-2D registration algorithm that leverages prior knowledge of the patient and surgical components to obtain quantitative assessment of 3D shape and pose from a small number of 2D radiographs obtained during surgery.
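
The following schematic sketch illustrates the kind of 3D-2D objective described above; it is not the dissertation's algorithm, and the pinhole model, known point correspondences, and parameter values are simplifying assumptions:

```python
# Schematic 3D-2D pose estimation sketch; the pinhole model, correspondences and
# parameter values are simplifying assumptions, not the presented method.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, rotvec, translation, focal=1000.0):
    """Rigidly transform the model points, then apply a simple pinhole projection."""
    cam = Rotation.from_rotvec(rotvec).apply(points_3d) + translation
    return focal * cam[:, :2] / cam[:, 2:3]

def reprojection_residuals(params, model_points, observed_2d):
    rotvec, translation = params[:3], params[3:]
    return (project(model_points, rotvec, translation) - observed_2d).ravel()

# Synthetic check: recover a known pose from its own (noise-free) projections.
model = np.random.default_rng(0).uniform(-20, 20, size=(30, 3))   # mm
true_pose = np.array([0.10, -0.05, 0.02, 5.0, -3.0, 400.0])       # rotation vector + translation
observed = project(model, true_pose[:3], true_pose[3:])

fit = least_squares(reprojection_residuals, x0=np.array([0, 0, 0, 0, 0, 350.0]),
                    args=(model, observed))
print(np.round(fit.x, 3))   # should approximately match true_pose
```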

The presented system is evaluated in application to pedicle screw placement, where it can (1) provide guidance of surgical devices, analogous to an external tracking system; and (2) provide intraoperative quality assurance of the surgical product, potentially reducing postoperative morbidity and the rate of revision surgery. Key aspects that affect the performance of the proposed system will be discussed, including optimal selection of radiographic views, minimization of radiation dose, parametric modeling of the surgical components to handle limited shape and composition information, and modeling of component deformation.

Speaker Biography: Ali Uneri is a Ph.D. candidate in Computer Science at Johns Hopkins University. His doctoral research was carried out at the I-STAR Lab in Biomedical Engineering under the supervision of Jeffrey H. Siewerdsen and Russell H. Taylor. His Ph.D. dissertation encompasses: (1) an extensible software platform for integrating navigational tools with cone-beam CT, including fast registration algorithms using parallel computation on general-purpose GPUs; (2) a 3D-2D registration approach that leverages knowledge of interventional devices for surgical guidance and quality assurance; and (3) a hybrid 3D deformable registration approach using image intensity and feature characteristics to resolve gross deformation in cone-beam CT guidance of thoracic surgery. Prior to joining Johns Hopkins University, he obtained an M.Sc. in Bioengineering from Imperial College London and worked at the Acrobot Company on the development of a surgical robot designed to assist in hip and knee replacement procedures.

Distinguished Lecturer

November 17, 2016

Advances in computer and information science and engineering are providing unprecedented opportunities for research and education. My talk will begin with an overview of CISE activities and programs at the National Science Foundation and include a discussion of current trends that are shaping the future of our discipline. I will also discuss the opportunities as well as the challenges that lie ahead for our community and for CISE.

Speaker Biography: Dr. Jim Kurose is the Assistant Director of the National Science Foundation (NSF) for the Directorate of Computer and Information Science and Engineering (CISE). Dr. Kurose also serves as co-chair of the Networking and Information Technology Research and Development Subcommittee of the National Science and Technology Council Committee on Technology, facilitating the coordination of networking and information technology research and development efforts across Federal agencies. He is on leave from the University of Massachusetts Amherst, where he has served as Distinguished Professor at the School of Computer Science since 2004. His research interests include network protocols and architecture, network measurement, multimedia communication, and modeling and performance evaluation. Dr. Kurose received his Ph.D. in computer science from Columbia University and a Bachelor of Arts degree in physics from Wesleyan University. He is a Fellow of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE).

December 1, 2016

In this talk, I will discuss some of the more general lessons we’ve learned about visualization and data science from domain collaborations, with a focus on work on literary scholarship. These projects highlight that data analysis is more than just running a machine learning algorithm to build an accurate model. Working with data is a process with many stages, involving many different kinds of stakeholders who need to do many different tasks. I will present a framework for thinking about this range of concerns and use it to consider a number of approaches we’ve explored for helping people use data analysis. I will then discuss a number of visualization tools we’ve built that address problems throughout the data analysis process, including tools for building, exploring, comparing, and validating models.

Many of my examples will come from the Visualizing English Print project, an effort to enable literary scholars to bring scalable, data-centric approaches to the study of English literature of the Early Modern period (roughly 1470-1700, including Shakespeare). However, this talk will emphasize the general lessons rather than the specifics of the domain.

Speaker Biography: Michael Gleicher is a Professor in the Department of Computer Sciences at the University of Wisconsin, Madison. Prof. Gleicher is founder of the Department’s Visual Computing Group. His research interests span the range of visual computing, including data visualization, robotics, image and video processing tools, virtual reality, and character animation. His current foci are human data interaction and human robot interaction. Prior to joining the university, Prof. Gleicher was a researcher at The Autodesk Vision Technology Center and in Apple Computer’s Advanced Technology Group. He earned his Ph.D. in Computer Science from Carnegie Mellon University, and holds a B.S.E. in Electrical Engineering from Duke University. In 2013-2014, he was a visiting researcher at INRIA Rhone-Alpes. Prof. Gleicher is an ACM Distinguished Scientist.

Student seminar

December 5, 2016

Surgical educators have recommended individualized coaching for acquisition, retention and improvement of expertise in technical skills. Such one-on-one coaching is limited to institutions that can afford surgical coaches and is certainly not feasible at national and global scales. We hypothesize that automated methods that model intra-operative video, the surgeon’s hand and instrument motion, and sensor data can provide effective and efficient individualized coaching. With the advent of instrumented operating rooms and training laboratories, access to such large-scale intra-operative data has become feasible. Previous methods for automated skill assessment present surgeons with an overall evaluation at the task (global) level, without directed feedback or error analysis. Demonstration, if present at all, takes the form of fixed instructional videos, while deliberate practice is completely absent from automated training platforms. We believe that an effective coach should: demonstrate expert behavior (how do I do it correctly?), evaluate trainee performance (how did I do?) at the task and segment levels, critique errors and deficits (where and why was I wrong?), recommend deliberate practice (what do I do to improve?), and monitor skill progress (when do I become proficient?).

In this thesis, we present new methods and solutions towards these coaching interventions in different training settings, namely virtual reality simulation, bench-top simulation, and the operating room. First, we outline a summarization-based approach for surgical phase modeling using various sources of intra-operative procedural data, such as system events (sensors) and crowdsourced surgical activity context. Second, we develop a new scoring method to evaluate task segments using rankings derived from pairwise comparisons of performances obtained via crowdsourcing (see the sketch below). Third, we implement a real-time feedback and teaching framework using virtual reality simulation to present teaching cues and deficit metrics that are targeted at critical learning elements of a task. Finally, we present an integration of the above components of task progress detection, segment-level evaluation and real-time feedback towards the first end-to-end automated virtual coach for surgical training.
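
For intuition only, one standard way to derive scores from pairwise comparisons is a Bradley-Terry model fitted by minorization-maximization, sketched below; the thesis's scoring method may differ, and the example data are hypothetical:

```python
# Hedged sketch: Bradley-Terry strengths from pairwise comparisons via the classic
# minorization-maximization update. Not necessarily the thesis's scoring method.
from collections import Counter
from itertools import chain

def bradley_terry(comparisons, iterations=100):
    """comparisons: list of (winner, loser) pairs. Returns {item: relative strength}."""
    items = set(chain.from_iterable(comparisons))
    wins = Counter(winner for winner, _ in comparisons)
    pair_counts = Counter(frozenset(pair) for pair in comparisons)
    strength = {i: 1.0 for i in items}
    for _ in range(iterations):
        new = {}
        for i in items:
            denom = sum(
                pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
                for j in items
                if j != i and frozenset((i, j)) in pair_counts
            )
            new[i] = wins[i] / denom if denom > 0 else strength[i]
        total = sum(new.values())
        strength = {i: s / total for i, s in new.items()}   # rescale each iteration
    return strength

# Example: segment B beats A twice and C once; C beats A once -> B ranked highest.
votes = [("B", "A"), ("B", "A"), ("B", "C"), ("C", "A")]
print(sorted(bradley_terry(votes).items(), key=lambda kv: -kv[1]))
```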

Speaker Biography: Anand Malpani was born in Mumbai, India. He received his B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT) Bombay in 2010. In the summer of 2009, he undertook a research project at the Institut de Recherche en Communications et Cybernétique de Nantes under the guidance of Vincent Ricordel (Image and Video-Communication research group), where he developed and compared various tracking methods for echocardiogram sequences. He joined the Ph.D. program in Computer Science at the Johns Hopkins University in 2010 and worked under the Language of Surgery project umbrella. His dissertation, under the guidance of Gregory D. Hager, focused on surgical education and simulation-based training. During this work, he developed data analytics for delivering automated surgical coaching in collaboration with multiple surgical faculty at the Johns Hopkins School of Medicine. He was awarded the Intuitive Surgical Student Fellowship in 2013 and received a Link Foundation Modeling, Training and Simulation Fellowship in 2015 to advance surgical simulation-based training. In the summer of 2015, he was a research intern on the Simulation team developing the da Vinci Skills Simulator at Intuitive Surgical Inc. (Sunnyvale, CA).

Student

December 6, 2016

Accurate localization of the surgical target and adjacent normal anatomy is essential to safe and effective surgery. Preoperative computed tomography (CT) and/or magnetic resonance (MR) images offer exquisite visualization of anatomy and a valuable basis for surgical planning. Multimodality deformable image registration (DIR) can be used to bring preoperative images and planning information to a geometrically resolved anatomical context presented in intraoperative CT or cone-beam CT (CBCT). Such capability promises to improve reckoning of the surgical plan relative to the intraoperative state of the patient and thereby improve surgical precision and safety. This talk focuses on advanced DIR developed for key image guidance applications in otolaryngology and spinal neurosurgery.

For transoral robotic base-of-tongue surgery, a hybrid DIR method integrating a surface-based initialization and a shape-driven Demons algorithm with multi-scale optimization was developed to resolve the large deformation associated with the operative setup (gross deformation > 30 mm). The method yielded registration accuracy of ~1.7 mm in cadaver studies. For orthopaedic spine surgery, a multiresolution free-form DIR method was developed with constraints designed to maintain the rigidity of bones within otherwise deformable transformations of surrounding soft tissue. Validation in cadaver studies demonstrated registration accuracy of ~1.4 mm and preservation of rigid-body morphology (near-ideal values of dilatation and shear) and topology (no tissue folding or tearing).

For spinal neurosurgery, where preoperative MR is the preferred modality for delineation of tumors, the spinal cord, and nervous and vascular systems, a multimodality DIR method was developed to realize viscoelastic diffeomorphisms between MR and intraoperative CT using a modality-independent neighborhood descriptor (MIND) and a Huber metric in a multiresolution Demons optimization. Clinical studies demonstrated sub-voxel registration accuracy (< 2 mm) and diffeomorphism of the estimated deformation (sub-voxel invertibility error = 0.001 mm and positive Jacobian determinants). These promising advances could facilitate more reliable visualization of preoperative planning data within up-to-date intraoperative CT or CBCT in support of safer, high-precision surgery.
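
For illustration, the minimal sketch below (not the presented pipeline) shows one of the quality checks mentioned above: verifying that an estimated deformation is locally invertible by confirming that the Jacobian determinant of the mapping x -> x + u(x) is positive at every voxel. The field layout and voxel spacing are assumptions:

```python
# Illustrative diffeomorphism check: positive Jacobian determinants of x -> x + u(x).
# Not the presented registration method; field shape and spacing are assumed.
import numpy as np

def jacobian_determinants(displacement, spacing=(1.0, 1.0, 1.0)):
    """displacement: (3, Z, Y, X) field in mm. Returns the per-voxel det(J) array."""
    grads = np.stack(
        [np.stack(np.gradient(displacement[i], *spacing), axis=0) for i in range(3)],
        axis=0,
    )                                           # grads[i, j] = d u_i / d x_j
    jac = np.moveaxis(grads, (0, 1), (-2, -1))  # reshape to (Z, Y, X, 3, 3)
    jac = jac + np.eye(3)                       # J = I + grad(u)
    return np.linalg.det(jac)

# Example: a smooth synthetic displacement field on a small grid.
z, y, x = np.meshgrid(*[np.linspace(0, 1, 16)] * 3, indexing="ij")
u = np.stack([0.05 * np.sin(np.pi * x), 0.05 * np.sin(np.pi * y), np.zeros_like(z)])
dets = jacobian_determinants(u, spacing=(1 / 15, 1 / 15, 1 / 15))  # grid step on [0, 1]
print("min det(J):", dets.min(),
      "-> locally invertible" if dets.min() > 0 else "-> folding detected")
```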

Speaker Biography: Sureerat Reaungamornrat is a PhD candidate in Computer Science at Johns Hopkins University working under the supervision of Profs. Jeffrey H. Siewerdsen and Russell H. Taylor. Her research focuses on the development of new deformable 3D image registration methods for image-guided interventions. Her work earned the 2014 and 2016 SPIE Young Scientist awards and the 2016 Robert Wagner All-Conference Best Student Paper award. She received her Master of Science in Engineering from Johns Hopkins University for her work on a novel surgical tracking configuration for mobile C-arm CBCT.

Student

December 13, 2016

The task of query-by-example search is to retrieve, from among a collection of data, the observations most similar to a given query. A common approach to this problem is based on viewing the data as vertices in a graph in which edge weights reflect similarities between observations. Errors arise in this graph-based framework both from errors in measuring these similarities and from approximations required for fast retrieval. In this thesis, we use tools from graph inference to analyze and control the sources of these errors. We establish novel theoretical results related to representation learning and to vertex nomination, and use these results to control the effects of model misspecification, noisy similarity measurement and approximation error on search accuracy. We present a state-of-the-art system for query-by-example audio search in the context of low-resource speech recognition, which also serves as an illustrative example and testbed for applying our theoretical results.
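
As a minimal illustration of the graph-based view, the sketch below builds a similarity graph with Gaussian edge weights and retrieves the vertices closest to a query; the features and kernel are stand-ins for the thesis's audio representations and similarity measures:

```python
# Hedged sketch of the graph-based framing, not the thesis system: observations become
# vertices, edge weights are Gaussian similarities, and retrieval returns the vertices
# with the strongest similarity to the query.
import numpy as np

def similarity_graph(X, sigma=1.0):
    """Dense weighted adjacency: w_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def query_by_example(X, query, k=3, sigma=1.0):
    """Return indices of the k observations most similar to the query vector."""
    sims = np.exp(-((X - query) ** 2).sum(-1) / (2 * sigma ** 2))
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 8))      # 100 observations with 8-dimensional features
W = similarity_graph(X)                # weighted graph used for inference / vertex nomination
print(W.shape)                         # (100, 100) adjacency matrix
print(query_by_example(X, X[0], k=3))  # index 0 retrieves itself first
```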

Speaker Biography: Keith Levin is a Ph.D. candidate in Computer Science at Johns Hopkins University, where he works on graph inference, with applications to speech processing and neuroscience. Keith received B.S. degrees in Psychology and Linguistics from Northeastern University in 2011. Prior to joining Johns Hopkins University, he worked as a data analyst at BBN Technologies.