2016-17 PhD Graduates

Benjamin R. Mitchell

Defense Date: Friday, March 10, 2017
Title: The Spatial Inductive Bias of Deep Learning
Primary Advisor: John Sheppard
Abstract: In the past few years, Deep Learning has become the method of choice for producing state-of-the-art results on machine learning problems involving images, text, and speech. The explosion of interest in these techniques has resulted in a large number of successful applications, but relatively few studies exploring the nature of and reason for that success. This dissertation is an examination of the inductive biases that underpin the success of deep learning, focusing in particular on the success of Convolutional Neural Networks (CNNs) on image data. We show that CNNs rely on a type of spatial structure being present in the data, and then describe ways this type of structure can be quantified. We further demonstrate that a similar type of inductive bias can be explicitly introduced into a variety of other techniques, including non-connectionist ones. The result is both a better understanding of why deep learning works, and a set of tools that can be used to improve the performance of a wide range of machine learning techniques on these tasks.
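
The reliance on spatial structure is easy to see in a toy experiment: a fixed random permutation of pixel positions preserves every pixel value yet destroys locality, which is exactly the structure a CNN's local filters exploit. The minimal sketch below (ours, for illustration; not code from the dissertation) measures a crude locality statistic before and after such a permutation:

    import numpy as np

    def local_smoothness(img):
        """Mean absolute difference between adjacent pixels
        (lower = more local spatial structure)."""
        dh = np.abs(np.diff(img, axis=0)).mean()
        dv = np.abs(np.diff(img, axis=1)).mean()
        return (dh + dv) / 2.0

    rng = np.random.default_rng(0)
    img = rng.random((32, 32)).cumsum(axis=0).cumsum(axis=1)  # synthetic smooth image
    img = (img - img.min()) / (img.max() - img.min())

    perm = rng.permutation(img.size)                # fixed relocation of pixels
    shuffled = img.ravel()[perm].reshape(img.shape)

    print("smoothness, original:", local_smoothness(img))       # small
    print("smoothness, permuted:", local_smoothness(shuffled))  # much larger

A CNN trained on the permuted images typically loses most of its advantage over structure-agnostic models, since per-pixel information is intact but the locality its inductive bias assumes is gone.
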
Biography: Benjamin R. Mitchell received a B.A. in Computer Science from Swarthmore College in 2005, and an M.S.E. in Computer Science from the Johns Hopkins University in 2008. He received a certification from the JHU Preparing Future Faculty Teaching Academy in 2016. He worked as a Teaching Assistant and a Research Assistant from 2005 to 2008, and he has been an Instructor at the Johns Hopkins University since 2009. He has taught courses including Introductory Programming in Java, Intermediate Programming in C/C++, Artificial Intelligence, and Computer Ethics. In 2015, he received the Professor Joel Dean Award for Excellence in Teaching, and he was a finalist for the Whiting School of Engineering Excellence in Teaching Award in 2016. In addition to the field of machine learning, he has peer-reviewed publications in fields including operating systems, mobile robotics, medical robotics, and semantic modeling.

Pottayil Harisanker Menon

Defense Date: Monday, February 20, 2017
Title: Safe, Fast and Easy: Towards Scalable Scripting Languages
Primary Advisor: Scott Smith
Abstract: Scripting languages are immensely popular in many domains. They are characterized by a number of features that make it easy to develop small applications quickly: flexible data structures, simple syntax, and intuitive semantics. However, they are less attractive at scale: scripting languages are harder to debug, difficult to refactor, and suffer performance penalties. Many research projects have tackled the issues of safety and performance for existing scripting languages, with mixed results: the considerable flexibility offered by their semantics also makes them significantly harder to analyze and optimize. Previous research from our lab has led to the design of a typed scripting language built specifically to be flexible without losing static analyzability. In this dissertation, we present a framework to exploit this analyzability, with the aim of producing a more efficient implementation. Our approach centers around the concept of adaptive tags: specialized tags attached to values that represent how they are used in the current program. Our framework abstractly tracks the flow of deep structural types in the program, and thus can efficiently tag them at runtime. Adaptive tags allow us to tackle key issues at the heart of the performance problems of scripting languages: the framework is capable of performing efficient dispatch in the presence of flexible structures.
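
To make the tag-based dispatch idea concrete, here is a deliberately simplified sketch (the names and details are ours, not the dissertation's): each value carries a small tag summarizing the structural shape the program actually uses, so a call site dispatches with a single table lookup instead of re-checking structure on every call:

    # A record's tag is the set of field names a call site cares about.
    def tag_of(record, fields=("x", "y")):
        return frozenset(f for f in fields if f in record)

    # Dispatch table: one specialized handler per tag, chosen once per shape.
    HANDLERS = {
        frozenset({"x", "y"}): lambda r: r["x"] + r["y"],  # 2-D record
        frozenset({"x"}):      lambda r: r["x"],           # 1-D record
    }

    def magnitude(record):
        # One cheap tag lookup replaces repeated structural checks per call.
        return HANDLERS[tag_of(record)](record)

    print(magnitude({"x": 3, "y": 4}))  # 7
    print(magnitude({"x": 5}))          # 5
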
Biography: Pottayil Harisanker Menon is a Ph.D. candidate in Computer Science at the Johns Hopkins University. He is advised by Prof. Scott Smith and is a member of the Programming Languages Lab. Hari's current research focuses on creating flexible languages and making them run fast. His general research interests include the design of programming languages, type systems and compilers.

Greg Vorsanger

Defense Date: Thursday, December 15, 2016
Title: Streaming Algorithms for High Throughput Massive Datasets
Primary Advisor: Vova Braverman
Abstract: In the last 20 years, the field of streaming algorithms has looked to address the theoretical limitations of processing massive data streams. While theoretical results are typically stated as generally as possible, many problems are formally hard in the general case. In this thesis we show that using theoretical methods to design algorithms for practical problems is a valuable problem-solving methodology for large datasets. By focusing on specific cases rather than broad, general ones, this method can provide novel and useful approaches to computer science problems that may be hard to solve in the general setting. In this talk I will cover the material from my graduate work, with a focus on my current project, a streaming problem related to clustering which we call "Fuzzy Heavy Hitters". This problem focuses on grouping high-dimensional data in an online, high-throughput setting. Topics covered in the thesis include the effect of subsampling in the streaming model, measuring the independence of two streams, and large frequency moments.
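
As a rough illustration of the flavor of this problem (the sketch below is ours, not the algorithm from the thesis), a one-pass routine can group incoming high-dimensional points by the nearest existing center within a radius, keeping the table bounded with Misra-Gries-style decrements:

    import numpy as np

    def fuzzy_heavy_hitters(stream, radius=1.0, k=10):
        """Toy sketch: keep at most k centers; a point within `radius` of a
        center increments that center's count, otherwise it opens a new center;
        when the table is full, all counts are decremented (Misra-Gries style)
        and empty centers are evicted."""
        centers, counts = [], []
        for x in stream:
            if centers:
                d = [np.linalg.norm(x - c) for c in centers]
                j = int(np.argmin(d))
                if d[j] <= radius:
                    counts[j] += 1
                    continue
            if len(centers) < k:
                centers.append(x); counts.append(1)
            else:
                counts = [c - 1 for c in counts]
                keep = [i for i, c in enumerate(counts) if c > 0]
                centers = [centers[i] for i in keep]
                counts = [counts[i] for i in keep]
        return list(zip(centers, counts))

    rng = np.random.default_rng(1)
    # Two dense clusters plus scattered noise, in 5 dimensions.
    stream = [rng.normal(0, 0.1, 5) for _ in range(500)] + \
             [rng.normal(5, 0.1, 5) for _ in range(300)] + \
             [rng.uniform(-10, 10, 5) for _ in range(50)]
    stream = [stream[i] for i in rng.permutation(len(stream))]
    for c, n in fuzzy_heavy_hitters(stream, radius=1.0):
        print(np.round(c, 1), n)   # the two clusters dominate the counts
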
Biography: Greg Vorsanger is a Ph.D. candidate in Computer Science at Johns Hopkins University. He studies streaming algorithms, solving computational problems for massive datasets using small amounts of computing resources. Greg received his B.S. in Computer Engineering and M.S. in Security Informatics at Johns Hopkins. In addition to his graduate work, Greg works at Raytheon BBN Technologies as a Staff Scientist.

Keith Levin

Defense Date: Tuesday, December 13, 2016
Title: Graph Inference with Applications to Low‐Resource Audio Search and Indexing
Primary Advisor: Ben Van Durme
Abstract: The task of query‐by‐example search is to retrieve, from among a collection of data, the observations most similar to a given query. A common approach to this problem is based on viewing the data as vertices in a graph in which edge weights reflect similarities between observations. Errors arise in this graph‐based framework both from errors in measuring these similarities and from approximations required for fast retrieval. In this thesis, we use tools from graph inference to analyze and control the sources of these errors. We establish novel theoretical results related to representation learning and to vertex nomination, and use these results to control the effects of model misspecification, noisy similarity measurement and approximation error on search accuracy. We present a state‐of‐the‐art system for query‐by‐example audio search in the context of low‐resource speech recognition, which also serves as an illustrative example and testbed for applying our theoretical results.
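
A minimal sketch of this graph-based retrieval setting (ours, for illustration only): observations become vertices of a k-nearest-neighbor graph, and a greedy walk over the graph answers a query quickly, at the cost of exactly the kind of approximation error the thesis analyzes:

    import numpy as np

    def knn_graph(X, k=5):
        """Similarity graph: each observation links to its k nearest neighbors;
        edge weights use a Gaussian kernel on Euclidean distance."""
        n = len(X)
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        sigma = np.median(D)
        W = np.zeros((n, n))
        for i in range(n):
            nbrs = np.argsort(D[i])[1:k + 1]        # skip self at position 0
            W[i, nbrs] = np.exp(-(D[i, nbrs] / sigma) ** 2)
        return np.maximum(W, W.T)                   # symmetrize

    def greedy_search(X, W, q, start=0):
        """Walk to the neighbor closest to the query until no neighbor
        improves; fast, but may stop at a local minimum (approximation error)."""
        v = start
        while True:
            nbrs = np.nonzero(W[v])[0]
            cand = min(nbrs, key=lambda u: np.linalg.norm(X[u] - q))
            if np.linalg.norm(X[cand] - q) >= np.linalg.norm(X[v] - q):
                return v
            v = cand

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 16))
    W = knn_graph(X)
    q = X[42] + 0.01 * rng.normal(size=16)          # noisy copy of vertex 42
    print("greedy search returns:", greedy_search(X, W, q))  # usually 42
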
Biography: Keith Levin is a Ph.D. candidate in Computer Science at Johns Hopkins University, where he works on graph inference, with applications to speech processing and neuroscience. Keith received B.S. degrees in Psychology and Linguistics from Northeastern University in 2011. Prior to joining Johns Hopkins University, he worked as a data analyst at BBN Technologies.

Sureerat Reaungamornrat

Defense Date: Tuesday, December 6, 2016
Title: Deformable Image Registration for Surgical Guidance using Intraoperative Cone‐Beam CT
Primary Advisor: Jeff Siewerdsen
Abstract: Accurate localization of the surgical target and adjacent normal anatomy is essential to safe and effective surgery. Preoperative computed tomography (CT) and/or magnetic resonance (MR) images offer exquisite visualization of anatomy and a valuable basis for surgical planning. Multimodality deformable image registration (DIR) can be used to bring preoperative images and planning information to a geometrically resolved anatomical context presented in intraoperative CT or cone-beam CT (CBCT). Such capability promises to improve reckoning of the surgical plan relative to the intraoperative state of the patient and thereby improve surgical precision and safety. This talk focuses on advanced DIR developed for key image guidance applications in otolaryngology and spinal neurosurgery. For transoral robotic base-of-tongue surgery, a hybrid DIR method integrating a surface-based initialization and a shape-driven Demons algorithm with multi-scale optimization was developed to resolve the large deformation associated with the operative setup, with gross deformation > 30 mm. The method yielded registration accuracy of ~1.7 mm in cadaver studies. For orthopaedic spine surgery, a multiresolution free-form DIR method was developed with constraints designed to maintain the rigidity of bones within otherwise deformable transformations of surrounding soft tissue. Validation in cadaver studies demonstrated registration accuracy of ~1.4 mm and preservation of rigid-body morphology (near-ideal values of dilatation and shear) and topology (lack of tissue folding/tearing). For spinal neurosurgery, where preoperative MR is the preferred modality for delineation of tumors, the spinal cord, and nervous and vascular systems, a multimodality DIR method was developed to realize viscoelastic diffeomorphisms between MR and intraoperative CT using a modality-independent neighborhood descriptor (MIND) and a Huber metric in a multiresolution Demons optimization. Clinical studies demonstrated sub-voxel registration accuracy (< 2 mm) and diffeomorphism of the estimated deformation (sub-voxel invertibility error = 0.001 mm and positive Jacobian determinants). These promising advances could facilitate more reliable visualization of preoperative planning data within up-to-date intraoperative CT or CBCT in support of safer, high-precision surgery.
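
For readers unfamiliar with the Demons family of algorithms referenced above, the classic Thirion update (the textbook baseline that shape-driven and diffeomorphic variants build on) is compact enough to state directly; the sketch below is illustrative, not the dissertation's implementation:

    import numpy as np

    def demons_step(fixed, moving, eps=1e-9):
        """One iteration of the classic Thirion demons force:
            u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2)
        Returns a per-pixel displacement field (uy, ux). Full algorithms
        iterate this, smooth the field, and warp `moving` each iteration."""
        gy, gx = np.gradient(fixed)
        diff = moving - fixed
        denom = gx**2 + gy**2 + diff**2 + eps
        return diff * gy / denom, diff * gx / denom

    # Toy example: a bright square shifted down by one pixel.
    fixed = np.zeros((64, 64)); fixed[20:40, 20:40] = 1.0
    moving = np.zeros((64, 64)); moving[21:41, 20:40] = 1.0
    uy, ux = demons_step(fixed, moving)
    print("max |u| (concentrated at the moving edges):", np.abs(uy).max())
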
Biography: Sureerat Reaungamornrat is a PhD candidate in Computer Science at Johns Hopkins University working under the supervision of Profs. Jeffrey H. Siewerdsen and Russell H. Taylor. Her research focuses on the development of new deformable 3D image registration methods for image-guided interventions. Her work earned the 2014 and 2016 SPIE Young Scientist awards and the 2016 Robert Wagner All-Conference Best Student Paper award. She received her Master of Science in Engineering from Johns Hopkins University for work on a novel surgical tracking configuration for mobile C-arm CBCT.

Anand Malpani

Defense Date: Monday, December 5, 2016
Title: Automated Virtual Coach for Surgical Training
Primary Advisor: Greg Hager
Abstract: Surgical educators have recommended individualized coaching for acquisition, retention and improvement of expertise in technical skills. Such one-on-one coaching is limited to institutions that can afford surgical coaches and is certainly not feasible at national and global scales. We hypothesize that automated methods that model intra-operative video, surgeon's hand and instrument motion, and sensor data can provide effective and efficient individualized coaching. With the advent of instrumented operating rooms and training laboratories, access to such large-scale intra-operative data has become feasible. Previous methods for automated skill assessment present an overall evaluation at the task/global level to the surgeons without any directed feedback and error analysis. Demonstration, if at all, is present in the form of fixed instructional videos, while deliberate practice is completely absent from automated training platforms. We believe that an effective coach should: demonstrate expert behavior (how do I do it correctly?), evaluate trainee performance (how did I do?) at task and segment level, critique errors and deficits (where and why was I wrong?), recommend deliberate practice (what do I do to improve?), and monitor skill progress (when do I become proficient?). In this thesis, we present new methods and solutions towards these coaching interventions in different training settings, viz. virtual reality simulation, bench-top simulation, and the operating room. First, we outline a summarization-based approach for surgical phase modeling using various sources of intra-operative procedural data, such as system events (sensors) and crowdsourced surgical activity context. Second, we develop a new scoring method to evaluate task segments using rankings derived from pairwise comparisons of performances obtained via crowdsourcing. Third, we implement a real-time feedback and teaching framework using virtual reality simulation to present teaching cues and deficit metrics that are targeted at critical learning elements of a task. Finally, we present an integration of the above components of task progress detection, segment-level evaluation and real-time feedback towards the first end-to-end automated virtual coach for surgical training.
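
One simple way to turn crowdsourced pairwise comparisons into segment scores, shown purely as an illustration (the dissertation's scoring method may differ), is a Borda-style win rate:

    from collections import defaultdict

    def win_rate_scores(comparisons):
        """Each comparison is (winner, loser); a segment's score is the
        fraction of its comparisons it won."""
        wins, total = defaultdict(int), defaultdict(int)
        for winner, loser in comparisons:
            wins[winner] += 1
            total[winner] += 1
            total[loser] += 1
        return {s: wins[s] / total[s] for s in total}

    # Hypothetical crowd judgments over three task segments A, B, C.
    comparisons = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
    for seg, score in sorted(win_rate_scores(comparisons).items(),
                             key=lambda kv: -kv[1]):
        print(seg, round(score, 2))   # A ranks first
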
Biography: Anand Malpani was born in Mumbai, India. He received his B.Tech. in Electrical Engineering at the Indian Institute of Technology (IIT) Bombay in 2010. He undertook a summer research project in 2009 at the Institut de Recherche en Communications et Cybernétique de Nantes under the guidance of Vincent Ricordel (Image and Video Communication research group), where he developed and compared various tracking methods for echocardiogram sequences. He joined the Ph.D. program in Computer Science at the Johns Hopkins University in 2010 and worked under the Language of Surgery project umbrella. His dissertation, under the guidance of Gregory D. Hager, focused on surgical education and simulation-based training. During this work, he developed data analytics for delivering automated surgical coaching in collaboration with multiple surgical faculty at the Johns Hopkins School of Medicine. He was awarded the Intuitive Surgical Student Fellowship in 2013. He was a Link Foundation Modeling, Training and Simulation Fellowship recipient in 2015 to advance surgical simulation-based training. He was a summer research intern in the Simulation team developing the da Vinci Skills Simulator at Intuitive Surgical Inc. (Sunnyvale, CA) in 2015.

Colin Lea

Defense Date: Monday, November 28, 2016
Title: Multi-Modal Models for Fine-grained Action Segmentation in Situated Environments
Primary Advisor: Greg Hager
Abstract: Automated methods for analyzing human activities from video or sensor data are critical for enabling new applications in human-robot interaction, surgical data modeling, video summarization, and beyond. Despite decades of research in the fields of robotics and computer vision, current approaches are inadequate for modeling complex activities outside of constrained environments or without using heavily instrumented sensor suites. In this dissertation, I address the problem of fine-grained action segmentation by developing solutions that generalize from domain-specific to general-purpose for applications in surgical workflow, surveillance, and cooking. A key technical challenge, which is central to this dissertation, is how to capture complex temporal patterns from sensor data. For a given task, users may perform the same action at different speeds or styles, and each user may carry out actions in a different order. I present a series of temporal models that address these modes of variability. First, I define the notion of a convolutional action primitive, which captures how low-level sensor signals change as a function of the action a user is performing. I generalize this idea to video with a Spatiotemporal Convolutional Neural Network, which captures relationships between objects in an image and how they change temporally. Lastly, I discuss a hierarchical variant that applies to video or sensor data, called a Temporal Convolutional Network (TCN), which models actions at multiple temporal scales. In certain domains (e.g., surgical training), TCNs can be used to successfully bridge the gap in performance between domain-specific and general-purpose solutions. A key scientific challenge concerns the evaluation of predicted action segmentations. In many applications, the definition of an action may be ill-defined, and two annotators asked when a given action starts and stops may give answers that are seconds apart. I argue that the standard action segmentation metrics are insufficient for evaluating real-world segmentation performance and propose two alternatives. Qualitatively, these metrics are better at capturing the efficacy of models in the described applications. I conclude with a case study on surgical workflow analysis, which has the potential to improve surgical education and operating room efficiency. Current work almost exclusively relies on extensive instrumentation, which is difficult and costly to acquire. I show that our spatiotemporal video models are capable of capturing important surgical attributes (e.g., organs, tools) and achieve state-of-the-art performance on two challenging datasets. The models and methodology described have demonstrably improved the ability to model complex human activities, in many cases without sophisticated instrumentation.
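
The core building block behind models like the TCN, a causal dilated temporal convolution whose receptive field grows with depth, can be sketched in a few lines (an illustration of the general technique, not the dissertation's model):

    import numpy as np

    def dilated_conv1d(x, w, dilation):
        """Causal dilated 1-D convolution: output[t] depends only on
        x[t], x[t - dilation], x[t - 2*dilation], ... (the past)."""
        T, k = len(x), len(w)
        pad = dilation * (k - 1)
        xp = np.concatenate([np.zeros(pad), x])
        return np.array([sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
                         for t in range(T)])

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)                        # a 1-D sensor signal
    h = x
    for d in [1, 2, 4, 8]:                          # dilation doubles per layer
        w = rng.normal(size=3) / 3
        h = np.maximum(dilated_conv1d(h, w, d), 0)  # ReLU
    # kernel size 3 at dilations 1,2,4,8: receptive field 1 + 2*(1+2+4+8)
    print("receptive field:", 1 + 2 * (1 + 2 + 4 + 8), "time steps")

Stacking a handful of such layers lets the model see actions at multiple temporal scales without recurrent connections.
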
Biography: Colin Lea is finishing his Ph.D. in Computer Science at Johns Hopkins University where he works on fine-grained action analysis for applications in robotics, surgery, and beyond. He received his B.S. in Mechanical Engineering at the University at Buffalo Honors College in 2011 where his work ranged from computer vision systems for autonomous ground vehicles to haptic interaction. Colin was a National Science Foundation Graduate Research Fellow from 2012 to 2015 and an Intuitive Surgical Research Fellow from 2011 to 2012. As a graduate student he led the JHU Robo Challenge, an outreach effort for local middle and high schoolers, from 2013 to 2016. Colin will join Oculus Research in Pittsburgh after completing his Ph.D.

Tuo Zhao

Defense Date: Monday, November 28, 2016
Title: Compute Faster and Learn Better: Machine Learning via Nonconvex Model-based Optimization
Primary Advisor: Raman Arora
Abstract: Nonconvex optimization naturally arises in many machine learning problems. Machine learning researchers exploit various nonconvex formulations to gain modeling flexibility, estimation robustness, adaptivity, and computational scalability. Although classical computational complexity theory has shown that solving nonconvex optimization is generally NP-hard in the worst case, practitioners have proposed numerous heuristic optimization algorithms, which achieve outstanding empirical performance in real-world applications. To bridge this gap between practice and theory, we propose a new generation of model-based optimization algorithms and theory, which incorporate statistical thinking into modern optimization. Particularly, when designing practical computational algorithms, we take the underlying statistical models into consideration. Our novel algorithms exploit hidden geometric structures behind many nonconvex optimization problems, and can obtain global optima with the desired statistical properties in polynomial time with high probability.
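
A classic example of this "hidden geometric structure" phenomenon (our illustration, not an example from the dissertation): maximizing the Rayleigh quotient over the unit sphere is nonconvex, yet projected gradient ascent reliably finds the global optimum, the top eigenvector, from a random start, because every saddle point is strict and the only maxima are the global ones:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 20)); A = A @ A.T      # random PSD matrix

    x = rng.normal(size=20); x /= np.linalg.norm(x)
    for _ in range(500):
        x = x + 0.01 * (A @ x)                      # gradient step on x'Ax
        x /= np.linalg.norm(x)                      # project back to sphere

    top = np.linalg.eigh(A)[1][:, -1]               # ground-truth eigenvector
    print("alignment with top eigenvector:", abs(x @ top))  # ~1.0
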
Biography: Tuo Zhao is a Ph.D. candidate in the Department of Computer Science at Johns Hopkins University working with Prof. Han Liu and Prof. Raman Arora. He was a visiting student in the Department of Operations Research and Financial Engineering at Princeton University from 2014 to 2016. He was a core member of the JHU team that won the INDI ADHD-200 global competition on fMRI imaging-based diagnosis classification in 2011. He received the Siebel Scholarship in 2014, a Baidu Fellowship in 2015, and the CDB Scholarship for Outstanding Graduates Abroad in 2016. He received the 2016 ASA Best Student Paper Award on Statistical Computing and the 2016 INFORMS Best Paper Award on Data Mining. He will join the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology as an assistant professor in Spring 2017.

Ali Uneri

Defense Date: Wednesday, November 9, 2016
Title: Known‐Component 3D‐2D Registration for Surgical Guidance and Quality Assurance
Primary Advisor: Jeff Siewerdsen
Abstract: Intraoperative 2D and 3D imaging using mobile C-arms, combined with advanced image registration algorithms, could overcome many of the limitations of conventional surgical navigation, streamline workflow, and enable novel applications in image-guided surgery. This talk focuses on one particular premise in my PhD dissertation: demonstrating how to extend the utility of fluoroscopic intraoperative imaging systems (conventionally limited to providing visual feedback to the surgeon) to accurately guide and assess the delivery of various surgical devices. The solution involves a 3D-2D registration algorithm that leverages prior knowledge of the patient and surgical components to obtain quantitative assessment of 3D shape and pose from a small number of 2D radiographs obtained during surgery. The presented system is evaluated in application to pedicle screw placement, where it can (1) provide guidance of a surgical device analogous to an external tracking system; and (2) provide intraoperative quality assurance of the surgical product, potentially reducing postoperative morbidity and the rate of revision surgery. Key aspects that affect the performance of the proposed system will be discussed, including optimal selection of radiographic views, minimization of radiation dose, parametric modeling of the surgical components to handle limited shape and composition information, and modeling of component deformation.
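
The essence of known-component 3D-2D registration can be shown with a toy problem (entirely illustrative: the pinhole model, pose parameterization, and marker geometry below are our simplifications, not the dissertation's algorithm). Given the known 3D geometry of a component and its observed 2D projections, the pose is recovered by minimizing reprojection error:

    import numpy as np
    from scipy.optimize import least_squares

    model = np.array([[0, 0, 0], [40, 0, 0], [0, 40, 0], [0, 0, 40.]])  # mm

    def project(points, f=1000.0):
        """Idealized pinhole projection onto the detector (focal length f)."""
        z = points[:, 2] + 500.0                    # source-to-object offset
        return f * points[:, :2] / z[:, None]

    def transform(points, pose):
        """Rigid pose: rotation about z plus 3-D translation."""
        theta, tx, ty, tz = pose
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.]])
        return points @ R.T + np.array([tx, ty, tz])

    true_pose = np.array([0.2, 5.0, -3.0, 10.0])
    observed = project(transform(model, true_pose))  # simulated radiograph

    residual = lambda p: (project(transform(model, p)) - observed).ravel()
    est = least_squares(residual, x0=np.zeros(4)).x
    print("recovered pose:", np.round(est, 3))       # ~[0.2, 5, -3, 10]

Real systems optimize a full 6-DOF pose against image similarity with digitally reconstructed radiographs rather than point landmarks, but the optimization structure is the same.
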
Biography: Ali Uneri is a Ph.D. candidate in Computer Science at Johns Hopkins University. His doctoral research was carried out at the I‐STAR Lab in Biomedical Engineering under supervision of Jeffrey H. Siewerdsen and Russell H. Taylor. His Ph.D. dissertation includes work encompassing: (1) an extensible software platform for integrating navigational tools with cone‐beam CT, including fast registration algorithms using parallel computation on general purpose GPU; (2) a 3D‐2D registration approach that leverages knowledge of interventional devices for surgical guidance and quality assurance; and (3) a hybrid 3D deformable registration approach using image intensity and feature characteristics to resolve gross deformation in cone‐beam CT guidance of thoracic surgery. Prior to joining Johns Hopkins University, he obtained an M.Sc. in Bioengineering from Imperial College London and worked at the Acrobot Company on the development of a surgical robot designed to assist hip and knee replacement procedures.

Nishikant Deshmukh

Defense Date: Friday, October 14, 2016
Title: Real‐time Elastography Systems
Primary Advisor: Russell Taylor
Abstract: Ultrasound elastography is a technique to distinguish between hard and soft tissues inside the human body. The method applies a mechanically induced palpation motion and measures the resulting tissue displacement to generate the elastography image. The displacement is small for stiff tissues, so the resulting elasticity map can reveal a cancerous tumor or tissue burned during ablation therapy. In this talk, I will discuss the acceleration techniques we used to make elastography near real-time on a graphics processing unit (GPU), allowing surgeons to use this technique for early cancer detection with a non-invasive, radiation-free, low-cost ultrasound modality. I will discuss several applications of tracked ultrasound elastography, including integration with the da Vinci surgical system. In online tracked ultrasound elastography (O-TRuE), a technique we developed, the operator's hand motion is tracked to determine the in-plane motion of the raw ultrasound images. Motivated by this research, I developed a technique that replaces external tracking with image-based tracking. In the final part, I will present a five-dimensional ultrasound system that combines conventional 3D B-mode ultrasound (also called sonography) with 3D elastography volumes, visualized over time.
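
The computational core of elastography, estimating local tissue displacement between pre- and post-compression signals, is a windowed normalized cross-correlation search; each window is independent, which is what makes GPU acceleration effective. A toy 1-D version (ours, not the dissertation's implementation):

    import numpy as np

    def ncc_displacement(pre, post, win=32, search=8):
        """For each window of the pre-compression line, find the shift
        (within +/- search samples) of the post-compression line that
        maximizes normalized cross-correlation. Stiff tissue -> small shifts."""
        disp = []
        for start in range(search, len(pre) - win - search, win):
            a = pre[start:start + win]
            a = (a - a.mean()) / (a.std() + 1e-9)
            best, best_ncc = 0, -np.inf
            for s in range(-search, search + 1):
                b = post[start + s:start + s + win]
                b = (b - b.mean()) / (b.std() + 1e-9)
                ncc = (a * b).mean()
                if ncc > best_ncc:
                    best, best_ncc = s, ncc
            disp.append(best)
        return np.array(disp)

    rng = np.random.default_rng(0)
    pre = rng.normal(size=2000)              # synthetic RF line
    post = np.roll(pre, 3)                   # uniform 3-sample displacement
    print(ncc_displacement(pre, post)[:5])   # ~[3 3 3 3 3]

Since every window's search is independent, a GPU can assign one thread (or block) per window, which is the source of the near real-time speedups discussed above.
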
Biography: Nishikant is a Ph.D. candidate in the Department of Computer Science at The Johns Hopkins University, advised by Drs. Emad Boctor, Gregory Hager, and Russell Taylor. His interest is in developing high-performance systems with applications to medical imaging and computer-integrated surgery. Earlier, he worked at the National Stock Exchange of India for three years and did his undergraduate studies in Computer Engineering at the University of Pune.

Neal H. Walfield

Defense Date: Tuesday, October 4, 2016
Title: Prediction for Context‐Aware Applications
Primary Advisor: Christian Grothoff
Abstract: Context-aware applications are programs that are able to improve their performance by adapting to the current conditions, which include the user's behavior, networking conditions, and charging opportunities. In many cases, the user's location is an excellent predictor of the context. Thus, by predicting the user's future location, we can predict the future conditions. In this talk, I will discuss the techniques that we developed to identify and predict the user's location over the next 24 hours with a minimum median accuracy of 80%. I will start by describing the user study that we conducted, and some salient conclusions from our analysis. These include our observation that cell phones sample the towers in their vicinity, which makes cell towers as-is inappropriate for use as landmarks. Motivated by this observation, I will then present two techniques for processing the cell tower traces so that landmarks more closely correspond to locations, and cell tower transitions more closely correspond to user movement. Then, I will present our prediction engine, which is based on simple sampling distributions of the form f(t, c), where t is the predicted tower, and c is a set of conditions. The conditions that we considered include the time of the day, the day of the week, the current regime, and the current tower. Our family of algorithms, called TomorrowToday, achieves 89% prediction precision across all prediction trials for predictions 30 minutes in the future. This decreases slowly for predictions further in the future, and levels off for predictions approximately 4 hours in the future, at which point we achieve 80% prediction precision across all prediction trials up to 24 hours in the future. This represents a significant improvement over NextPlace, a well-cited prediction algorithm based on non-linear time series, which achieves approximately 80% prediction precision (self-reported) for predictions 30 minutes in the future; but, unlike our predictors, which attempt all prediction trials, NextPlace attempts only 7% of the prediction trials on our data set.
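
A minimal version of a sampling-distribution predictor f(t, c) can be written directly (a simplification for illustration: the conditioning here is reduced to hour, weekday, and current tower, and the class name merely echoes the family described above):

    from collections import Counter, defaultdict

    class TowerPredictor:
        """Count how often each tower t was observed next under conditions
        c = (hour, weekday, current tower), and predict the mode."""
        def __init__(self):
            self.counts = defaultdict(Counter)

        def observe(self, hour, weekday, current_tower, next_tower):
            self.counts[(hour, weekday, current_tower)][next_tower] += 1

        def predict(self, hour, weekday, current_tower):
            c = self.counts[(hour, weekday, current_tower)]
            return c.most_common(1)[0][0] if c else None

    model = TowerPredictor()
    model.observe(9, "Mon", "T1", "T2")   # morning commute: T1 -> T2
    model.observe(9, "Mon", "T1", "T2")
    model.observe(9, "Mon", "T1", "T3")   # occasional detour
    print(model.predict(9, "Mon", "T1"))  # T2
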
Biography: Neal is a PhD student at Johns Hopkins University and is being advised by Christian Grothoff. Neal's main academic interests are in systems and security. While finishing his PhD, he worked part time on GnuPG, a widely used encryption and data authentication program.

William Gray Roncal

Defense Date: Thursday, September 22, 2016
Title: Enabling Scalable Neurocartography: Images to Graphs for Discovery
Primary Advisor: Greg Hager
Abstract: In recent years, advances in technology have enabled researchers to ask new questions predicated on the collection and analysis of big datasets that were previously too large to study. More specifically, many fundamental questions in neuroscience require studying brain tissue at a large scale to discover emergent properties of neural computation, consciousness, and etiologies of brain disorders. A major obstacle is to construct larger, more detailed maps (e.g., structural wiring diagrams) of the brain, known as connectomes. Although raw data exist, challenges remain in both algorithm development and scalable image analysis to enable access to the knowledge inside. This dissertation develops, combines and tests state-of-the-art algorithms to estimate graphs and glean other knowledge across the six orders of magnitude from millimeter-scale magnetic resonance imaging to nanometer-scale electron microscopy. This work enables scientific discovery across the community and contributes to the tools and services offered by NeuroData and the Open Connectome Project. Contributions include creating, optimizing and evaluating the first known fully-automated brain graphs in electron microscopy data and magnetic resonance imaging data; pioneering approaches to generate knowledge from X-ray tomography imaging; and identifying and solving a variety of image analysis challenges associated with building graphs suitable for discovery. These methods were applied across diverse datasets to answer questions at scales not previously explored.
Biography: William Gray Roncal is a Project Manager in the Research and Exploratory Development Department at the Johns Hopkins University Applied Physics Laboratory (APL). In 2005, Will received a Master of Electrical Engineering from the University of Southern California. He earned his Bachelor of Electrical Engineering Degree from Vanderbilt University in 2003. He is a member of the Society for Neuroscience, Eta Kappa Nu, and Tau Beta Pi. Will applies algorithms to solve big data challenges at the intersection of multiple disciplines. Although he has experience in diverse environments ranging from undersea to outer space, he currently works in connectomics, an emerging discipline within neuroscience that seeks to create a high-resolution map of the brain.

Samuel Carliles

Defense Date: Friday, September 9, 2016
Title: Tricks with Random Forest Regression
Primary Advisor: Alex Szalay
Abstract: Random Forests are a convenient option for performing non‐parametric regression. I will discuss a novel approach to error estimation using Random Forests; the relation of Random Forest regression to kernel regression, which offers a principled approach to configuration parameter selection resulting in lower regression error; and algorithmic considerations which yield asymptotically faster training than what is available in the de facto standard R implementation.
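
One common way to attach error estimates to Random Forest regression, shown purely as an illustration (the dissertation's estimator may well differ), is to use the spread of the individual trees' predictions as a per-query measure of uncertainty:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=500)

    forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    X_test = np.array([[0.0], [2.5]])
    # Each tree votes; the ensemble mean is the prediction and the
    # per-tree spread is a crude per-query error estimate.
    per_tree = np.stack([t.predict(X_test) for t in forest.estimators_])
    print("prediction:", forest.predict(X_test))
    print("per-tree std (error estimate):", per_tree.std(axis=0))
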
Biography: Samuel Carliles is a graduate student in the Department of Computer Science. He has a BS and an MS in Computer Science from Johns Hopkins, and currently works as a Data Scientist at AppNexus, Inc.

Da Zheng

Defense Date: Tuesday, July 12, 2016
Title: FlashX: Massive Data Analysis Using Fast I/O
Primary Advisor: Randal Burns
Abstract: With the explosion of data and the increasing complexity of data analysis, large-scale data analysis imposes significant challenges in systems design. While current research focuses on scaling out to large clusters, these scale-out solutions introduce a significant amount of overhead. This thesis is motivated by the advance of new I/O technologies such as flash memory. Instead of scaling out, we explore efficient system designs in a single commodity machine with non-uniform memory architecture (NUMA) and scale to large datasets by utilizing commodity solid-state drives (SSDs). This thesis explores the impact of the new I/O technologies on large-scale data analysis. Instead of implementing individual data analysis algorithms for SSDs, we develop a data analysis ecosystem called FlashX to target a large range of data analysis tasks. FlashX includes three subsystems: SAFS, FlashGraph and FlashMatrix. SAFS is a user-space filesystem optimized for a large SSD array to deliver maximal I/O throughput from SSDs. FlashGraph is a general-purpose graph analysis framework that processes graphs in a semi-external memory fashion, i.e., keeping vertex state in memory and edges on SSDs, and scales to graphs with billions of vertices by utilizing SSDs through SAFS. FlashMatrix is a matrix-oriented programming framework that supports both sparse matrices and dense matrices for general data analysis. Similar to FlashGraph, it scales matrix operations beyond memory capacity by utilizing SSDs. We demonstrate that with the current I/O technologies, FlashGraph and FlashMatrix in (semi-)external memory meet or even exceed state-of-the-art in-memory data analysis frameworks while scaling to massive datasets for a large variety of data analysis tasks.
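
The semi-external-memory split is easy to illustrate (a toy sketch of the idea only; FlashGraph itself batches and overlaps SSD reads rather than re-opening a file per vertex): per-vertex state stays in RAM while edge lists are streamed from storage:

    import os, tempfile

    edges = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}

    # Write adjacency lists to "disk": one line per vertex.
    fd, path = tempfile.mkstemp(text=True)
    with os.fdopen(fd, "w") as f:
        for v in sorted(edges):
            f.write(" ".join(map(str, edges[v])) + "\n")

    def read_neighbors(v):
        with open(path) as f:                # edge data fetched from storage
            for i, line in enumerate(f):
                if i == v:
                    return [int(x) for x in line.split()]
        return []

    level = {0: 0}                           # in-memory vertex state only
    frontier = [0]
    while frontier:
        nxt = []
        for v in frontier:
            for u in read_neighbors(v):      # edges streamed from disk
                if u not in level:
                    level[u] = level[v] + 1
                    nxt.append(u)
        frontier = nxt
    print(level)                             # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}

Since vertex state is tiny compared with edge data, this split lets a single machine traverse graphs whose edge lists far exceed RAM.
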
Biography: Da Zheng received the BS degree in computer science from Zhejiang University, China, in 2006, and the MS degree in computer science from the École Polytechnique Fédérale de Lausanne in 2009. Since 2010, he has been a PhD student in computer science at Johns Hopkins University. His research interests include high-performance computing, large-scale data analysis systems, and large-scale machine learning.

Michael Rushanan

Defense Date: Wednesday, May 11, 2016
Title: An Empirical Analysis of Security and Privacy in Health and Medical Systems
Primary Advisor: Avi Rubin
Abstract: Healthcare reform, regulation, and adoption of technology such as wearables are substantially changing both the quality of care and how we receive it. For example, health and fitness devices contain sensors that collect data, wireless interfaces to transmit data, and cloud infrastructures to aggregate, analyze, and share data. FDA-defined class III devices such as pacemakers will soon share these capabilities. While technological growth in health care is clearly beneficial, it also brings new security and privacy challenges for systems, users, and regulators. We group these concepts under health and medical systems to connect and emphasize their importance to healthcare. Challenges include how to keep user health data private, how to limit and protect access to data, and how to securely store and transmit data while maintaining interoperability with other systems. The most critical challenge unique to healthcare is how to balance security and privacy with safety and utility concerns. Specifically, a life-critical medical device must fail open (i.e., keep working) in the event of an active threat or attack. This dissertation examines some of these challenges and introduces new systems that not only improve security and privacy but also enhance workflow and usability. Usability matters in this context because a secure system that is hard to use tends to be improperly used or circumvented; we present this concern and our solution in its respective chapter. Each chapter of this dissertation presents a unique challenge, or unanswered question, and a solution based on empirical analysis. We present a survey of related work in embedded health and medical systems. The academic and regulatory communities greatly scrutinize the security and privacy of these devices because of their primary function of providing critical care. What we find is that securing embedded health and medical systems is hard, often done incorrectly, and analogous to securing non-embedded health and medical systems such as hospital servers, terminals, and BYOD devices. We perform an analysis of Apple iMessage, which implicates both BYOD in healthcare and the secure messaging protocols used by health and medical systems. We analyze direct memory access engines, special-purpose hardware that transfers data into and out of main memory, and show that we can chain together memory transfers to perform arbitrary computation. This result potentially affects all computing systems used for healthcare. We also examine HTML5 web workers, as they provide stealthy computation and covert communication. This finding is relevant to web applications such as electronic health record portals. We design and implement two novel and secure health and medical systems. One is a wearable device that addresses the problem of authenticating a user (e.g., a doctor) to a terminal in a usable way. The other is a light-weight and low-cost wireless device we call Beacon+. This device extends the design of Apple's iBeacon specification with unspoofable, temporal, and authenticated advertisements, which enables secure location-sensing applications that could improve numerous healthcare processes.
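
A hypothetical sketch of an unspoofable, temporal, authenticated advertisement in the spirit of Beacon+ (the field layout, key handling, and names here are ours, not the actual design): bind the beacon ID to a coarse time slot with an HMAC so advertisements cannot be forged, and replays outside the window are rejected:

    import hmac, hashlib, struct, time

    SECRET = b"per-beacon shared key"   # provisioned out of band

    def make_advert(beacon_id: int, now: float, window: int = 30) -> bytes:
        epoch = int(now) // window                       # coarse time slot
        payload = struct.pack(">IQ", beacon_id, epoch)
        tag = hmac.new(SECRET, payload, hashlib.sha256).digest()[:8]
        return payload + tag                             # 20 bytes total

    def verify(advert: bytes, now: float, window: int = 30) -> bool:
        payload, tag = advert[:-8], advert[-8:]
        _, epoch = struct.unpack(">IQ", payload)
        if abs(epoch - int(now) // window) > 1:          # stale => replayed
            return False
        good = hmac.new(SECRET, payload, hashlib.sha256).digest()[:8]
        return hmac.compare_digest(tag, good)

    adv = make_advert(beacon_id=42, now=time.time())
    print(verify(adv, time.time()))          # True
    print(verify(adv, time.time() + 3600))   # False: outside the time window
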
Biography: Michael Rushanan is a Ph.D. candidate in Computer Science at Johns Hopkins University. He is advised by Avi Rubin, and he is a member of the Health and Medical Security lab. His research interests include systems security, health information technology security, privacy, and applied cryptography. His hobbies include embedded system design and implementation (e.g., Arduino), mobile application development (i.e., Android), and programming.

Haluk Tokgozoglu

Defense Date: Wednesday, May 4, 2016
Title: Modeling the Representation of Medial Axis Structure in Human Ventral Pathway Cortex
Primary Advisor: Greg Hager
Abstract: Computational modeling of the human brain has long been an important goal of scientific research. The visual system is of particular interest because it is one of the primary modalities by which we understand the world. One integral aspect of vision is object representation, which plays an important role in machine perception as well. In the human brain, object recognition is part of the functionality of the ventral pathway. In this work, we have developed computational and statistical techniques to characterize object representation along this pathway. Understanding how the brain represents objects is essential to developing models of computer vision that are truer to how humans perceive the world. In the ventral pathway, the lateral occipital complex (LOC) is known to respond to images of objects. Neural recording studies in monkeys have shown that the homologue of LOC represents objects as configurations of medial axis and surface components. In this work, we designed and implemented novel experimental paradigms and developed algorithms to test whether the human LOC represents medial axis structure as in the monkey models. We developed a data-driven iterative sparse regression model guided by neuroscience principles in order to estimate the response pattern of LOC voxels. For each voxel, we modeled the response pattern as a linear combination of partial medial axis configurations that appeared as fragments across multiple stimuli. We used this model to demonstrate evidence of structural object coding in the LOC. Finally, we developed an algorithm to reconstruct images of stimuli being viewed by subjects based on their brain images. As a whole, we apply computational techniques to present the first significant evidence that the LOC carries information about the medial axis structure of objects, and further characterize its response properties.
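
The voxel model described above can be illustrated with a toy sparse regression (our simplification, not the dissertation's exact estimator): code each stimulus by the medial-axis fragments it contains and fit a voxel's response as a sparse linear combination of fragment indicators, so the nonzero weights reveal which fragments the voxel codes for:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_stimuli, n_fragments = 120, 30
    # Binary design matrix: which fragments appear in each stimulus.
    X = rng.integers(0, 2, size=(n_stimuli, n_fragments)).astype(float)

    true_w = np.zeros(n_fragments)
    true_w[[3, 7]] = [1.0, 0.6]                        # voxel codes 2 fragments
    y = X @ true_w + rng.normal(0, 0.1, n_stimuli)     # noisy voxel response

    model = Lasso(alpha=0.05).fit(X, y)
    print("recovered fragments:", np.nonzero(model.coef_ > 0.1)[0])  # expect [3 7]
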
Biography: Haluk Tokgozoglu received a Bachelor of Engineering in Computer Science and Engineering from Bilkent University in 2009, and a Master of Science in Computer Science from Johns Hopkins University in 2012. He enrolled in the Computer Science Ph.D. program at Johns Hopkins University in 2010. His research focuses on Machine Learning, Computer Vision and Visual Neuroscience.

H. Tutkun Sen

Defense Date: Wednesday, March 9, 2016
Title: Robotic System and Co-manipulation Strategy for Ultrasound Guided Radiotherapy
Primary Advisor: Peter Kazanzides
Abstract: In this thesis, we propose a cooperative robot control methodology that provides real-time ultrasound-based guidance in the direct manipulation paradigm for image-guided radiation therapy (IGRT), in which a clinician and robot share control of a 3D ultrasound (US) probe. IGRT involves two main steps: (1) planning/simulation and (2) treatment delivery. The proposed US probe co-manipulation methodology has two goals. The first goal is to provide guidance to the therapists for patient setup on the treatment delivery days based on the robot position, contact force, and reference US image recorded during simulation. The second goal is real-time target monitoring during fractionated radiotherapy of soft tissue targets, especially in the upper abdomen. We provide the guidance in the form of virtual fixtures, which are software-generated force and position signals applied to human operators that permit the operators to perform physical interactions, yet retain direct control of the task. The co-manipulation technique is used to locate soft-tissue targets with US imaging for radiotherapy, enabling therapists with minimal US experience to find a US image which has previously been identified by an expert sonographer on the planning day. Moreover, to compensate for soft tissue deformations created by the probe, we propose a novel clinical workflow where a robot holds the US probe on the patient during acquisition of the planning computed tomography (CT) image, thereby ensuring that planning is performed on the deformed tissue. Our results show that the proposed cooperative control technique with virtual fixtures and US image feedback can significantly reduce the time it takes to find the reference US images, can provide more accurate US probe placement compared to finding the images freehand, and, finally, can increase the accuracy of the patient setup, and thus, the radiation therapy.
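
The virtual-fixture idea admits a very small sketch (a generic illustration of the technique, not the dissertation's controller): a clamped spring force pulls the co-manipulated probe toward the reference pose recorded on the planning day, while the saturation keeps the clinician in direct control:

    import numpy as np

    def virtual_fixture_force(probe_pos, target_pos, k_pull=50.0, f_max=5.0):
        """Guidance force (N): spring toward the reference position, clamped
        so the operator can always override it."""
        err = target_pos - probe_pos           # position error, in meters
        force = k_pull * err                   # spring toward reference
        mag = np.linalg.norm(force)
        if mag > f_max:
            force *= f_max / mag               # saturate: guidance, not coercion
        return force

    probe = np.array([0.10, 0.02, 0.00])
    target = np.array([0.12, 0.00, 0.01])      # reference pose from planning day
    print(virtual_fixture_force(probe, target))  # gentle pull toward target
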
Biography: H. Tutkun Şen received his B.S. degree in Mechanical Engineering with a double major in Electrical and Electronics Engineering from Middle East Technical University, Turkey in 2009 and 2010, respectively. In addition, he obtained a Master of Science in Computer Science from Johns Hopkins University in 2015. He has been a Michael J. Zinner Fellow (Brown Challenge Fellow in the Whiting School of Engineering) since 2010. He has been pursuing a Ph.D. in the department of Computer Science at Johns Hopkins University, advised by Dr. Peter Kazanzides and Dr. Russ Taylor since 2009. After completion of his PhD, Tutkun will begin work as a Control Systems Engineer at Verb Surgical Inc. in Mountain View, CA, where he will be responsible for performing system analysis and designing controllers for a new medical robotic system.