Computer Science Student Defense
October 2, 2019
We present novel probabilistic models for exploring, predicting, and controlling health trajectory data. The models address two important challenges that we must face when learning from health trajectory data. Solutions to these two challenges are the unifying arc of the thesis. First, we must account for unexplained heterogeneity. In many diseases, two individuals with the same diagnosis and otherwise similar characteristics (e.g. age, sex, lifestyle, and so on) can express the disease in very different ways. Well-known diseases that exhibit unexplained heterogeneity include asthma and autoimmune diseases such as multiple sclerosis and lupus. The implication for applied machine learning is that we may not always have sufficient observed information to make accurate predictions (i.e. we are missing some important features, or inputs, to the model). One key contribution in this thesis is a framework for building random-effects models of health trajectories that help us to solve the unexplained heterogeneity problem. Second, we must consider how treatment policies affect what we learn from health trajectory data and the questions that our models can answer. We argue that many prediction problems in healthcare cannot be addressed using supervised learning algorithms, but instead need to be tackled using techniques that can make “what if?” predictions. The issue stems from a mismatch between how patients were treated in the training data and how they are treated at test-time once the model is deployed. Another key contribution of this thesis is a strategy for addressing this sensitivity to treatment policies at train-time that connects ideas from causal inference and machine learning.
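The random-effects idea can be illustrated with a minimal sketch: each patient's trajectory gets its own latent parameter, and when a patient has few observations the estimate borrows strength from the population. This is a toy Gaussian random-intercept model with invented variance values, not the framework developed in the thesis.

```python
def shrunken_intercept(patient_obs, mu_pop, tau2, sigma2):
    """Posterior mean of a patient-specific intercept b_i in the model
    y_ij = b_i + noise,  b_i ~ N(mu_pop, tau2),  noise ~ N(0, sigma2).
    With few observations, the estimate shrinks toward the population mean."""
    n = len(patient_obs)
    precision = 1.0 / tau2 + n / sigma2
    return (mu_pop / tau2 + sum(patient_obs) / sigma2) / precision

# Illustrative population with mean trajectory level 10
mu_pop, tau2, sigma2 = 10.0, 4.0, 1.0

sparse = shrunken_intercept([14.0], mu_pop, tau2, sigma2)        # one visit
dense  = shrunken_intercept([14.0] * 20, mu_pop, tau2, sigma2)   # many visits

print(round(sparse, 2), round(dense, 2))
```

With a single visit the estimate sits between the population mean and the observed value; with many visits it trusts the patient's own data, which is exactly the behavior that lets such models cope with unexplained heterogeneity.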
Speaker Biography: Peter Schulam is a PhD candidate in the Computer Science Department at Johns Hopkins University where he is working with Professor Suchi Saria. His research interests lie at the intersection of machine learning, statistical inference, and healthcare with an emphasis on developing methods to support the personalized medicine initiative. Before coming to JHU, he received his MS from Carnegie Mellon’s School of Computer Science and his BA from Princeton University. He was awarded a National Science Foundation Graduate Research Fellowship and the Dean’s Centennial Fellowship within Johns Hopkins’ Whiting School of Engineering.
Gerald M. Masson Distinguished Lecture Series
October 8, 2019
Virtual assistants, providing a voice interface to web services and IoT devices, can potentially develop into monopolistic platforms that threaten consumer privacy and open competition. This talk presents Almond as an open-source alternative.
Unlike existing commercial assistants, Almond can be programmed in natural language to perform new tasks. It lets users control who, what, when, where, and how their data are to be shared, all without disclosure to a third party.
Almond has a Write-Once-Run-Anywhere (WORA) skill platform: skills need to be written only once and can run automatically on other assistants. This helps level the playing field for new assistants.
Finally, Almond’s open-source technologies enable non-ML experts to develop natural language capabilities in their domains of interest. Through open-world collaboration, Almond can become the smartest virtual assistant.
Speaker Biography: Monica Lam has been a Professor in the Computer Science Department at Stanford University since 1988. She received a B.Sc. from the University of British Columbia in 1980 and a Ph.D. in Computer Science from Carnegie Mellon University in 1987. Monica is a Member of the National Academy of Engineering and a Fellow of the Association for Computing Machinery (ACM). She is a co-author of the popular text Compilers: Principles, Techniques, and Tools (2nd Edition), also known as the Dragon Book. She is the PI of the NSF Research Award “Autonomy and Privacy with Open Federated Virtual Assistants”. This project combines machine learning, natural language processing, programming systems, distributed systems, human-computer interaction, and blockchain technology to create an open-source assistant that promotes consumer privacy and open competition. Her Almond research project is the first virtual assistant that lets users share their digital assets easily in natural language, without disclosing any information to a third party.
Gerald M. Masson Distinguished Lecture Series
October 10, 2019
Our ability to collect, manipulate, analyze, and act on vast amounts of data is having a profound impact on all aspects of society. Much of this data is heterogeneous in nature and interlinked in a myriad of complex ways. From information integration to scientific discovery to computational social science, we need machine learning methods that are able to exploit both the inherent uncertainty and the innate structure in a domain. Statistical relational learning (SRL) is a subfield that builds on principles from probability theory and statistics to address uncertainty while incorporating tools from knowledge representation and logic to represent structure. In this talk, I will give a brief introduction to SRL, present templates for common structured prediction problems, and describe modeling approaches that mix logic, probabilistic inference and latent variables. I’ll overview our recent work on probabilistic soft logic (PSL), an SRL framework for large-scale collective, probabilistic reasoning in relational domains. I’ll close by highlighting emerging opportunities (and challenges!) in realizing the effectiveness of data and structure for knowledge discovery.
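For readers new to PSL, its hinge-loss semantics can be shown in a few lines: predicates take soft truth values in [0, 1], the logical operators use the Łukasiewicz relaxations, and each ground rule contributes a hinge penalty proportional to its violation. The voting rule below is a standard textbook-style example, not one taken from the talk.

```python
# Łukasiewicz relaxations of the logical connectives, as used by
# the hinge-loss Markov random fields underlying PSL:
def l_and(a, b): return max(0.0, a + b - 1.0)
def l_or(a, b):  return min(1.0, a + b)
def l_not(a):    return 1.0 - a

def distance_to_satisfaction(body, head):
    """A ground rule body -> head is satisfied when head >= body;
    the hinge penalty grows linearly with the amount of violation."""
    return max(0.0, body - head)

# Example rule: Friend(A,B) & VotesFor(A,P) -> VotesFor(B,P)
friend, votes_a, votes_b = 0.9, 0.8, 0.4
body = l_and(friend, votes_a)                        # soft truth of the body
penalty = distance_to_satisfaction(body, votes_b)    # hinge loss of this rule
print(round(body, 2), round(penalty, 2))
```

Inference in PSL then minimizes the weighted sum of such penalties over all ground rules, which is a convex problem, and is what makes large-scale collective reasoning tractable.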
Speaker Biography: Lise Getoor is a professor in the Computer Science Department at the University of California, Santa Cruz and director of the Data, Discovery and Decisions Research Center at UC Santa Cruz. Her research areas include machine learning, data integration and reasoning under uncertainty, with an emphasis on graph and network data. She has over 250 publications and extensive experience with machine learning and probabilistic modeling methods for graph and network data. She is a Fellow of the Association for the Advancement of Artificial Intelligence, an elected board member of the International Machine Learning Society, serves on the board of the Computing Research Association (CRA), and was co-chair for ICML 2011. She is a recipient of an NSF Career Award and thirteen best paper and best student paper awards. She received her PhD from Stanford University in 2001, her MS from UC Berkeley, and her BS from UC Santa Barbara, and was a professor in the Computer Science Department at the University of Maryland, College Park from 2001-2013.
Computer Science Student Defense
October 16, 2019
This talk focuses on unsupervised dependency parsing—parsing sentences of a language into dependency trees without annotated training data for that language. Unlike most prior work, which uses unsupervised learning to estimate the parsing parameters, we estimate the parameters by supervised training on synthetic languages. Our parsing framework has three major components: synthetic language generation gives a rich set of training languages by mix-and-match over the real languages; surface-form feature extraction maps an unparsed corpus of a language into a fixed-length vector as the syntactic signature of that language; and, finally, language-agnostic parsing incorporates the syntactic signature during parsing so that the decision on each word token depends on the general syntax of the target language.
The fundamental question we are trying to answer is whether useful information about the syntax of a language can be inferred from its surface-form evidence (an unparsed corpus). This is the same question implicitly asked by previous papers on unsupervised parsing, which assume only that an unparsed corpus is available for the target language. We show that, indeed, useful features of the target language can be extracted automatically from an unparsed corpus, which consists only of gold part-of-speech (POS) sequences. Providing these features to our neural parser enables it to parse sequences like those in the corpus. Strikingly, our system has no supervision in the target language. Rather, it is a multilingual system that is trained end-to-end on a variety of other languages, so it learns a feature extractor that works well. We show experimentally across multiple languages: (1) Features computed from the unparsed corpus improve parsing accuracy. (2) Including thousands of synthetic languages in the training yields further improvement. (3) Despite being computed from unparsed corpora, our learned task-specific features beat previous works’ interpretable typological features that require parsed corpora or expert categorization of the language.
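As a toy version of the surface-form feature extraction step, one can map a corpus of POS sequences to a fixed-length vector of tag-bigram frequencies. The real system learns its extractor end-to-end, so the hand-built statistics below are only illustrative; the function name and tag set are invented for this sketch.

```python
from collections import Counter

def syntactic_signature(corpus, tags=("NOUN", "VERB", "ADP", "DET")):
    """Map an unparsed corpus of POS sequences to a fixed-length vector:
    the relative frequency of each ordered tag bigram. Directional
    statistics like these hint at, e.g., whether a language is
    prepositional or postpositional, without any parse trees."""
    counts = Counter()
    total = 0
    for sent in corpus:
        for a, b in zip(sent, sent[1:]):
            if a in tags and b in tags:
                counts[(a, b)] += 1
                total += 1
    return [counts[(a, b)] / max(total, 1) for a in tags for b in tags]

corpus = [["DET", "NOUN", "VERB", "ADP", "DET", "NOUN"],
          ["DET", "NOUN", "VERB", "DET", "NOUN"]]
vec = syntactic_signature(corpus)
print(len(vec))  # fixed length regardless of corpus size
```

A language-agnostic parser can then condition on such a vector, so a single set of parameters can adapt its decisions to the general syntax of whichever target language the corpus came from.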
Speaker Biography: Dingquan Wang is a Ph.D. student working with Jason Eisner since 2014. His research interest is natural language processing (NLP) for low-resource languages. He received an M.S. in Computer Science from Columbia University, advised by Michael Collins and Rebecca Passonneau, and a B.Eng. from the ACM Honored Class in Computer Science at Shanghai Jiao Tong University.
Association for Computing Machinery Lecture Series in Memory of Nathan Krasnopoler
October 22, 2019
An exploration of the research on “Grit”, interleaved with the story of writing Practical Object-Oriented Design in Ruby, and the tale of a horrendous bike ride. This talk will convince you that you can accomplish anything.
Speaker Biography: Sandi Metz, author of Practical Object-Oriented Design in Ruby and 99 Bottles of OOP, believes in simple code and straightforward explanations. She prefers working software, practical solutions and lengthy bicycle trips (not necessarily in that order) and writes, consults, and teaches about object-oriented design.
October 31, 2019
The use of robotic surgical systems – in conjunction with image guidance – is moving surgery toward less and less invasive procedures. As the field of robotic surgery evolves, the integration of pre- and intra-operative medical imaging data becomes essential. This advanced visualization is necessary to provide a complete picture that includes anatomical, physiological, and functional information on the structures located in the field of intervention. Artificial intelligence (AI) and big data can facilitate the management and presentation of this crucial information by highlighting blood vessels or tumor margins that may be difficult to discern with the naked eye or on a screen. AI in diagnostic medical imaging is already the precursor to the application of this technology in healthcare, with significant advances that take advantage of deep learning technologies. Image-guided surgery (IGS) – sometimes described as a global positioning system for interventional radiology – is gaining importance because it allows operators to perform minimally invasive procedures deep within solid organs where previously large resections had to be performed. For pipe-shaped structures such as the digestive tract or bronchial tubes, robotic endoscopy will be needed to overcome the limits of standard therapeutic procedures by offering operating capacities close to those of rigid endoscopy. The need for pre- and intra-operative imaging in endoscopic intervention is increasing, to allow for more guidance, particularly for interventions deep within the body.
Speaker Biography: I am a radiologist and professor of medicine in both France and Canada. Much of my research focuses on abdominal imaging, both diagnostic and interventional, with a special interest in cancer therapy. When I arrived at McGill in 2013, as Chair and Director of the Imaging Department, I created, in collaboration with computer science teams at McGill (Centre for Intelligent Machines), a research laboratory focused on artificial intelligence. My research activities are at the interface of oncology, medical imaging, and computer vision, with the objective of developing new imaging-based methods of tumor quantification, in order to select patients who are likely to respond to a specific treatment and to evaluate their response very early. I was recently recruited – through an international competition – by the University and the IHU of Strasbourg to take over the position of CEO of the IHU.
November 5, 2019
While deep neural networks (DNNs) have achieved remarkable success in computer vision and natural language processing, they are complex, heavily-engineered, often ad hoc systems, and progress toward understanding why they work (arguably a prerequisite for using them in consumer-sensitive and scientific applications) has been much more modest. To understand why deep learning works, Random Matrix Theory (RMT) has been applied to analyze the weight matrices of DNNs, including both production-quality pre-trained models and smaller models trained from scratch. Empirical and theoretical results clearly indicate that the DNN training process itself implicitly implements a form of self-regularization, implicitly sculpting a more regularized energy or penalty landscape. Building on results in RMT, most notably its extension to Universality classes of Heavy-Tailed matrices, and applying them to these empirical results, we develop a phenomenological theory to identify 5+1 Phases of Training, corresponding to increasing amounts of implicit self-regularization. For smaller and/or older DNNs, this implicit self-regularization is like traditional Tikhonov regularization, in that there appears to be a “size scale” separating signal from noise. For state-of-the-art DNNs, however, we identify a novel form of heavy-tailed self-regularization, similar to the self-organization seen in the statistical physics of disordered but strongly-correlated systems. We will describe validating predictions of this theory; how this can explain the so-called generalization gap; and how one can use it to develop novel metrics that predict trends in generalization accuracies for pre-trained production-scale DNNs. Coupled with work on energy landscape theory and heavy-tailed spin glasses, it also provides an explanation of why deep learning works.
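The heavy-tailed analysis described here rests on estimating tail exponents of weight-matrix eigenvalue spectra. A standard tool for that job is the Hill estimator, sketched below on synthetic Pareto data standing in for an empirical spectral density; this is purely illustrative and not the speaker's actual analysis pipeline.

```python
import math
import random

def hill_estimator(samples, k):
    """Estimate the tail exponent alpha of a heavy-tailed sample from
    its k largest order statistics (the classic Hill estimator)."""
    xs = sorted(samples, reverse=True)
    threshold = xs[k]
    return k / sum(math.log(xs[i] / threshold) for i in range(k))

random.seed(0)
alpha_true = 2.0
# Pareto(alpha) samples via inverse-CDF sampling; these stand in for the
# eigenvalue tail of a trained weight matrix in the heavy-tailed phase.
samples = [random.random() ** (-1.0 / alpha_true) for _ in range(5000)]
alpha_hat = hill_estimator(samples, k=500)
print(round(alpha_hat, 2))
```

In the phenomenology the talk describes, smaller fitted exponents correspond to more strongly correlated, more implicitly self-regularized weight matrices, which is what makes such tail metrics candidates for predicting generalization trends.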
Speaker Biography: Michael W. Mahoney is at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI). He works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning, including randomized matrix algorithms and randomized numerical linear algebra, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, computational methods for neural network analysis, and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the mathematics department, at Yahoo Research, and at Stanford University in the mathematics department. Among other things, he is on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI), he was on the National Research Council’s Committee on the Analysis of Massive Data, he co-organized the Simons Institute’s fall 2013 and 2018 programs on the foundations of data science, he ran the Park City Mathematics Institute’s 2016 PCMI Summer Session on The Mathematics of Data, and he runs the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets. He is currently the Director of the NSF/TRIPODS-funded FODA (Foundations of Data Analysis) Institute at UC Berkeley.
Gerald M. Masson Distinguished Lecture Series
November 7, 2019
A recent New York Times article boldly stated that the Golden Age of Design is upon us. Our society is certainly in the midst of a great shift in how we view the world. In the past century, we have moved from the Age of Craft to the Industrial Age; we are currently on the cusp of the Age of Information. Over the past several decades, innovations including the personal computer, the internet, smart phones, cloud computing, wearable computers, and 3D and CNC printing have helped to radically change our conception of what we design. Today, designers no longer create products; they instead create platforms for open innovation.
This talk will reflect on my walk through design’s many eras and shifts, in order to understand this movement from designing products to designing platforms. The eras of user-centered design, experience design, service design, and systems design will be explored to better understand this migration. An alternative framing, product-service ecologies, will be introduced to stress a systemic and ecological view as an approach to designing the products, services, environments, and platforms of today. A systemic view ensures that the designer can identify a need and understand the implications of designing something to impact the ecology in a positive way. A systemic view helps move the designer from problem solving to problem seeking, from modeling to understanding relationships, and from prototyping to perturbing the system to understand outcomes. It also ensures that designers are creating pragmatic and purposeful systems that will improve the state of today’s world.
Speaker Biography: Jodi Forlizzi is the Geschke Director and a Professor of Human-Computer Interaction in the School of Computer Science at Carnegie Mellon University. She is responsible for establishing design research as a legitimate form of research in HCI that is different from, but equally important as, scientific and human science research. For the past 20 years, Jodi has advocated for design research in all forms, mentoring peers, colleagues, and students in its structure and execution, and today it is an important part of the CHI community.
Jodi’s current research interests include designing educational games that are engaging and effective, designing robots, AVs, and other technology services that use AI and ML to adapt to people’s needs, and designing for healthcare. Jodi is a member of the ACM CHI Academy and has been honored by the Walter Reed Army Medical Center for excellence in HRI design research. Jodi has consulted with Disney and General Motors to create innovative product-service systems.
November 12, 2019
If you’re a layperson who gets your news from public relations firms at major industry research centers, you may think that machine translation is solved, having reached “human parity” sometime in the past few years. But the reality is quite different. While translation accuracy is indistinguishable from that of humans by some definitions in certain narrow settings, claims of human parity rest on an impoverished definition of human capability. This talk will explore three lines of work whose collective goal is to provide neural machine translation systems with a few abilities that come quite naturally to us but are less natural in the modern translation paradigm, namely: translating under supplied constraints, producing diverse translation candidates, and evaluating output more robustly.
Speaker Biography: Matt Post is a research scientist at the Human Language Technology Center of Excellence at JHU, with appointments in the Department of Computer Science and the Center for Language and Speech Processing. He spends most of his time doing machine translation, but he has also worked on text classification, grammatical error correction, and human evaluation, and is interested in most topics in natural language processing. He is the Director of the ACL Anthology, and for many years has helped to organize the annual Conference on Machine Translation (WMT). He spent the 2017–2018 academic year working with Amazon Research in Berlin.
November 14, 2019
In this talk, I will discuss the theory of semiparametrics that I use to estimate causal effects at root-n rates. Estimators of these effects depend on estimators of nuisance parameters that can be estimated at rates slower than root-n; I provide sufficient conditions for these rates. I will seek advice on the machine learning estimation techniques that satisfy these conditions. I will illustrate the theory in the context of estimating the causal contrast of two competing treatments based on data from a comprehensive cohort study in which clinically eligible individuals are first asked to enroll in a randomized trial and, if they refuse, are then asked to enroll in a parallel observational study in which they can choose treatment according to their own preference.
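A concrete example of the kind of estimator this theory covers is the augmented inverse-probability-weighted (AIPW, "doubly robust") estimator, which attains root-n rates for a treatment contrast even when the nuisance models converge more slowly. The sketch below, with a known propensity score and synthetic data, is illustrative rather than the speaker's exact construction.

```python
import random

random.seed(1)
n = 20000
true_effect = 2.0

# Synthetic data: covariate x, randomized treatment t, outcome y
data = []
for _ in range(n):
    x = random.gauss(0, 1)
    p = 0.5  # propensity score P(T=1 | x); constant here for simplicity
    t = 1 if random.random() < p else 0
    y = true_effect * t + x + random.gauss(0, 1)
    data.append((x, t, y, p))

def aipw(data, mu1, mu0):
    """Augmented IPW estimate of E[Y(1) - Y(0)]. It stays consistent
    if either the outcome models (mu1, mu0) or the propensities are
    correctly specified (the 'doubly robust' property)."""
    total = 0.0
    for x, t, y, p in data:
        m1, m0 = mu1(x), mu0(x)
        total += (m1 + t * (y - m1) / p) - (m0 + (1 - t) * (y - m0) / (1 - p))
    return total / len(data)

# Outcome regressions (here the true ones, to keep the sketch short;
# in practice these would be fitted, possibly by ML methods)
est = aipw(data, mu1=lambda x: true_effect + x, mu0=lambda x: x)
print(round(est, 2))
```

The talk's question about which machine learning techniques can play the role of `mu1`, `mu0`, and the propensity model is exactly about when plugging slower-rate estimates into this kind of formula still yields root-n inference for the contrast.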
Speaker Biography: Daniel Scharfstein is Professor of Biostatistics at the Johns Hopkins Bloomberg School of Public Health. He joined the faculty at Johns Hopkins in 1997, after doctoral and post-doctoral training in Biostatistics at the Harvard School of Public Health. He is a Fellow of the American Statistical Association. He received the 1999 Snedecor Award for best paper in Biometry and the 2010 Distinguished Alumni Award from the Harvard Department of Biostatistics. His research is focused on how to draw inference about treatment effects in the presence of selection bias.
Computer Science Student Defense
November 20, 2019
This thesis studies the problem of designing reliable control laws of robotic systems operating in uncertain environments. We tackle this issue by using stochastic optimization to iteratively refine the parameters of a control law from a fixed policy class, otherwise known as policy search. We introduce several new approaches to stochastic policy optimization based on probably approximately correct (PAC) bounds on the expected performance of control policies. These algorithms, referred to as PAC Robust Policy Search (PROPS), directly minimize an upper confidence bound on the expected cost of trajectories instead of employing a standard approach based on the expected cost itself. We compare the performance of PROPS to that of existing policy search algorithms on a set of challenging robot control scenarios in simulation: a car with side slip and a quadrotor navigating through obstacle-ridden environments. We show that the optimized bound accurately predicts future performance and results in improved robustness measured by lower average cost and lower probability of collision.
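The bound-minimization idea can be sketched in a few lines: instead of picking the policy parameter with the lowest average sampled cost, a PROPS-style search picks the one with the lowest upper confidence bound, trading a little expected cost for robustness. The toy cost function and the simple mean-plus-half-width bound below are stand-ins for the actual PAC bound derived in the thesis.

```python
import math
import random
import statistics

def rollout_costs(theta, n=200, rng=None):
    """Toy stochastic rollout cost: policies near theta = 1 are best on
    average, but larger theta also increases cost variance (risk)."""
    rng = rng or random.Random(0)
    return [(theta - 1.0) ** 2 + abs(theta) * rng.gauss(0, 1) for _ in range(n)]

def upper_bound(costs, delta=0.05):
    """A simple confidence-bound surrogate: sample mean plus a
    variance-dependent half-width (a stand-in for the PROPS bound)."""
    n = len(costs)
    half = statistics.stdev(costs) * math.sqrt(2 * math.log(1 / delta) / n)
    return statistics.fmean(costs) + half

thetas = [i / 10 for i in range(0, 21)]
best_mean = min(thetas,
                key=lambda t: statistics.fmean(rollout_costs(t, rng=random.Random(42))))
best_bound = min(thetas,
                 key=lambda t: upper_bound(rollout_costs(t, rng=random.Random(42))))
print(best_mean, best_bound)
```

Because the bound penalizes sampling variability, the bound-minimizing parameter is pulled toward lower-variance policies, which mirrors the thesis's finding that optimizing the bound yields lower average cost and fewer collisions at deployment.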
Next, we develop a technique for using robot motion trajectories to create a high quality stochastic dynamics model that is then leveraged in simulation to train control policies with associated performance guarantees. We demonstrate the idea by collecting dynamics data from a 1/5 scale agile ground vehicle, fitting a stochastic dynamics model, and training a policy in simulation to drive around an oval track at up to 6.5 m/s while avoiding obstacles. We show that the control policy can be transferred back to the real vehicle with little loss in predicted performance. Furthermore, we show empirically that simulation-derived performance guarantees transfer to the actual vehicle when executing a policy optimized using a deep stochastic dynamics model fit to vehicle data.
Finally, we develop an actor-critic variation of the PROPS algorithm which allows the use of both episode-based and step-based evaluation and sampling strategies. This variation of PROPS is more data efficient and is expected to compute higher quality policies faster. We empirically evaluate the algorithm in simulation on a challenging robot navigation task using a high-fidelity deep stochastic model of an agile ground vehicle and on a benchmark set of continuous control tasks. We compare its performance to the original trajectory-based PROPS.
Speaker Biography: Matt Sheckells received his B.S. in Computer Science and Physics from the Johns Hopkins University in 2014. He stayed at Johns Hopkins to complete his Ph.D., receiving the Computer Science Department Dean’s Fellowship and the WSE-APL Fellowship. His research in the Autonomous Systems, Control, and Optimization Lab focused on planning and controls for robotic systems, including flying vehicles and high-speed, off-road vehicles. During his Ph.D., Matt worked as a teaching assistant and lectured for the Applied Optimal Control and Non-linear Control and Planning in Robotics courses. As part of JHU’s Team CoSTAR, Matt won the 2016 KUKA Innovation Award. He also spent the summer of 2016 working as a Software Intern at Zoox, Inc.
Computer Science Student Defense
November 21, 2019
We introduce a modular ultrasound paradigm, focusing on its use for ultrasound thermometry. Thermotherapy is a medical procedure by which thermal energy is delivered to a target to induce desired clinical outcomes. It has been applied in various medical treatments such as ablation and hyperthermia therapy. To achieve precise and reproducible results in thermotherapy, temperature monitoring during the procedure is particularly important. To provide temperature information for thermotherapy, we develop an ultrasound thermal monitoring method using a speed-of-sound tomographic approach coupled with a biophysical heat diffusion model. The system requires only simple hardware additions, such as external ultrasound sensors, to implement temperature monitoring at a reduced cost. Deep learning approaches are also used to obtain thermal images by training temperature distribution models on corresponding ultrasound data changes. Temperature images are achieved by leveraging only a few ultrasound elements during thermotherapy. The modular ultrasound concept can also be extended to single-element ultrasound imaging and surgical tool tracking.
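The speed-of-sound approach works because sound speed in soft tissue varies nearly linearly with temperature near body temperature (roughly 1540 m/s at 37 °C, with a slope on the order of 1–2 m/s per °C in non-fatty tissue). A time-of-flight measurement along a known path can then be inverted for the mean temperature; the coefficients below are illustrative defaults, not calibrated values from this work.

```python
def temperature_from_tof(tof_s, path_m, c0=1540.0, t0=37.0, dcdt=1.3):
    """Invert a time-of-flight measurement for mean tissue temperature,
    assuming the linear model c(T) = c0 + dcdt * (T - t0).
    Coefficients are illustrative; real tissue requires calibration."""
    c = path_m / tof_s                 # measured average speed of sound
    return t0 + (c - c0) / dcdt

# 5 cm acoustic path; heating raises the speed of sound,
# which shortens the measured flight time
baseline = temperature_from_tof(0.05 / 1540.0, 0.05)
heated   = temperature_from_tof(0.05 / 1553.0, 0.05)
print(round(baseline, 1), round(heated, 1))
```

A tomographic system combines many such path measurements from external sensors to reconstruct a spatial temperature map, which is why only modest hardware additions are needed.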
Speaker Biography: Younsu Kim joined the MUSIIC (Medical UltraSound Imaging and Intervention Collaboration) research laboratory at Johns Hopkins University in 2014. Prior to joining the Ph.D. program, he worked as a research engineer in the advanced technology department at LG Electronics Inc., Korea. He received his B.E. degree in micro-electrical engineering from Tsinghua University, Beijing, China, and earned a master’s degree in electrical and computer engineering from Johns Hopkins University, Baltimore, US. His research interests include ultrasound thermal monitoring and interventional ultrasound-guided technologies.
Computer Science Student Defense
December 6, 2019
With the recent widespread availability of electronic health record data, there are new opportunities to apply data-driven methods to clinical problems. This has led to increasing numbers of publications proposing and validating machine learning (ML) methods for clinical applications like risk prediction and treatment recommendations. However, despite these methods often achieving higher accuracy than traditional rule-based risk scores, few have been deployed and integrated into clinical practice. Moreover, those that have been deployed are often perceived as nuisances or as adding little clinical value.
This dissertation demonstrates an approach to translating an ML model into a comprehensive clinical support system, taking sepsis, a dysregulated host response to infection that has severe mortality and morbidity, as an example condition. We take an integrated approach that incorporates technical, clinical, and human factors perspectives. First, we developed a model to predict sepsis from retrospective data and improve the quality of predictions by accounting for the presence of confounding comorbidities during model training. Second, we designed and deployed a live sepsis alert in a hospital setting and iteratively identified key design elements to provide clinicians with relevant alerts that fit with the existing clinical workflow. Finally, a human factors approach was used to understand how clinicians incorporate insights from an ML-based system into their clinical practice and what aspects of the system facilitate or hinder building trust in system predictions. Overall, our findings emphasize that model performance is not enough to achieve clinical success and we propose several strategies for designing systems that address the unique challenges of deploying ML systems in a clinical setting.
Speaker Biography: Katie Henry is a PhD candidate in the Department of Computer Science at Johns Hopkins University, where she works on problems at the intersection of machine learning and medicine. Prior to joining JHU, she received her BS/BA in computer science and linguistics from the University of Chicago in 2013. She was awarded a National Science Foundation Graduate Research Fellowship.
Computer Science Student Defense
December 19, 2019
Robot-assisted surgery has enabled scalable, transparent capture of high-quality data during operation, and this has in turn led to many new research opportunities. Among these opportunities are those that aim to improve the objectivity and efficiency of surgical training, which include making performance assessment and feedback more objective and consistent; providing more specific or localized assessment and feedback; delegating this responsibility to machines, which have the potential to provide feedback in any desired abundance; and having machines go even further, for example by optimizing practice routines, in the form of a virtual coach. In this thesis, we focus on a foundation that serves all of these objectives: automated surgical activity recognition, or in other words the ability to automatically determine what activities a surgeon is performing and when those activities are taking place.
First, we introduce the use of recurrent neural networks (RNNs) for localizing and classifying surgical activities from motion data. Here, we show for the first time that this task is possible at the level of maneuvers, which unlike the activities considered in prior work are already a part of surgical training curricula. Second, we investigate unsupervised learning using surgical motion data: we show that predicting future motion from past motion with RNNs, using motion data alone, leads to meaningful and useful representations of surgical motion. This approach leads to the discovery of surgical activities from unannotated data, and to state-of-the-art performance for querying a database of surgical activity using motion-based queries. Finally, we depart from a common yet limiting assumption in nearly all prior work on surgical activity recognition: that annotated training data, which is difficult and expensive to acquire, is available in abundance. We demonstrate for the first time that both gesture recognition and maneuver recognition are feasible even when very few annotated sequences are available; and that future-prediction based representation learning, prior to the recognition phase, yields significant performance improvements when annotated data is scarce.
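The claim that predicting future motion from past motion yields useful representations can be illustrated far more simply than with an RNN: fit a linear autoregressive predictor, and its coefficients form a tiny learned "representation" that distinguishes motions, with no annotations involved. This sketch is an analogy, not the thesis's model; `fit_ar2` and the sinusoidal "motions" are invented for illustration.

```python
import math

def fit_ar2(x):
    """Least-squares fit of x[t] = a*x[t-1] + b*x[t-2]: a minimal
    'predict the future from the past' model whose coefficients act
    as an unsupervised representation of the motion."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(2, len(x)):
        x1, x2 = x[t - 1], x[t - 2]
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        r1 += x[t] * x1; r2 += x[t] * x2
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    return a, b

# Two 'motions' with different frequencies get different representations
w = 0.3
slow = [math.sin(w * t) for t in range(200)]
fast = [math.sin(2 * w * t) for t in range(200)]
a_slow, _ = fit_ar2(slow)
a_fast, _ = fit_ar2(fast)
print(round(a_slow, 3), round(a_fast, 3))  # ≈ 2*cos(0.3) and 2*cos(0.6)
```

Just as these coefficients separate slow from fast oscillations without labels, the thesis's future-prediction RNNs produce representations that separate surgical activities from unannotated motion data, enabling discovery, retrieval, and annotation-efficient recognition.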
Speaker Biography: Robert DiPietro is a PhD student at Johns Hopkins University in the Computer Science Department, where he is advised by Gregory D. Hager. His current research focuses on unsupervised representation learning and data-efficient segmentation and classification for time-series data, primarily within the domain of robot-assisted surgery. Before joining Hopkins, Robert obtained his BS in applied physics and his MS in electrical engineering at Northeastern University, and worked for 3 years as an associate research staff member at MIT Lincoln Laboratory.