Spring 2020

Video Recording >>

January 21, 2020

Medical imaging modalities combined with powerful image processing algorithms are emerging as an essential component of clinical routine to enable effective triage or guide minimally invasive treatment. Recent advances in computer vision, including leaps in machine learning systems and augmented reality technology, fuel cutting-edge research on contextual and task-aware computer assistance solutions that cater to physicians’ needs and enable improved clinical decision making. In this talk, I will highlight some of our recent work on human-centric end-to-end systems that assist clinicians in improving on-task performance, with examples ranging from task-aware image acquisition to dynamic augmented reality environments.

Speaker Biography: Mathias Unberath is an Assistant Research Professor in the Department of Computer Science at Johns Hopkins University with affiliations to the Laboratory for Computational Sensing and Robotics and the Malone Center for Engineering in Healthcare. He created and currently leads the ARCADE research group on Advanced Robotics and Computationally AugmenteD Environments, which focuses on computer vision, machine learning, and augmented reality and their application to medical imaging, surgical robotics, and clinician-centric assistance systems. Previously, Mathias was a postdoctoral fellow in the Laboratory for Computational Sensing and Robotics at Hopkins and completed his PhD in Computer Science at the Friedrich-Alexander-Universität Erlangen-Nürnberg, from which he graduated summa cum laude in 2017. While completing a Bachelor’s in Physics and a Master’s in Optical Technologies at FAU Erlangen, Mathias studied at the University of Eastern Finland as an ERASMUS scholar in 2011 and joined Stanford University as a DAAD fellow throughout 2014.

Video Recording >>

February 4, 2020

Personal technologies for everyday health management have the potential to transform healthcare by empowering individuals to engage in their own care, scaffolding access to critical information, and supporting patient-centered decision-making. Currently, many personal health tools focus only on a single task or isolated event. However, chronic illnesses are characterized by information needs and challenges that shift over time; thus, these illnesses are better understood as a dynamic trajectory than as a series of singular events.

In this talk, I discuss my work designing and implementing novel computing systems that: 1) support chronic illness trajectories and 2) reduce patients’ barriers to accessing information necessary for effective personal health management. I create technologies that have the flexibility and robustness to conform to individuals’ evolving health situations. By connecting individuals with personalized and actionable feedback, my approach can lead to long-term engagement with health tools. This is evidenced by participants’ motivations for using these systems as well as longitudinal usage patterns. Using results from longitudinal field deployments, I demonstrate the ability of personalized and adaptive health tools to facilitate patients’ proactive health management and engagement in their care. I also discuss opportunities for future work: looking at personalization as a strategy for addressing health disparities, designing for illness trajectories in which uncertainty is paramount, and integrating machine learning models into clinical workflows.

Speaker Biography: Dr. Maia Jacobs is a postdoctoral fellow at Harvard University’s Center for Research on Computation and Society. Jacobs’ research focuses on the development and assessment of novel approaches for health information tools to support chronic disease management. She completed her PhD in Human Centered Computing at Georgia Institute of Technology with the thesis, “Personalized Mobile Tools to Support the Cancer Trajectory”.

Jacobs’ research was recognized in the 2016 report to the President of the United States from the President’s Cancer Panel, which focuses on improving cancer-related outcomes. Her research has been funded by the National Science Foundation, the National Cancer Institute, and the Harvard Data Science Institute. Jacobs was awarded the iSchools Doctoral Dissertation Award and the Georgia Institute of Technology College of Computing Dissertation Award. Jacobs was also recognized as a Foley Scholar, the highest award given by the GVU Center to PhD candidates at Georgia Tech. Prior to joining Georgia Tech, Maia received a B.S. degree in Industrial and Systems Engineering from the University of Wisconsin-Madison and worked as a User Experience Specialist for Accenture Consulting.

Video Recording >>

February 11, 2020

Some claim AI is the “new electricity” due to its growing significance and ubiquity. My research investigates this vision from an HCI perspective: How can we situate this remarkable technology in ways people perceive as valuable and natural? How could we form a symbiotic relationship between AI systems and their users, to do things neither can do on their own? In this talk, I will discuss a number of research projects that systematically investigate these questions. Projects include the designs of clinical decision-support systems that can effectively collaborate with doctors in making life-and-death decisions and an investigation of how Natural Language Generation systems might seamlessly serve authors’ communicative intent. Each project engages stakeholders in their real-world contexts and addresses a critical challenge in transitioning AI from the research lab to the real world. Based upon this body of work and my studies of industry practice, I propose a framework laying out the problem space of human-AI interaction design. I discuss our early work and the strategic potential in supporting effective collaboration between HCI and AI expertise.

Speaker Biography: Qian Yang is a Human-Computer Interaction (HCI) researcher and a Ph.D. candidate at the School of Computer Science at Carnegie Mellon University. Her research draws together theories and methods from design, the social sciences, and machine learning to advance human-AI interaction. She is best known for designing decision support systems that effectively aided physicians in making critical clinical decisions.

Her work has been supported by the National Institutes of Health, the National Science Foundation, and the Department of Health and Human Services. During her Ph.D., she published fifteen peer-reviewed publications on the topic of human-AI interaction at premier HCI research venues. Four of these won paper awards. She is the recipient of a Digital Health fellowship from the Center for Machine Learning and Health, a Microsoft Research Dissertation Grant, and an Innovation by Design Award from Fast Company. Her work has been featured in global media outlets. This spring she will be speaking at SXSW on how to innovate AI products and services.

Video Recording >>

February 13, 2020

Modern machine learning, especially deep learning, faces a fundamental question: how to create models that efficiently deliver reliable predictions to meet the requirements of diverse applications running on various systems. This talk will introduce reuse-centric optimization, a novel direction for addressing this fundamental question. Reuse-centric optimization centers on harnessing reuse opportunities to enhance computing efficiency. It generalizes the reuse principle to a higher level and a larger scope through a synergy between programming systems and machine learning algorithms. Its exploitation of computation reuse spans the boundaries of machine learning algorithms, implementations, and infrastructures; the types of reuse it covers range from pre-trained Neural Network building blocks to preprocessed results and even memory bits; the scopes of reuse it leverages go from training pipelines of deep learning to variants of Neural Networks in ensembles; the benefits it generates extend from orders-of-magnitude faster search for a good, smaller Convolutional Neural Network (CNN) to the elimination of all space cost in protecting parameters of CNNs.
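To make the reuse principle concrete, the following minimal sketch (hypothetical, not taken from the speaker’s systems) memoizes an expensive shared computation, a stand-in for a pre-trained network trunk, so that many cheap model variants reuse a single result:

```python
# Illustrative sketch of computation reuse across model variants.
# The feature pipeline and cache policy here are hypothetical.
import hashlib
import numpy as np

_feature_cache = {}

def extract_features(image: np.ndarray) -> np.ndarray:
    """Expensive shared step (stand-in for a pre-trained CNN trunk)."""
    key = hashlib.sha1(image.tobytes()).hexdigest()
    if key not in _feature_cache:              # reuse: compute once...
        _feature_cache[key] = np.tanh(image @ image.T).mean(axis=0)
    return _feature_cache[key]                 # ...serve every variant

def predict_ensemble(image, heads):
    feats = extract_features(image)            # shared across the ensemble
    return [head(feats) for head in heads]     # only the cheap heads differ

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
heads = [lambda f, w=rng.standard_normal(64): float(f @ w) for _ in range(5)]
print(predict_ensemble(img, heads))            # trunk ran once, not five times
```

The same idea scales up when the cached quantity is a pre-trained building block or a preprocessed dataset rather than a single feature vector.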

Speaker Biography: Hui Guan is a Ph.D. candidate in the Department of Electrical and Computer Engineering at North Carolina State University, working with Dr. Xipeng Shen and Dr. Hamid Krim. Her research lies at the intersection of Machine Learning and Programming Systems, with a focus on improving Machine Learning (e.g., speed, scalability, reliability) through innovations in algorithms and programming systems (e.g., compilers, runtimes), as well as leveraging Machine Learning to improve High-Performance Computing.

February 18, 2020

I will describe how to use data science methods to understand and reduce inequality in two domains: criminal justice and healthcare. First, I will discuss how to use Bayesian modeling to detect racial discrimination in policing. Second, I will describe how to use machine learning to explain racial and socioeconomic inequality in pain.
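As a flavor of the first thread, here is a minimal, hypothetical sketch of a Bayesian “outcome test” on invented counts; it is not the speaker’s model, only an illustration of comparing posterior search hit rates across groups:

```python
# Toy Bayesian outcome test; all counts below are made up for illustration.
import numpy as np
from scipy import stats

searches = {"group_a": 1000, "group_b": 1000}  # hypothetical search counts
hits     = {"group_a":  300, "group_b":  220}  # searches that found contraband

# Beta(1, 1) prior + binomial likelihood -> Beta posterior per group.
post = {g: stats.beta(1 + hits[g], 1 + searches[g] - hits[g]) for g in searches}

# A systematically lower posterior hit rate for one group suggests its
# members are searched on weaker evidence.
a = post["group_a"].rvs(100_000, random_state=0)
b = post["group_b"].rvs(100_000, random_state=1)
print(f"P(rate_b < rate_a) = {np.mean(b < a):.3f}")
```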

Speaker Biography: Emma Pierson is a PhD student in Computer Science at Stanford, supported by Hertz and NDSEG Fellowships. Previously, she completed a master’s degree in statistics at Oxford on a Rhodes Scholarship. She develops statistical and machine learning methods to study two deeply entwined problems: reducing inequality and improving healthcare. She also writes about these topics for broader audiences in publications including The New York Times, The Washington Post, FiveThirtyEight, and Wired. Her work has been recognized by best paper (AISTATS 2018), best poster (ICML Workshop on Computational Biology), and best talk (ISMB High Throughput Sequencing Workshop) awards, and she has been named a Rising Star in EECS and Forbes 30 Under 30 in Science.

Video Recording >>

March 3, 2020

Computers seventy years ago knew only what could be loaded into their memory, but today they can access an entire internet of information. This expansion of knowledge has made them indispensable tools and assistants. However, even today, computing devices know little about the physical world, especially the environment immediately around them, due to a lack of perceptual capabilities. For this reason, they can tell you more about medieval literature and the traffic in Tokyo than about the home in which they reside. This lack of perception limits how smart and useful they can be, especially in everyday tasks that could be augmented with information and interactivity.

In this talk, I will present my research on sensing approaches that boost computer perception of the immediate physical world. Specifically, I have explored sensing technologies that allow one deployed sensor to cover a wide area for user activity and event recognition as well as sensing technologies that enable the manufacture of smarter everyday objects. Together, these sensing technologies allow computers to monitor the state, count, and intensity of activities, which in turn can enable higher-order applications such as personal informatics, accessibility, digital health, sustainability, and beyond.

Speaker Biography: Yang Zhang is a PhD candidate in the Human-Computer Interaction Institute at Carnegie Mellon University and is also a Qualcomm Innovation Fellow. His research lies in the technical aspects of Human-Computer Interaction (HCI), with a focus on sensing technologies that enhance computing devices with knowledge of the physical world around them. His research has received two best paper and four honorable mention awards at top venues, and extensive coverage from leading media outlets such as MIT Technology Review, Engadget, and The Wall Street Journal. As much of his research is highly applied, it has led to collaborations with industry partners, such as Facebook Reality Labs, Apple, and Microsoft Research. More information can be found on his website: https://yangzhang.dev.

Video Recording >>

March 5, 2020

Interactive devices are an essential component of any computing system. However, those that are widely used today (e.g., a touchscreen) do not fit well with the new forms of computing in the era of “Smart Things” and beyond, where computing is no longer restricted to a square machine or flat surface, but is instead carried out on smart everyday “things” (curved or flat, soft or rigid) that are at home, in the workspace, or worn on the body. As such, new interactive devices and software systems need to be developed to allow wide adoption of this technology for significant societal benefits.

In this talk, I will present three projects that exemplify our efforts in this space by demonstrating our approaches to overcoming some of the major challenges we face in hardware (e.g., sensing), software (e.g., user interface), and energy consumption. For sensing, I will present a soft sensor developed for contextual interactions on interactive fabrics, based on the precise detection and recognition of conductive objects commonly found in households and workplaces. For the user interface, I will introduce an on-fingertip keyboard optimized for eyes-free typing using micro finger gestures. For energy consumption, I will present a self-powered module for gesture recognition that utilizes solar cells for both energy harvesting and gesture sensing. I will also describe the visions behind these three lines of research.

Speaker Biography: Xing-Dong Yang is an Assistant Professor of Computer Science at Dartmouth College. His research is broadly in Human-Computer Interaction (HCI), where he investigates future interactive systems and brings interactivity to everyday objects for social good. Xing-Dong’s work has been recognized with a Best Paper Award at ACM UIST 2019 and Honorable Mention Awards at ACM CHI 2019, 2018, 2016, and 2010 and ACM MobileHCI 2009. Aside from academic publications, Xing-Dong’s work attracts major public interest via news coverage across a variety of media, including TV (e.g., Discovery Daily Planet), print (e.g., The Wall Street Journal, Forbes), and online news (e.g., MIT Technology Review, New Scientist).

Video Recording >>

March 10, 2020

Hardware provides the foundation of trust for computer systems. Defects in hardware designs routinely cause vulnerabilities that are exploitable by malicious software and compromise the security of the entire system. While mature hardware validation tools exist, they were primarily designed for checking functional correctness. How to systematically detect security-critical defects remains an open and challenging question.

In this talk, I will present my research on developing formal methods and practical tools for automated hardware security validation. First, I will discuss how to validate a hardware design given some security properties. I will present Coppelia, an end-to-end tool that employs hardware-oriented backward symbolic execution to find violations and generate exploit programs. Second, I will discuss how to efficiently build security properties. I will introduce SCIFinder, a methodology that leverages known vulnerabilities to mine and learn security invariants. I will then describe Transys, which automatically translates security properties across similar or different generations of hardware designs. These solutions have been applied to open-source RISC-V and OR1k CPUs, and have detected both existing and new vulnerabilities. I will conclude my talk by describing future directions for further improving formal methods to validate the security of modern hardware.
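For readers unfamiliar with the underlying machinery, the toy below (not Coppelia; the “buggy decode” is invented) shows the constraint-solving idea behind symbolic execution: ask an SMT solver for a concrete input that drives a design down a path violating a security property.

```python
# Toy symbolic-execution flavor using the Z3 SMT solver.
# The privilege-escalation "bug" modeled here is entirely hypothetical.
from z3 import BitVec, LShR, Solver, sat

instr = BitVec("instr", 32)   # symbolic instruction word
priv  = BitVec("priv", 1)     # current privilege level (0 = user mode)

s = Solver()
s.add(priv == 0)                        # attacker starts unprivileged
opcode = LShR(instr, 26) & 0x3F         # top 6 bits select the opcode
s.add(opcode == 0x3B)                   # path condition: reserved opcode...
s.add((instr & 0x1) == 1)               # ...plus a second decode condition
                                        # that (in this toy) raises privilege

if s.check() == sat:                    # the solver finds a witness input
    m = s.model()
    print(f"exploit instruction word: {m[instr].as_long():#010x}")
```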

Speaker Biography: Rui Zhang is a PhD candidate in the Computer Science Department at the University of North Carolina at Chapel Hill. Her research interest lies in the areas of hardware security and formal methods, with a focus on developing automated systems and tools for detecting vulnerabilities and validating the security of hardware designs. Her research has been recognized with a best paper award nomination at MICRO and selection as a candidate for Top Picks in Hardware and Embedded Security. She has been an invited participant at the Rising Stars in EECS Workshop and the Rising Stars in Computer Architecture Workshop. She received her master’s degree from Columbia University in 2015 and her bachelor’s degree from Peking University in 2013.

March 12, 2020

The ongoing boom in personal health technologies offers new potential to support people in collecting and interpreting data about their own health and well-being. However, there is a mismatch between what technology currently delivers (e.g., step counts, sleep scores) versus what people expect from it (i.e., personal health insights and recommendations). Current technologies fall short of their potential due to complex and interrelated challenges (e.g., in meeting personal needs, in data quality, in their integration into clinical practice). A holistic approach is therefore necessary, focusing on end-to-end design that understands the individual, their environments, and their contexts. My research focuses on human-centered approaches to collecting, interacting with, and using novel health data toward improving human well-being through personalized insights and recommendations. I explore this in two major thrusts of research: (1) I build specialized tools to enable people living with chronic conditions to better leverage their personal health data in understanding and managing their health; and (2) Through the process of creating and studying such tools, I systematize frameworks and design recommendations to assist future developers in designing personal health tools.

Speaker Biography: Ravi Karkar is a PhD Candidate at the University of Washington’s Paul G. Allen School of Computer Science & Engineering. His research has been published in leading human-computer interaction and medical venues, including CHI, UbiComp, DIS, JAMIA, and JHIR, receiving two Best Paper Honorable Mention awards (CHI 2017, DIS 2018). The research has also garnered strong interest from clinicians, researchers, and startups seeking to incorporate it in their work and has contributed to a patent application and several successful grants (a UW Innovation Award, an NIH R01, an NIH R21). He has served on the program committees for Pervasive Health and Graphics Interface, and as a student coordinator for DUB (the University of Washington’s cross-campus initiative in human-computer interaction and design research and education).

March 24, 2020

There has been a recent revolution in cryptography due to the introduction of lattice-based constructions. These are cryptographic schemes whose security relies on the presumed hardness of certain computational problems over ubiquitous (and beautiful) geometric objects called lattices. Their many applications (e.g., fully homomorphic encryption) and their security against adversaries with quantum computers have created some urgency to deploy lattice-based schemes widely over the next few years. For example, the National Institute of Standards and Technology is in the process of standardizing lattice-based cryptography, and Google has already implemented such a scheme in its Chrome Canary browser.

The security of the proposed schemes relies crucially on the assumption that our current best algorithms (both classical and quantum) for the relevant computational lattice problems cannot be improved by even a relatively small amount. I will discuss the state of the art in the study of this assumption. In particular, I will describe the fastest known algorithms for these problems (and potential directions to improve them) as well as a recent series of hardness results that use the tools of fine-grained complexity to provide strong evidence for the security of lattice-based cryptography.
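For context, the central object and its hardest problem can be stated as follows (standard definitions, not specific to the talk):

```latex
% A lattice generated by a basis B = (b_1, ..., b_n) of R^n:
\[
  \mathcal{L}(B) = \Bigl\{ \sum_{i=1}^{n} z_i b_i : z_i \in \mathbb{Z} \Bigr\}.
\]
% The (approximate) Shortest Vector Problem asks for a nonzero v in L(B) with
\[
  \|v\| \le \gamma(n) \cdot \lambda_1(\mathcal{L}),
  \qquad
  \lambda_1(\mathcal{L}) := \min_{u \in \mathcal{L} \setminus \{0\}} \|u\|,
\]
% where gamma(n) = 1 recovers exact SVP; the security assumption is that such
% problems stay hard even at the approximation factors used in cryptography.
```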

Speaker Biography: Noah Stephens-Davidowitz is the Microsoft Research Fellow at the Simons Institute in Berkeley. He has also been a postdoctoral researcher at MIT, Princeton, and the Institute for Advanced Study. He received his PhD from NYU, where his dissertation won the Dean’s Outstanding Dissertation Award in the sciences.

Much of Noah’s research uses the tools of theoretical computer science to answer fundamental questions about the security of widely deployed real-world cryptography, particularly post-quantum lattice-based cryptography. He is also interested more broadly in theoretical computer science, cryptography, and geometry.

March 26, 2020

Modern computing systems are mainly composed of IoT devices and smartphones. Most of these devices use ARM processors, which, along with flexible licensing, offer new security architecture features such as ARM TrustZone that enable the execution of secure applications in a trusted environment. Furthermore, well-supported, extensible, open-source embedded operating systems like Android allow manufacturers to quickly customize their operating systems with device drivers, thus reducing time-to-market.

Unfortunately, the proliferation of device vendors and the race to market have resulted in poor-quality low-level system software containing critical security vulnerabilities. Furthermore, patches for these vulnerabilities are merged into end products with significant delay, resulting in the Patch Gap, which puts the privacy and security of billions of users at risk.

In this talk, I will first show that existing techniques are inadequate for finding these security issues and how, with certain well-defined optimizations, we can find them precisely. Second, I will present my solution to the Patch Gap: a principled approach for automatically porting patches to vendor product repositories. Finally, I will present my ongoing work on automatically porting C to Checked C, which provides a low-overhead, backward-compatible, and memory-safe alternative to C that could be used on modern systems to prevent security vulnerabilities.

Speaker Biography: Aravind Machiry is a Ph.D. candidate in Computer Science at the University of California, Santa Barbara. He is a recipient of various awards, such as the Symantec Research Labs Fellowship and the UCSB Graduate Division Dissertation Fellowship. His work spans various aspects of systems security and program analysis. His research has resulted in several open-source security tools and Common Vulnerabilities and Exposures (CVEs) in critical system software such as kernel drivers, Trusted Execution Environments, and bootloaders. His research has also been recognized with a Distinguished Paper Award, the Internet Defense Prize, and an invitation to present at the CSAW Applied Research Competition. Previously, Aravind received his Master’s degree in Information Security from the Georgia Institute of Technology.

March 31, 2020

Deep learning (DL) is a powerful approach to modeling complex and large-scale data. However, DL models lack interpretable quantities and calibrated uncertainty. In contrast, probabilistic graphical modeling (PGM) provides a framework for formulating an interpretable generative process of data and a way to express uncertainty about what we do not know. How can we develop machine learning methods that bring together the expressivity of DL with the interpretability and calibration of PGM to build flexible models endowed with an interpretable latent structure that can be fit efficiently? I call this line of research deep probabilistic graphical modeling (DPGM). In this talk, I will discuss my work on developing DPGM on both the modeling and algorithmic fronts. In the first part of the talk, I will show how DPGM enables learning document representations that are highly predictive of sentiment without requiring supervision. In the second part of the talk, I will describe entropy-regularized adversarial learning, a scalable and generic algorithm for fitting DPGMs.
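For context, one common shape of such a model (a generic formulation, not a specific model from the talk) pairs an interpretable latent variable with a neural network likelihood and is commonly fit by maximizing a variational bound:

```latex
% Generative process: interpretable latent z, neural network f_theta
% mapping z to the parameters of the likelihood over data x.
\[
  z \sim p(z), \qquad x \mid z \sim p\bigl(x \mid f_\theta(z)\bigr).
\]
% Fitting maximizes the evidence lower bound (ELBO) on log p_theta(x):
\[
  \log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr]
  - \mathrm{KL}\bigl(q_\phi(z \mid x) \,\|\, p(z)\bigr).
\]
```

The entropy-regularized adversarial learning discussed in the talk is an alternative to this standard variational objective for fitting the same class of models.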

Speaker Biography: Adji Bousso Dieng is a PhD Candidate at Columbia University, where she is jointly advised by David Blei and John Paisley. Her research is in Artificial Intelligence and Statistics, bridging probabilistic graphical models and deep learning. Dieng is supported by a Dean Fellowship from Columbia University. She won a Microsoft Azure Research Award and a Google PhD Fellowship in Machine Learning. She was recognized as a rising star in machine learning by the University of Maryland. Prior to Columbia, Dieng worked as a Junior Professional Associate at the World Bank. She did her undergraduate studies in France, where she attended Lycée Henri IV and Télécom ParisTech, part of France’s Grandes Écoles system. She spent the third year of Télécom ParisTech’s curriculum at Cornell University, where she earned a Master’s in Statistics.

April 30, 2020

A recurring task at the intersection of humanities and computational research is pairing data collected by a traditional scholar with an appropriate machine learning technique, ideally in a form that creates minimal burden on the scholar while yielding relevant, interpretable insights.

In this talk, I first introduce myself and explain how my interests, educational background, and previous research have led to a focus on this general task. Next, I describe a specific effort to design a graph-aware autoencoding model of relational data that can be directly applied to a broad range of humanities research and easily extended with improved neural (sub)architectures. I then present results from an ongoing historical study of the post-Atlantic slave trade in Baltimore, illustrating several ways it benefits traditional scholars. Finally, I briefly mention a few ongoing studies with collaborators from various departments, and a rough outline of a course aimed at a mixture of CS and Krieger students.

Speaker Biography: Dr. Lippincott has been a research scientist in the Johns Hopkins Human Language Technology Center of Excellence since receiving his Ph.D. from the University of Cambridge in 2015 under the supervision of Anna Korhonen. He spent two years prior to graduation as research faculty at Columbia University, working with Owen Rambow and Nizar Habash. His ongoing work at the HLTCOE includes text classification, sentiment analysis, and unsupervised modeling of semi-structured, heterogeneous data.