Michael Oberst

Michael Oberst joins the Johns Hopkins University as an assistant professor of computer science. He is also a member of the Malone Center for Engineering in Healthcare and the Data Science and AI Institute.

Oberst received his PhD from the Massachusetts Institute of Technology. Prior to joining Johns Hopkins, he was a postdoctoral associate in the Machine Learning Department at Carnegie Mellon University.

Tell us a little bit about your research.

My group develops technical methods for validating, monitoring, and improving artificial intelligence and machine learning (AI/ML) systems in health care. Health care is a challenging problem domain where nearly every topic in “trustworthy AI” makes an appearance. The stakes are high—and so are our expectations for AI/ML systems. We don’t just want models that are accurate on historical datasets; we want systems that effectively augment human decision-makers, that continue to perform well across different patient populations and clinical settings, and that ultimately make a positive impact on patient outcomes. My research blends elements of causal inference, statistics, and machine learning with the goal of developing AI/ML systems that operate with a more robust understanding of causal relationships and a better sense of how they impact the world around them.

Tell us about a project you are excited about.

Imagine deploying an ML-based system that alerts doctors to a patient’s deterioration, or one that assists doctors in diagnosing diseases. Ultimately, the best measure of whether the system “works” is whether it actually improves patient outcomes. However, measuring this kind of impact is very different from measuring accuracy, because we need to quantify what would have happened if we hadn’t deployed the system at all. The gold-standard approach to measuring this impact is a randomized trial, just like how we evaluate new medications.

However, unlike drugs, we want to continuously update and improve AI/ML tools. These updates create a challenge: Running a randomized trial takes time, and by the time it concludes, its results may no longer apply to the latest AI/ML models. My group has been working on methods to resolve this challenge; recently, we published a paper on using the results from previous trials of AI/ML tools to estimate the impact of new models or of modifications to existing models.

Why this? What drives your passion for your field?

I’m motivated by intellectual puzzles and practical impact—and AI/ML in health care offers plenty of both! In the work I mentioned above, there is an interesting tension: In some sense, without actually deploying a new model, we simply don’t know what would happen. On the other hand, data collected from deploying a similar model should intuitively tell us something useful. I enjoy working with students and clinical collaborators to find the right middle ground: developing statistical methods that yield useful results while remaining explicit and transparent about their assumptions. I’m also particularly passionate about health care and health policy; prior to my PhD, I worked as the head of data science at a small health care startup, which opened my eyes to some of the challenges I’m still working on today. It also probably helps that most of my family works in health care—my mother is a doctor, my brother is a nurse, and my wife is a dietitian.

What classes are you teaching?

I teach Machine Learning, as well as a new course I created this year called Machine Learning in Healthcare, an advanced topics course that focuses on the methodological and practical challenges that arise when developing and deploying ML in health care. We cover a mix of advanced ML topics—such as causal inference, uncertainty quantification, and human-AI decision-making—and dig into health care data itself: where it comes from, why it’s often imperfect, and the thorny data science problems that arise when trying to make use of it, such as de-identification, privacy, missing data, and more.

Why are you excited to be joining the Johns Hopkins Department of Computer Science?

In my opinion, Johns Hopkins is one of the best places to work on AI/ML in health care, and I feel incredibly lucky to be here! The university’s Schools of Medicine and Public Health are among the best in the world, and I’ve already had the chance to start collaborating with researchers at both schools. Beyond that, I’ve been struck by the deep connections that already exist between Computer Science and Medicine at Hopkins—the most visible sign being the surgical robots in Hackerman Hall. There is also a lot of support and structure at Hopkins for pursuing the type of research that I do—from the new Data Science and AI Institute to the Malone Center for Engineering in Healthcare, both of which bring together researchers to solve a variety of interesting interdisciplinary problems.

Besides your work, what are some of your other hobbies and passions?

I’m a huge nerd and love playing Dungeons & Dragons and a slew of different board games. On the side, I build little software tools for personal use, which is a useful outlet for procrastination: one of my favorite side projects is a terminal-based reference manager for papers—admittedly, that’s a little work-adjacent! I also enjoy rock climbing and the outdoors in general.