Details:

WHERE: Hackerman B-17, unless otherwise noted
WHEN: 10:30 a.m. refreshments available; seminar runs from 10:45 a.m. to 12:00 p.m., unless otherwise noted

Recordings will be available online after each seminar.


Schedule of Speakers


Please note this seminar will take place at 12:00 p.m., with refreshments to follow.

Zoom link >>

Computer Science Seminar Series

Decision-Making with Internet-Scale Knowledge

Abstract: Machine learning models pre-trained on internet data have acquired broad knowledge about the world, but struggle to solve complex tasks that require extended reasoning and planning. Sequential decision-making, on the other hand, has empowered AlphaGo’s superhuman performance, but lacks visual, language, and physical knowledge about the world. In this talk, Sherry Yang will present her research toward enabling decision-making with internet-scale knowledge. First, she will illustrate how language models and video generation are unified interfaces that can integrate internet knowledge and represent diverse tasks, enabling the creation of a generative simulator to support real-world decision-making. Second, she will discuss her work on designing decision-making algorithms that can take advantage of generative language and video models as agents and environments. Combining pre-trained models with decision-making algorithms can effectively enable a wide range of applications such as developing chatbots, learning robot policies, and discovering novel materials.

Speaker Biography: Sherry Yang is a final-year PhD student at the University of California, Berkeley, advised by Pieter Abbeel; she is also a senior research scientist at Google DeepMind. Her research aims to develop machine learning models with internet-scale knowledge to make better-than-human decisions. To this end, she has developed techniques for generative modeling and representation learning from large-scale vision, language, and structured data, coupled with developing algorithms for sequential decision-making such as imitation learning, planning, and reinforcement learning. Yang initiated and led the Foundation Models for Decision Making workshop at the 2022 and 2023 Conferences on Neural Information Processing Systems, bringing together research communities in vision, language, planning, and reinforcement learning to solve complex decision-making tasks at scale. Before her current role, Yang received her bachelor’s and master’s degrees from the Massachusetts Institute of Technology, where she was advised by Patrick Winston and Julian Shun.

Zoom link >>

Computer Science Seminar Series

Stochastic Computer Graphics

Abstract: Computer graphics research has long been dominated by the interests of large film, television, and social media companies, forcing other, more safety-critical applications (e.g., medicine, engineering, security) to repurpose graphics algorithms originally designed for entertainment. In this talk, Silvia Sellán will advocate for a perspective shift in this field that allows researchers to design algorithms directly for these safety-critical application realms. She will show that this begins by reinterpreting traditional graphics tasks (e.g., 3D modeling and reconstruction) from a statistical lens and quantifying the uncertainty in algorithmic outputs, as exemplified by the research she has conducted for the past five years. She will end by mentioning several ongoing and future research directions that carry this statistical lens to entirely new problems in graphics and vision and into specific applications.

Speaker Biography: Silvia Sellán is a fifth-year computer science PhD student at the University of Toronto, working in computer graphics and geometry processing. She is a Vanier Doctoral Scholar, an Adobe Research Fellow, and the winner of the 2021 University of Toronto Arts & Science Dean’s Doctoral Excellence Scholarship. She has interned twice at Adobe Research and twice at the Fields Institute of Mathematics. She is also a founder and organizer of the Toronto Geometry Colloquium and a member of the ACM Community Group for Women in Computer Graphics Research.

Please note this seminar will take place at 12:00 p.m., with refreshments to follow.

Zoom link >>

Computer Science Seminar Series

Improving, Evaluating, and Detecting Long-Form LLM-Generated Text

Abstract: Recent advances in large language models have enabled them to process texts exceeding 100,000 tokens in length, fueling demand for long-form language processing tasks such as the summarization or translation of books. However, LLMs struggle to take full advantage of the information within such long contexts, which contributes to factually incorrect and incoherent text generation. In this talk, Mohit Iyyer will first demonstrate an issue that plagues even modern LLMs: their tendency to assign high probability to implausible long-form continuations of their input. He will then describe a contrastive sequence-level ranking model that mitigates this problem at decoding time and that can also be adapted to the reinforcement learning from human feedback alignment paradigm. Next, he will consider the growing problem of long-form evaluation: As the length of the inputs and outputs of long-form tasks grows, how do we even measure progress (via both humans and machines)? He will propose a high-level framework that first decomposes a long-form text into simpler atomic units and then evaluates each unit on a specific aspect, and he will demonstrate the framework’s effectiveness at evaluating factuality and coherence on tasks such as biography generation and book summarization. He will also discuss the rapid proliferation of LLM-generated long-form text, which plagues not only evaluation (e.g., via Mechanical Turkers using ChatGPT to complete tasks) but also society as a whole, and he will describe novel watermarking strategies to detect such text. Finally, he will conclude by discussing his future research vision, which aims to extend long-form language processing to multilingual, multimodal, and collaborative human-centered settings.

Speaker Biography: Mohit Iyyer is an associate professor in computer science at the University of Massachusetts Amherst, with a primary research interest in natural language generation. He is the recipient of Best Paper Awards at the 2016 and 2018 Annual Conferences of the North American Chapter of the Association for Computational Linguistics, an Outstanding Paper Award at the 2023 Conference of the European Chapter of the Association for Computational Linguistics, and a Best Demo Award at the 2015 Conference on Neural Information Processing Systems; he also received the 2022 Samsung AI Researcher of the Year award. Iyyer obtained his PhD in computer science from the University of Maryland, College Park in 2017 and spent the following year as a researcher at the Allen Institute for AI.

To be announced.

To be announced.

To be announced.

To be announced.

To be announced.

Past Speakers


View the recording >>

Computer Science Seminar Series

February 22, 2024

Abstract: Today, access to high-quality data has become the key bottleneck to deploying machine learning. Often, the data that is most valuable is locked away in inaccessible silos due to unfavorable incentives and ethical or legal restrictions. This is starkly evident in health care, where such barriers have led to highly biased and underperforming tools. In his talk, Sai Praneeth Karimireddy will describe how collaborative systems, such as federated learning, provide a natural solution; they can remove barriers to data sharing by respecting the privacy and interests of the data providers. Yet for these systems to truly succeed, three fundamental challenges must be confronted: These systems need to 1) be efficient and scale to large networks, 2) provide reliable and trustworthy training and predictions, and 3) manage the divergent goals and interests of the participants. Karimireddy will discuss how tools from optimization, statistics, and economics can be leveraged to address these challenges.

Speaker Biography: Sai Praneeth Karimireddy is a postdoctoral researcher at the University of California, Berkeley with Michael I. Jordan. Karimireddy obtained his undergraduate degree from the Indian Institute of Technology Delhi and his PhD at the Swiss Federal Institute of Technology Lausanne (EPFL) with Martin Jaggi. His research builds large-scale machine learning systems for equitable and collaborative intelligence and designs novel algorithms that can robustly and privately learn over distributed data (i.e., edge, federated, and decentralized learning). He also closely engages with industry and public health organizations (e.g., Doctors Without Borders, the Red Cross, and the Cancer Registry of Norway) to translate his research into practice. His work has been deployed across industry by Meta, Google, OpenAI, and Owkin and has been awarded the EPFL Patrick Denantes Memorial Prize for the best computer science thesis, the Dimitris N. Chorafas Foundation Award for exceptional applied research, an EPFL thesis distinction award, a Swiss National Science Foundation fellowship, and best paper awards at the International Workshop on Federated Learning for User Privacy and Data Confidentiality at the 2021 International Conference on Machine Learning and the International Workshop on Federated Learning: Recent Advances and New Challenges at the Thirty-Sixth Annual Conference on Neural Information Processing Systems.

View the recording >>

Institute of Assured Autonomy & Computer Science Seminar Series

February 20, 2024

Abstract: Sensing and actuation systems are entrusted with increasing intelligence to perceive and react to the environment, but their reliability often depends on the trustworthiness of their sensors. As process automation and robotics continue to evolve, sensing methods such as pressure, temperature, and motion sensing are used extensively in both conventional systems and rapidly emerging applications. This talk investigates the threats posed by out-of-band signals and discusses low-cost defenses against physical injection attacks on sensors. Xiali “Sharon” Hei will present results from her papers published at the USENIX Security Symposium, the ACM Conference on Computer and Communications Security (CCS), ACM AsiaCCS, the Secure and Trustworthy Deep Learning Systems Workshop, the Joint Workshop on CPS & IoT Security and Privacy, and the European Alliance for Innovation’s International Conference on Security and Privacy in Cyber-Physical Systems and Smart Vehicles.

Speaker Biography: Xiali “Sharon” Hei has been an Alfred and Helen M. Lamson Endowed Associate Professor in the School of Computing and Informatics at the University of Louisiana at Lafayette since August 2023. She was previously an Alfred and Helen M. Lamson Endowed Assistant Professor from August 2017 to July 2023. Prior to joining the University of Louisiana at Lafayette, she was an assistant professor at Delaware State University from 2015 to 2017 and at Frostburg State University from 2014 to 2015. Hei has received a number of awards, including an Alfred and Helen M. Lamson Endowed Professorship; an Outstanding Achievement Award in Externally Funded Research; numerous recognitions from the NSF, including a Track 4 Faculty Fellowship, a Secure and Trustworthy Cyberspace award, a Major Research Instrumentation award, an Established Program to Stimulate Competitive Research RII Track 1 award, and a Computer and Information Science and Engineering Research Initiation Initiative award; a Meta research award; funding from the Louisiana Board of Regents Support Fund; a Delaware Economic Development Office grant; a Best Paper Award at the European Alliance for Innovation’s International Conference on Security and Privacy in Cyber-Physical Systems and Smart Vehicles; a Best Poster Runner-Up Award at the 2014 ACM International Symposium on Mobile Ad Hoc Networking and Computing; a Dissertation Completion Fellowship; the Bronze Award for Best Graduate Project in the Future of Computing Competition; and more.
Her papers have been published at venues such as the USENIX Security Symposium, the ACM Conference on Computer and Communications Security, the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Communications (ICC), the IEEE European Symposium on Security and Privacy (EuroS&P), the International Symposium on Research in Attacks, Intrusions and Defenses, and the ACM Asia Conference on Computer and Communications Security. Hei is a TPC member of the USENIX Security Symposium, IEEE EuroS&P, PST, the IEEE Global Communications Conference, SafeThings, AutoSec, IEEE ICC, the International Conference on Wireless Artificial Intelligent Computing Systems and Applications, and more. She has been an IEEE senior member since 2019. Hei earned a BS in electrical engineering from Xi’an Jiaotong University and an MS in software engineering from Tsinghua University.

View the recording >>

Computer Science Seminar Series

February 15, 2024

Abstract: Replicability is vital to ensuring scientific conclusions are reliable, but failures of replicability have been a major issue in nearly all scientific areas of study; machine learning is no exception. While failures of replicability in machine learning are multifactorial, one obstacle to replication efforts is the ambiguity in whether or not a replication effort was successful when many good models exist for a task. In this talk, we will discuss a new formalization of replicability for batch and reinforcement learning algorithms and demonstrate how to solve fundamental tasks in learning under the constraints of replicability. We will also discuss how replicability relates to other algorithmic desiderata in responsible computing, such as differential privacy.

Speaker Biography: Jessica Sorrell is a postdoctoral researcher at the University of Pennsylvania, where she works with Aaron Roth and Michael Kearns. She completed her PhD at the University of California San Diego, advised by Russell Impagliazzo and Daniele Micciancio. She is broadly interested in the theoretical foundations of responsible computing and her work spans a variety of pressing issues in machine learning, such as replicability, privacy, and fairness.

View the recording >>

Computer Science Seminar Series

February 12, 2024

Abstract: There is an enormous data gap between how AI systems and children learn language: The best LLMs now learn language from text with a word count in the trillions, whereas it would take a child roughly 100K years to reach those numbers through speech. There is also a clear generalization gap: Whereas machines struggle with systematic generalization, people excel. For instance, once a child learns how to “skip,” they immediately know how to “skip twice” or “skip around the room with their hands up” due to their compositional skills. In this talk, Brenden Lake will describe two case studies in addressing these gaps. The first addresses the data gap, in which deep neural networks were trained from scratch, not on large-scale data from the web, but through the eyes and ears of a single child. Using head-mounted video recordings from a child, this study shows how deep neural networks can acquire many word-referent mappings, generalize to novel visual referents, and achieve multi-modal alignment. The results demonstrate how today’s AI models are capable of learning key aspects of children’s early knowledge from realistic input. The second case study addresses the generalization gap. Can neural networks capture human-like systematic generalization? This study addresses a 35-year-old debate catalyzed by Fodor and Pylyshyn’s classic article, which argued that standard neural networks are not viable models of the mind because they lack systematic compositionality—the algebraic ability to understand and produce novel combinations from known components. This study shows how neural networks can achieve humanlike systematic generalization when trained through meta-learning for compositionality (MLC), a new method for optimizing the compositional skills of neural networks through practice. With MLC, a neural network can match human performance and solve several machine learning benchmarks. 
Given this work, we’ll discuss the paths forward for building machines that learn, generalize, and interact in more humanlike ways based on more natural input.

Speaker Biography: Brenden M. Lake is an assistant professor of psychology and data science at New York University. He received his MS and BS in symbolic systems from Stanford University in 2009 and his PhD in cognitive science from the Massachusetts Institute of Technology in 2014. Lake was a postdoctoral data science fellow at NYU from 2014–2017. He is a recipient of the Robert J. Glushko Prize for Outstanding Doctoral Dissertation in Cognitive Science, he was named an Innovator Under 35 by MIT Technology Review, and his research was selected by Scientific American as one of the 10 most important advances of 2016. Lake’s research focuses on computational problems that are easier for people than they are for machines, such as learning new concepts, creating new concepts, learning to learn, and asking questions.

View the recording >>

Computer Science Seminar Series

February 6, 2024

Abstract: In the era of big data, the significant growth in graph size renders numerous traditional algorithms, including those with polynomial or even linear time complexity, inefficient. Therefore, we need novel approaches for efficiently processing massive graphs. In this talk, Zihan Tan will discuss two modern approaches towards this goal: structure exploitation and graph compression. He will first show how to utilize graph structure to design better approximation algorithms, showcasing his work on the Graph Crossing Number problem. He will then show how to compress massive graphs into smaller ones while preserving their flow/cut/distance structures, thereby obtaining faster algorithms.

Speaker Biography: Zihan Tan is a postdoctoral associate at DIMACS, Rutgers University. Before joining DIMACS, he obtained his PhD from the University of Chicago, where he was advised by Julia Chuzhoy. He is broadly interested in theoretical computer science, with a focus on graph algorithms and graph theory.

View the recording >>

Computer Science Seminar Series

February 1, 2024

Abstract: This talk will discuss the area of algorithms with predictions, also known as learning-augmented algorithms. These methods parameterize algorithms with machine-learned predictions, enabling them to tailor their decisions to input distributions and improve performance in runtime, space, or solution quality. The talk will cover recent developments on leveraging machine-learned predictions to improve the runtime efficiency of algorithms for optimization and data structures, as well as how to achieve algorithms that are “instance-optimal” when the predictions are accurate and whose performance degrades gracefully when the predicted advice contains errors. Using examples such as bipartite matching, the talk will illustrate the area’s potential to realize significant improvements in algorithmic efficiency.

Speaker Biography: Ben Moseley is the Carnegie-Bosch Associate Professor of Operations Research at Carnegie Mellon University and is a consulting scientist at Relational AI. He obtained his PhD from the University of Illinois. During his career, his papers have won best paper awards at IPDPS (2015), SPAA (2013), and SODA (2010). His papers have been recognized as top publications with honors such as Oral Presentations at NeurIPS (2021, 2017) and NeurIPS Spotlight Papers (2023, 2018). He has served as area chair for ICML, ICLR, and NeurIPS every year since 2020 and has been on many program committees, including SODA (2022, 2018), ESA (2017), and SPAA (2024, 2022, 2021, 2016). He was an associate editor for IEEE Transactions on Knowledge and Data Engineering from 2018–2022 and has served as associate editor of Operations Research Letters since 2017. He has won an NSF CAREER Award, two Google Research Faculty Awards, a Yahoo ACE Award, and an Infor faculty award. He was selected as a Top 50 Undergraduate Professor by Poets & Quants. His research interests broadly include algorithms, machine learning, and discrete optimization. He is currently excited about robustly incorporating machine learning into decision-making processes.

View the recording >> (Passcode: !hC5Xn8T)

Computer Science Seminar Series

January 25, 2024

Abstract: The alignment problem in AI is currently framed in a variety of ways: It is the challenge of building AI systems that do as their designers intend, or as their users prefer, or as would benefit society. In this talk, Gillian Hadfield connects the AI alignment problem to the far more general problem of how humans organize cooperation in societies. From the perspective of an economist and legal scholar, alignment is the problem of how to organize society to maximize human well-being—however that is defined. Hadfield will argue that “solving” the AI alignment problem is better thought of as the problem of how to integrate AI systems, especially agentic systems, into our human normative systems. She will present results from collaborations with computer scientists that begin the study of how to build normatively competent AI systems—AI that can read and participate in human normative systems—and normative infrastructure that can support AI’s normative competence.

Computer Science Seminar Series

January 23, 2024

Abstract: The biochemical functions of proteins, such as catalyzing a chemical reaction or binding to a virus, are typically conferred by the geometry of only a handful of atoms. This arrangement of atoms, known as a motif, is structurally supported by the rest of the protein, referred to as a scaffold. A central task in protein design is to identify a diverse set of stabilizing scaffolds to support a motif known or theorized to confer function. This long-standing challenge is known as the motif-scaffolding problem. In this talk, Brian Trippe describes a statistical approach he has developed to address the motif-scaffolding problem. His approach involves (1) estimating a distribution supported on realizable protein structures and (2) sampling scaffolds from this distribution conditioned on a motif. For the first step, he adapts diffusion generative models to fit example protein structures from nature. For the second step, he develops sequential Monte Carlo algorithms to sample from the conditional distributions of these models. He finally describes how, with experimental and computational collaborators, he has generalized and scaled this approach to generate and experimentally validate hundreds of proteins with various functional specifications.

Speaker Biography: Brian Trippe is a postdoctoral fellow at Columbia University in the Department of Statistics and a visiting researcher at the Institute for Protein Design at the University of Washington. He completed his PhD in computational and systems biology at the Massachusetts Institute of Technology, where he worked on Bayesian methods for inference in high-dimensional linear models. In his research, Trippe develops statistical machine learning methods to address challenges in biotechnology and medicine, with a focus on generative modeling and inference algorithms for protein engineering.

View the recording >>

Computer Science Seminar Series

January 18, 2024

Abstract: Mobile (cellular) networks traditionally have been closed systems, developed as vertically integrated, black-box appliances by a few equipment vendors and deployed by a handful of national-scale mobile network operators in each country—all in all, a small ecosystem. However, we have witnessed a radical transformation in the design and deployment of mobile networking systems in the recent past that reflects a path toward greater openness. In this talk, Marina will give his perspective on the key drivers, economic and beyond, behind this trend and the main enablers for this transformation. He will complement this by outlining his key research contributions in this direction. Further, he will highlight two of his recent works: (1) on rearchitecting the mobile core control plane for efficient cloud-native operation and to be more open (i.e., better suited for multi-vendor realization); and (2) on radio access network root cause analysis as a key challenge for Open RAN, as well as a compelling use case of the AI-powered and data-driven operations it enables.

Speaker Biography: Mahesh Marina is a professor in the School of Informatics at the University of Edinburgh, where he leads the Networked Systems Research Group. He is currently spending his sabbatical at Johns Hopkins University’s Department of Computer Science as a visiting professor. Previously, Marina was a Turing Fellow at the Alan Turing Institute, the UK’s national institute for data science and AI, from 2018 to 2023; he also served as the director of the Institute for Computing Systems Architecture within Informatics@Edinburgh for four years, until July 2022. Prior to joining the University of Edinburgh, Marina had a two-year postdoctoral stint at the UCLA Computer Science Department after earning his PhD in computer science from the State University of New York at Stony Brook. He has previously held visiting researcher positions at ETH Zurich and at Ofcom, the UK’s telecommunications regulator, at its headquarters in London. Marina is an ACM Distinguished Member and an IEEE Senior Member.

View the recording >>

Institute of Assured Autonomy & Computer Science Seminar Series

January 16, 2024

Abstract: In early 2020, the U.S. government revealed its belief that China might be able to eavesdrop on 5G communications through Huawei network equipment. This has enormous ramifications for DOD and State Department communications overseas, since these backdoors could provide our adversaries with information that allows them to glean insights into operations or harm personnel. Later that same year, wired and wireless networks in the greater Nashville area failed when a bomb damaged a single network facility. The outage affected nearly every aspect of modern society, including grounding flights, disrupting economic activity, and disconnecting 911. These two events highlight the enormous challenge of securing critical communications: We need to secure our communications against threats within the telecommunications infrastructure and secure them from external attack. This talk will discuss both of these challenges. First, Marder will use the Nashville outage as a blueprint to show that it remains surprisingly easy for attackers to induce large-scale communications outages around the U.S. without any insider information or specialized access. Second, he will discuss innovative methods for identifying and circumventing the potential threats placed by nation-state adversaries within the infrastructure, along with methods for ensuring that communications only traverse benign infrastructure.

Speaker Biography: Alex Marder is an assistant professor of computer science at Johns Hopkins University and a member of the Institute for Assured Autonomy. Marder’s research covers a wide breadth of networking areas, including the use of empirical analyses and machine learning to evaluate and improve the security and performance of wired and wireless networks. His current work leverages a deep understanding of network architecture and deployment to design secure 5G communication networks for the Department of Defense, reveal security weaknesses in domestic internet access networks, and provide a better understanding of broadband inequity. He received a BS from Brandeis University and a PhD from the University of Pennsylvania. Prior to joining Johns Hopkins, he was a research scientist at CAIDA at UC San Diego.

View the recording >>

Computer Science Speaker Series

December 5, 2023

Abstract: In an interconnected world, effective policymaking increasingly relies on understanding large-scale human networks. However, there are many challenges to understanding networks and how they impact decision-making, including (1) how to infer human networks, which are typically unobserved, from data; (2) how to model complex processes, such as disease spread, over networks and inform decision-making; and (3) how to estimate the impacts of decisions, in turn, on human networks. In this talk, I’ll discuss how I’ve addressed each of these challenges in my research. I’ll focus mainly on COVID-19 pandemic response as a concrete application, where we’ve developed new methods for network inference and epidemiological modeling, and have deployed decision-support tools for policymakers. I’ll also touch on other network-driven challenges, including political polarization and supply chain resilience.

View the recording >>

Institute of Assured Autonomy & Computer Science Seminar Series

November 14, 2023

Abstract: Many deep issues plaguing today’s financial markets are symptoms of a fundamental problem: The complexity of algorithms underlying modern finance has significantly outpaced the power of traditional tools used to design and regulate them. At Imandra, we have pioneered the application of formal verification to financial markets, where firms like Goldman Sachs, Itiviti, and OneChronos already rely upon Imandra’s algorithm governance tools for the design, regulation, and calibration of many of their most complex algorithms. With a focus on financial infrastructure (e.g., the matching logics of national exchanges and dark pools), we will describe the landscape and illustrate our Imandra system on a number of real-world examples. We’ll sketch many open problems and future directions along the way.

Speaker Biography: Grant Passmore is the co-founder and co-CEO of Imandra Inc. Passmore is a widely published researcher in formal verification and symbolic AI and has more than fifteen years of industrial formal verification experience. He has been a key contributor to the safety verification of algorithms at Cambridge, Carnegie Mellon, Edinburgh, Microsoft Research, and SRI. He earned his PhD on automated theorem proving in algebraic geometry from the University of Edinburgh, is a graduate of UT Austin (BA in mathematics) and the Mathematical Research Institute in the Netherlands (master class in mathematical logic), and is a life member of Clare Hall, University of Cambridge.

View the recording >>

Computer Science Seminar Series

October 19, 2023

Abstract: The security and architecture communities will remember the past five years as the era of side channels. Starting from Spectre and Meltdown, time and again we have seen how basic performance-improving features can be exploited to violate fundamental security guarantees. Making things worse, the rise of side channels points to a much larger problem, namely the presence of large gaps in the hardware-software execution contract on modern hardware. In this talk, I will give an overview of this gap, in terms of both security and performance. First, I will give a high-level survey on speculative execution attacks such as Spectre and Meltdown. I will then talk about how speculative attacks are still a threat to both kernel and browser isolation primitives, highlighting new issues on emerging architectures. Next, from the performance perspective, I will discuss new techniques for microarchitectural code optimizations, with an emphasis on cryptographic protocols and other compute-heavy workloads. Here I will show how seemingly simple, functionally equivalent code modifications can lead to significant changes in the underlying microarchitectural behavior, resulting in dramatic performance improvements. The talk will be interactive and include attack demonstrations.

Speaker Biography: Daniel Genkin is an Alan and Anne Taetle Early Career Associate Professor at the School of Cybersecurity and Privacy at Georgia Tech. Daniel’s research interests are in hardware and system security, with particular focus on side channel attacks and defenses. Daniel’s work has won the Distinguished Paper Award at IEEE Security and Privacy, an IEEE Micro Top Pick, and the Black Hat Pwnie Awards, as well as top-3 paper awards at multiple conferences. Most recently, Daniel has been part of the team performing the first analysis of speculative and transient execution, resulting in the discovery of Spectre, Meltdown, and follow-ups. Daniel has a PhD in computer science from the Technion – Israel Institute of Technology and was a postdoctoral fellow at the University of Pennsylvania and the University of Maryland.

View the recording >>

Institute of Assured Autonomy & Computer Science Seminar Series

October 17, 2023

Abstract: How do we make machine learning as rigorously tested and reliable as any medication or diagnostic test? ML has the potential to improve decision-making in health care, from predicting treatment effectiveness to diagnosing disease. However, standard retrospective evaluations can give a misleading sense of how well models will perform in practice. Evaluation of ML-derived treatment policies can be biased when using observational data, and predictive models that perform well in one hospital may perform poorly in another. In this talk, I will introduce new tools to proactively assess and improve the reliability of machine learning in health care. A central theme will be the application of external knowledge, including review of patient records, incorporation of limited clinical trial data, and interpretable stress tests. Throughout, I will discuss how evaluation can directly inform model design.

Speaker Biography: Michael Oberst is an incoming assistant professor of computer science at Johns Hopkins and is currently a postdoc in the Machine Learning Department at Carnegie Mellon University. His research focuses on making sure that machine learning in health care is safe and effective, using tools from causal inference and statistics. His work has been published at a range of machine learning venues (NeurIPS, ICML, AISTATS, KDD), including work with clinical collaborators from Mass General Brigham, NYU Langone, and Beth Israel Deaconess Medical Center. He has also worked on clinical applications of machine learning, including work on learning effective antibiotic treatment policies (published in Science Translational Medicine). He earned his undergraduate degree in statistics from Harvard and his PhD in computer science from MIT.