Details:
WHERE: B-17 Hackerman Hall, unless otherwise noted
WHEN: 10:30 a.m. refreshments available, seminar runs from 10:45 a.m. to 12 p.m., unless otherwise noted
Recordings will be available online after each seminar.
Schedule of Speakers
Please note this seminar will take place in 228 Malone Hall.
Computer Science and Biomedical Engineering Seminar Series
“Proteomics at Genome Scale: Connecting Molecules to Systems with Biological Foundation Models”
Abstract: Understanding the mechanisms linking genetic sequence to cellular function remains a central challenge in biology. Existing approaches are often computationally expensive or fail to generalize beyond well-studied model organisms and protein families. Samuel Sledzieski’s work leverages protein language models to connect molecular structure to systems biology at genome scale. He will highlight three applications where protein language modeling unlocks new capabilities: training on molecular dynamics simulations to predict protein conformational dynamics, enabling an analysis of the allosteric behavior of KRAS; building de novo protein-protein interaction networks in non-model organisms, revealing previously uncharacterized proteins involved in coral stress response; and high-throughput screening of massive small-molecule libraries for drug-target interactions, identifying novel kinase inhibitors with experimentally validated nanomolar affinity. Sledzieski will discuss how recent advances in contrastive learning, parameter-efficient fine-tuning, and multimodal representation learning address key computational barriers to effective genome-scale modeling. Finally, he will propose a research plan to model heterogeneity in protein structure and molecular interactions across cellular contexts.
Speaker Biography: Samuel Sledzieski is a Flatiron Research Fellow at the Flatiron Institute Center for Computational Biology and a visiting researcher at the Lewis-Sigler Institute for Integrative Genomics at Princeton University. His research uses protein language models to integrate molecular biophysics with systems genomics, with the ultimate goal of mapping the mechanisms of cellular behavior and complex disease. He has developed and released several open-source machine learning models, including D-SCRIPT, ConPLex, and RocketSHP, and has held research positions at Microsoft Research, Cellarity, Serinus Biosciences, and the Centre Scientifique de Monaco. A recipient of the NSF Graduate Research Fellowship, Sledzieski received his PhD (2024) and MS (2021) in computer science from the Massachusetts Institute of Technology, and his BS (2019) from the University of Connecticut.
Past Speakers
Computer Science Seminar Series
January 29, 2026
Abstract: Machine learning and AI are not standalone artifacts: They are ecosystems where foundation models are adapted and deployed through layered pipelines spanning developers, platforms, users, and regulators. This talk explores how the structure of these ecosystems shapes the distribution of value and risk, and determines system-level properties like safety and fairness. Benjamin Laufer begins with a game-theoretic model of the interaction between general-purpose producers and domain specialists, using it to examine how regulatory design shapes incentives and equilibrium behaviors. He then connects these formal insights to empirical measurements from 1.86 million open-source AI models, reconstructing lineage networks to quantify how behaviors and failures propagate through fine-tuning. Finally, turning from the descriptive structure of the ecosystem to the design of the algorithms themselves, Laufer describes his work in algorithmic fairness, framing the identification of less discriminatory algorithms as a search problem with provable statistical guarantees. He closes by outlining a forward-looking research agenda aimed at building technical infrastructure and policy mechanisms for steering AI ecosystems toward robust, accountable, and democratic outcomes.
Speaker Biography: Benjamin Laufer is a PhD candidate at Cornell Tech, advised by Jon Kleinberg and Helen Nissenbaum. A recipient of a LinkedIn PhD Fellowship and three Rising Stars awards, Laufer researches how data-driven and AI technologies behave and interact with society. He previously worked as a research intern at Microsoft Research and a data scientist at Lime, and holds a BSE in operations research and financial engineering from Princeton University.
Computer Science Seminar Series
January 20, 2026
Abstract: Despite great potential, there is a growing gap between what AI systems promise and what they deliver, with real human costs. AI auditing is the practice of independently evaluating deployed AI systems to determine how they behave, what risks they pose, and whether they meet their intended objectives. This interdisciplinary endeavor requires both a technical expansion of our current AI evaluation paradigm and a framework for ensuring that audit investigations are sufficiently material for downstream legal actions and normative debates. At the intersection of law and public policy, applied economics, and computer science, we can advance AI auditing policy and practice in material ways—by anchoring notions of engineering responsibility in AI development, expanding our vocabulary of AI evaluation methods, and pushing to connect AI audit outcomes to organizational and legal consequences. Through case studies of AI use in health care and government, we demonstrate how novel evaluation methods such as incident reporting, workflow simulations, and pilot experiments can supplement standard practices like data benchmarking to more adequately inform AI governance, shaping a range of outcomes from documentation and procurement to regulatory enforcement and product safety compliance. As auditing makes its way into key policy proposals as a primary mechanism for AI accountability, we must think critically about the technical and institutional infrastructure required for this form of oversight to successfully enable safe, widespread AI adoption.
Speaker Biography: Inioluwa Deborah Raji is a researcher at the University of California, Berkeley, who is interested in algorithmic auditing. She has worked closely with industry, civil society, and academia to push forward various projects that operationalize ethical considerations in machine learning practice and advance benchmarking and model evaluation norms in the field. In particular, Raji aims to study how model engineering choices (from evaluation to data choices) impact consumer protection, product liability, procurement, anti-discrimination practice, and other forms of legal and institutional accountability related to functional harms. She is on the advisory boards of the Center for Democracy and Technology AI Governance Lab, the Health AI Partnership, TeachAI, REAL ML, and the Leadership Conference on Civil and Human Rights Center for Civil Rights and Technology. For her efforts, Raji has been named to Forbes’ 30 Under 30, MIT Technology Review’s Innovators Under 35, and TIME’s 100 Most Influential People in AI lists. She is also the recipient of the 2024 Tech For Humanity Prize and the 2024 Mozilla Rise25 award, and is a co-recipient of the Electronic Frontier Foundation Pioneer Award along with Joy Buolamwini and Timnit Gebru. Raji received her bachelor of applied science in engineering science from the University of Toronto. She is currently completing her PhD in computer science at UC Berkeley.
Computer Science Seminar Series
January 20, 2026
Abstract: People with disabilities are marginalized by inaccessible social infrastructure and technology, facing various challenges in all aspects of their lives. Conventional assistive technologies commonly provide generic solutions for a given disability population and do not consider users’ individual and contextual differences, leading to high abandonment rates. Yuhang Zhao’s research seeks to thoroughly understand the experiences and needs of people with disabilities and to create intelligent assistive technologies adaptive to user contexts, including their abilities, environments, and intents, providing effective, unobtrusive support tailored to user needs. In this talk, Zhao will discuss how she leverages state-of-the-art artificial intelligence, augmented reality, and eye-tracking technologies to design and develop context-aware assistive technologies. She will divide user context into external factors (e.g., surrounding environments) and internal factors (e.g., intents, abilities) and present her work on scene-aware, intent-aware, and ability-aware systems, respectively. Specifically, she will discuss: (1) CookAR, a wearable scene-aware AR system that distinguishes and augments the affordances of kitchen tools (e.g., knife blade vs. knife handle) for low-vision users to facilitate safe and efficient interactions; (2) GazePrompt, an eye-tracking-based, intent-aware system that supports low-vision users in reading; and (3) FocusView, a customizable video interface that allows users with ADHD to tailor video presentations to their sensory abilities. Zhao will conclude her talk by highlighting future research directions toward AI-powered context-aware systems for people with disabilities.
Speaker Biography: Yuhang Zhao is an assistant professor in the Department of Computer Sciences at the University of Wisconsin–Madison. Her research interests lie in human-computer interaction (HCI), accessibility, augmented/virtual reality, and AI-powered systems. Zhao leads the madAbility Lab at UW–Madison, which designs and builds intelligent interactive systems to enhance human abilities. She has published frequently in top-tier conferences and journals in the field of HCI and accessibility (e.g., the ACM Conference on Human Factors in Computing Systems, the ACM Symposium on User Interface Software and Technology, the International ACM Special Interest Group on Accessible Computing Conference on Computers and Accessibility) and has received several U.S. and international patents. Her research has been funded by various agencies, including the NSF, the National Institutes of Health, the National Institute of Standards and Technology, and corporate sponsors such as Meta and Apple. Her work has received multiple Best Paper honorable mention awards and recognitions for contributions to diversity and inclusion and has been covered by various media outlets (e.g., TNW, New Scientist). Beyond paper publications, she disseminates her research outcomes via open-source toolkits and guidelines for broader impact. Zhao received her PhD in information science from Cornell University and her BA and MS in computer science from Tsinghua University.
Computer Science Seminar Series
January 15, 2026
Abstract: In 2025, frontier AI developers started warning that their AI systems were beginning to cross risk thresholds for dangerous cyber, chemical, and biological capabilities. This is unfortunate given that closed-weight AI systems remain persistently vulnerable to prompt-injection attacks and open-weight systems to malicious fine-tuning. Reinforcement learning from human feedback and refusal training aren’t enough. This presentation will focus on adversarial attacks that target model internals and their uses for making frontier AI safeguards “run deep.” In particular, we will focus on what technical tools can help us make open-weight AI systems safer. Along the way, we will discuss what AI safety can learn from the design of lightbulbs and why you should keep a close eye on Arkansas Attorney General Tim Griffin in 2026.
Speaker Biography: Stephen “Cas” Casper is a final-year PhD student at the Massachusetts Institute of Technology in the Algorithmic Alignment Group, where he is advised by Dylan Hadfield-Menell. Casper leads a research stream for the MATS Program and mentors for ERA and GovAI. He is also a writer for the International AI Safety Report and the Singapore Consensus on Global AI Safety Research Priorities. Casper’s research focuses on AI safeguards and governance, with publications in the Conference on Neural Information Processing Systems; the Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence; Nature; the ACM Conference on Fairness, Accountability, and Transparency; the Conference on Empirical Methods in Natural Language Processing; the Institute of Electrical and Electronics Engineers Conference on Secure and Trustworthy Machine Learning; Transactions on Machine Learning Research; and the annual conference of the International Association for Safe and Ethical AI—as well as in a number of workshops and over 20 press articles and newsletters. Learn more on his Google Scholar page or personal website.
Computer Science Seminar Series
January 15, 2026
Abstract: AI evaluations inform critical decisions, from the valuations of trillion-dollar companies to policies on regulating AI. Yet evaluation methods have failed to keep pace with deployment, creating an evaluation crisis where performance in the lab fails to predict real-world utility. In this talk, Sayash Kapoor will discuss the evaluation crisis in a high-stakes domain: AI-based science. Across dozens of fields, from medicine to political science, Kapoor finds that flawed evaluation practices have led to overoptimistic claims about AI’s accuracy, affecting hundreds of published papers. To address these evaluation failures, he presents a consensus-based checklist that identifies common pitfalls and consolidates best practices for researchers adopting AI, as well as a benchmark to foster the development of AI agents that can verify scientific reproducibility. AI evaluation failures affect many other applications; beyond science, Kapoor examines how AI agent benchmarks miss many failure modes and presents systems to identify these errors. He examines inference scaling, a recent technique to improve AI capabilities, and shows that claims of improvement fail to hold under realistic conditions. Finally, Kapoor discusses how better AI evaluation can inform policymaking, drawing on his work on evaluating the risks of open foundation models and his engagement with state and federal agencies. Why does the evaluation crisis persist? The AI community has poured enormous resources into building evaluations for models, but not into investigating how models impact the world. To address the crisis, we need to build a systematic science of AI evaluation to bridge the gap between benchmark performance and real-world impact.
Speaker Biography: Sayash Kapoor is a computer science PhD candidate and a Porter Ogden Jacobus Fellow at Princeton University, as well as a senior fellow at Mozilla. He is a co-author of AI Snake Oil, one of Nature’s ten best books of 2024. Kapoor’s newsletter is read by over 65,000 AI enthusiasts, researchers, policymakers, and journalists. His work has been published in leading scientific journals such as Science and Nature Human Behaviour, as well as conferences like the Conference on Neural Information Processing Systems and the International Conference on Machine Learning. Kapoor has written for mainstream outlets including The Wall Street Journal and Wired, and his work has been featured by The New York Times, The Atlantic, The Washington Post, Bloomberg News, and many more. He has been recognized with various awards, including a Best Paper Award at the ACM Conference on Fairness, Accountability, and Transparency; an Impact Recognition Award at the ACM Conference on Computer-Supported Cooperative Work and Social Computing; and inclusion in TIME’s inaugural list of the 100 Most Influential People in AI.
Computer Science Seminar Series
January 13, 2026
Abstract: Governing AI is a grand challenge for society. Rishi Bommasani’s research provides the foundations for the scientific field of AI policy: How do we understand the societal impact of AI and how do we use our understanding to produce evidence-based AI policy? Bommasani’s research introduces new paradigms for measuring frontier models, deployed systems, and AI companies. Alongside his research, he will cover his work in multiple jurisdictions to demonstrate how AI research can impact public policy.
Speaker Biography: Rishi Bommasani is a senior research scholar at the Stanford Institute for Human-Centered AI researching the societal and economic impact of AI. His research has received several recognitions at machine learning conferences and has been covered by The New York Times, Nature, Science, The Washington Post, and The Wall Street Journal. Bommasani’s research shapes public policy: He is the lead author of the California Report on Frontier AI Policy that led to the first U.S. laws on frontier AI; he is an independent expert chair of the European Union AI Act General-Purpose Code of Practice, which clarifies the world’s first comprehensive laws on frontier AI; and he is an author of the International Scientific Report on the Safety of Advanced AI. Bommasani recently completed his PhD in computer science at Stanford University, where he was advised by Percy Liang and Dan Jurafsky and was funded by Stanford’s Gerald J. Lieberman Fellowship and the NSF Graduate Research Fellowship.
Archive
From the calendar years 1997–2025.
- Fall 2025
- Spring 2025
- Fall 2024
- Spring 2024
- Fall 2023
- Spring 2023
- Fall 2022
- Summer 2022
- Spring 2022
- Fall 2021
- Summer 2021
- Spring 2021
- Fall 2020
- Spring 2020
- Fall 2019
- Summer 2019
- Spring 2019
- Fall 2018
- Summer 2018
- Spring 2018
- Fall 2017
- Summer 2017
- Spring 2017
- Fall 2016
- Summer 2016
- Spring 2016
- Fall 2015
- Spring 2015
- Fall 2014
- Spring 2014
- Fall 2013
- Spring 2013
- Fall 2012
- Spring 2012
- Fall 2011
- Spring 2011
- Fall 2010
- Spring 2010
- Fall 2009
- Spring 2009
- Fall 2008
- Spring 2008
- Fall 2007
- Spring 2007
- Fall 2006
- Spring 2006
- Fall 2005
- Spring 2005
- Fall 2004
- Spring 2004
- Fall 2003
- Spring 2003
- Fall 2002
- Spring 2002
- Spring 2001
- Fall 2000
- Spring 2000
- Fall 1999
- Spring 1999
- Fall 1998
- Spring 1998
- Fall 1997
- Summer 1997
- Spring 1997