Details:
WHERE: B-17 Hackerman Hall, unless otherwise noted
WHEN: 10:30 a.m. refreshments available, seminar runs from 10:45 a.m. to 12 p.m., unless otherwise noted
Recordings will be available online after each seminar.
Schedule of Speakers
Computer Science Seminar Series
“Generative AI for (Molecular) Sciences”
Abstract: Massive efforts are under way to develop and adapt generative AI to solve any and all inferential and design tasks across engineering and science disciplines. Framing or reframing problems in terms of distributional modeling can bring a number of benefits, but also comes with substantial technical and statistical challenges. Tommi S. Jaakkola’s work has focused on advancing machine learning methods for controlled generation of complex objects, ranging from molecular interactions (e.g., docking) and 3D structures to new materials tailored to exhibit desirable characteristics such as carbon capture. In this talk, Jaakkola will cover a few research vignettes along with their specific challenges, focusing on diffusion and flow models that surpass traditional or alternative approaches to docking, protein design, or conformational ensembles. Time permitting, he will highlight general challenges and opportunities in this area.
Speaker Biography: Tommi S. Jaakkola is the Thomas Siebel Professor of Electrical Engineering and Computer Science in the Massachusetts Institute of Technology’s Department of Electrical Engineering and Computer Science and the MIT Institute for Data, Systems, and Society; he is also an investigator at the MIT Computer Science and Artificial Intelligence Laboratory. He is a fellow of the Association for the Advancement of Artificial Intelligence and has received numerous awards for his publications. His research covers how machines can learn, generate, or control, and how they can do so at scale in an efficient, principled, and interpretable manner, from foundational theory to modern design challenges. Over the past several years, Jaakkola’s applied work has focused on molecular modeling and design.
Computer Science Seminar Series
“Recycling Fine-Tuned Models to Pretrain (on Loss Spaces, Fusing, and Evolving Pretraining)”
Abstract: Prohibitive pretraining costs make pretraining research a rare sight; this is not the case, however, for analyzing, using, and fine-tuning those models. This talk focuses on one option for improving models in a scientific way, in small, measurable steps. Specifically, it introduces the concept of merging multiple fine-tuned or parameter-efficient fine-tuned models into one and discusses work on what we understand about merging, how it works, more recent methods, and how iteratively merging models may enable collaborative continual pretraining.
Speaker Biography: Leshem Choshen is a postdoctoral researcher at the Massachusetts Institute of Technology and IBM who aims to study model development openly and collaboratively, make pretraining research feasible, and evaluate models efficiently. To do so, they co-created model merging, TIES merging, and the BabyLM Challenge. They were chosen for postdoctoral Rothschild and Fulbright fellowships and received a Best PhD Thesis Award from the Israeli Association for Artificial Intelligence, as well as a Blavatnik Prize for Computer Science. With broad natural language processing and machine learning interests, Choshen has also worked on reinforcement learning, understanding how neural networks learn, and Project Debater, the first machine system capable of holding a formal debate (as of 2019), which was featured on the cover of Nature.
Please note this seminar will take place in 107 Malone Hall at 11:00 a.m. Refreshments will be served at 10:45 a.m.
Computer Science Seminar Series
“Learning and Control for Safety, Efficiency, and Resiliency of Embodied AI”
Abstract: The broad agenda of Fei Miao’s work is to develop the foundations for the science of embodied AI: assuring the safety, efficiency, robustness, and security of AI systems by integrating learning, optimization, and control. Miao’s research interests span several technical fields, including multi-agent reinforcement learning, robust optimization, uncertainty quantification, control theory, and game theory. Application areas include connected and autonomous vehicles (CAVs), intelligent transportation systems and transportation decarbonization, smart cities, and power networks. Miao’s research experience and ongoing projects include robust reinforcement learning and control, uncertainty quantification for collaborative perception, game-theoretic analysis of the benefits of information sharing for CAVs, data-driven robust optimization for efficient mobile cyber-physical systems (CPS), conflict resolution in smart cities, and resilient control of CPS under attacks. In addition to system modeling, theoretical analysis, and algorithmic design, Miao’s work involves experimental validation using real urban transportation data, simulators, and small-scale autonomous vehicles.
Speaker Biography: Fei Miao is a Pratt & Whitney Associate Professor in the School of Computing and courtesy faculty in the Department of Electrical and Computer Engineering at the University of Connecticut. She is also affiliated with the Pratt & Whitney Institute for Advanced Systems Engineering. Before joining UConn, Miao was a postdoctoral researcher in the General Robotics, Automation, Sensing, & Perception Lab and the Penn Research In Embedded Computing and Integrated Systems Engineering Center with George J. Pappas and Daniel D. Lee in the Department of Electrical and Systems Engineering at the University of Pennsylvania. Miao earned her PhD in electrical and systems engineering in 2016, receiving the Charles Hallac and Sarah Keil Wolf Award for the best doctoral dissertation, along with a dual master’s degree in statistics from the Wharton School at the University of Pennsylvania. She received her bachelor of science degree from Shanghai Jiao Tong University in 2010, with a major in automation and a minor in finance.
Past Speakers
Computer Science Seminar Series
October 10, 2024
Abstract: Large-scale pretraining has become the standard solution for automated reasoning over text and/or visual perception. But how far does this approach get us toward systems that generalize to language use in realistic multi-agent situated interactions? First, Alane Suhr will talk about existing work on evaluating the spatial and compositional reasoning capabilities of current multimodal language models. Then Suhr will discuss how these benchmarks miss a key aspect of real-world situated interactions: joint embodiment. Suhr will discuss how joint embodiment in a shared world supports perspective-taking, an often overlooked aspect of situated reasoning, and introduce a new environment and benchmark for studying the influence of perspective-taking on language use in interaction.
Speaker Biography: Alane Suhr is an assistant professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Also affiliated with the Berkeley Artificial Intelligence Research Lab, Suhr researches language use and learning in situated, collaborative interactions. This includes developing datasets and environments that support such interactions; designing and evaluating models that participate in collaborative interactions with human users by perceiving, acting, and using language; and developing learning algorithms for training such models from signals acquired in these interactions. Suhr received a BS in computer science and engineering from the Ohio State University in 2016 and a PhD in computer science from Cornell University in 2022.
Institute for Assured Autonomy & Computer Science Seminar Series
September 17, 2024
Abstract: Despite our tremendous progress in AI, current AI systems—including large language models—still cannot adequately understand humans and flexibly interact with humans in real-world settings. One of the key missing ingredients is Theory of Mind, which is the ability to understand humans’ mental states from their behaviors. In this talk, Tianmin Shu will discuss how we can engineer human-level machine Theory of Mind. He will first show how we can leverage insights from cognitive science studies to develop model-based approaches for physically grounded, multimodal Theory of Mind. He will then discuss how we can improve multimodal embodied AI assistance based on Theory of Mind reasoning. Finally, he will briefly talk about exciting future work toward building open-ended Theory of Mind models for real-world AI assistants.
Speaker Biography: Tianmin Shu is an assistant professor of computer science at the Johns Hopkins University, with a secondary appointment in the university’s Department of Cognitive Science. His research goal is to advance human-centered AI by engineering human-level machine social intelligence, building socially intelligent systems that can understand, reason about, and interact with humans in real-world settings. Shu’s work has received multiple awards, including an Outstanding Paper Award at the 2024 Annual Meeting of the Association for Computational Linguistics and the 2017 Cognitive Science Society Computational Modeling Prize in Perception/Action. His research has also been covered by multiple media outlets, such as New Scientist, Science News, and VentureBeat. Shu received his PhD from the University of California, Los Angeles, in 2019. Before joining Johns Hopkins, he was a research scientist at the Massachusetts Institute of Technology.
Archive
From the calendar years 1997–2024.
- Spring 2024
- Fall 2023
- Spring 2023
- Fall 2022
- Summer 2022
- Spring 2022
- Fall 2021
- Summer 2021
- Spring 2021
- Fall 2020
- Spring 2020
- Fall 2019
- Summer 2019
- Spring 2019
- Fall 2018
- Summer 2018
- Spring 2018
- Fall 2017
- Summer 2017
- Spring 2017
- Fall 2016
- Summer 2016
- Spring 2016
- Fall 2015
- Spring 2015
- Fall 2014
- Spring 2014
- Fall 2013
- Spring 2013
- Fall 2012
- Spring 2012
- Fall 2011
- Spring 2011
- Fall 2010
- Spring 2010
- Fall 2009
- Spring 2009
- Fall 2008
- Spring 2008
- Fall 2007
- Spring 2007
- Fall 2006
- Spring 2006
- Fall 2005
- Spring 2005
- Fall 2004
- Spring 2004
- Fall 2003
- Spring 2003
- Fall 2002
- Spring 2002
- Spring 2001
- Fall 2000
- Spring 2000
- Fall 1999
- Spring 1999
- Fall 1998
- Spring 1998
- Fall 1997
- Summer 1997
- Spring 1997