WHERE: Hackerman B-17
WHEN: 10:45 a.m. – 12 p.m.

Recordings will be available online after each seminar.


Schedule of Speakers

Click to expand for talk title, abstract, and speaker bio.

View the recording >> 

“Learning 3D Modeling and Simulation From and For the Real World”

Abstract: Humans have extraordinary capabilities of comprehending and reasoning about our 3D visual world. With just a few casual glances, we can grasp the 3D structure and appearance of our surroundings and imagine all sorts of “what-if” scenarios in our minds. Existing 3D systems, in contrast, cannot. They lack structural understanding of the world and often break apart when moved to unconstrained, partially-observed, and noisy environments.

In this talk, I will present my efforts on developing robust computational models that can perceive, reconstruct, and simulate dynamic 3D surroundings from sparse and noisy real-world observations. I will first show that by infusing structural priors and domain knowledge into existing algorithms, we can make them more robust and significantly expand their applicable domains, opening up new avenues for 3D modeling. Then, I will present how to construct a composable, editable, and actionable digital twin from sparse, real-world data that allows robotic systems (e.g., self-driving vehicles) to simulate counterfactual scenarios for better decision-making. Finally, I will discuss how to extrapolate beyond these two efforts and build intelligent 3D systems that are accessible to everyone and applicable to other real-world settings.

Bio: Wei-Chiu Ma is a Ph.D. candidate at MIT, working with Antonio Torralba and Raquel Urtasun. His research lies at the intersection of computer vision, robotics, and machine learning, with a focus on in-the-wild 3D modeling and simulation and their applications to self-driving vehicles. Wei-Chiu is a recipient of the Siebel Scholarship, and his work has been covered by media outlets such as WIRED, DeepLearning.AI, and MIT News. Previously, Wei-Chiu was a Sr. Research Scientist at Uber ATG R&D. He received his M.S. in Robotics from CMU, where he was advised by Kris Kitani, and his B.S. in EE from National Taiwan University.

View the recording >> 

“Towards a Statistical Foundation for Human-AI Collaboration”

Abstract: Artificial intelligence is being deployed in ever more consequential settings such as healthcare and autonomous driving. Thus, we must ensure that these systems are safe and trustworthy. One near-term solution is to involve a human in the decision-making process and enable the system to ask for help in difficult or high-risk scenarios. I will present recent advances in the “learning to defer” paradigm: decision-making responsibility is allocated to either a human or model, depending on who is more likely to take the correct action. Specifically, I will present our novel formulations that better model the human collaborator’s expertise and that can support multiple human decision makers. I will also describe paths for future work, including improvements to data efficiency and applications to language models.

Bio: Eric Nalisnick is an assistant professor at the University of Amsterdam. He is interested in building safe and robust intelligent systems with a human-centered design. To accomplish this, his research develops novel machine learning techniques, which are often rooted in probabilistic modeling and computational statistics. Questions of particular interest include: How can we incorporate a human’s prior knowledge? How can we detect when the system is failing? How can we best combine human and machine decision-making? He previously was a postdoctoral researcher at the University of Cambridge and a PhD student at the University of California, Irvine. Eric has also held research positions at DeepMind, Microsoft, Twitter, and Amazon. He has served as an area chair for (as well as published in) all major machine learning conferences: NeurIPS, ICML, ICLR, AIStats, and UAI. Eric has been awarded the distinctions of ELLIS scholar and NWO Veni fellow.

View the recording >>

“Collaborative, Communal, & Continual Machine Learning”

Abstract: Pre-trained models have become a cornerstone of machine learning thanks to the fact that they can provide improved performance with less labeled data on downstream tasks. However, these models are typically created by resource-rich research groups that unilaterally decide how a given model should be built, trained, and released, after which point it is never updated. In contrast, open-source development has demonstrated that it is possible for a community of contributors to work together to iteratively build complex and widely used software. This kind of large-scale distributed collaboration is made possible through a mature set of tools including version control and package management. In this talk, I will discuss a research focus in my group that aims to make it possible to build machine learning models in the way that open-source software is developed. Specifically, I will discuss our preliminary work on merging multiple models while retaining their individual capabilities, patching models with cheaply-communicable updates, designing modular model architectures, and tracking changes through a version control system for model parameters. I will conclude with an outlook on how the field will change once truly collaborative, communal, and continual machine learning is possible.
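One of the simplest merging baselines alluded to above is elementwise parameter averaging across models that share an architecture. The sketch below is a hypothetical toy illustration, not the speaker's method; parameters are represented as flat Python lists rather than real tensors for brevity:

```python
def merge_models(state_dicts):
    """Elementwise-average the parameters of several models that share
    an architecture, the simplest form of model merging."""
    n = len(state_dicts)
    return {
        key: [sum(vals) / n for vals in zip(*(sd[key] for sd in state_dicts))]
        for key in state_dicts[0]
    }

# Two toy "models" with one parameter vector each (hypothetical names).
model_a = {"layer.weight": [1.0, 2.0]}
model_b = {"layer.weight": [3.0, 6.0]}
print(merge_models([model_a, model_b]))  # {'layer.weight': [2.0, 4.0]}
```

Research on merging, such as that described in the talk, goes well beyond this baseline to preserve each model's individual capabilities, but the averaging view is a useful mental model for what "combining" parameters means.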

Bio: Colin Raffel is an Assistant Professor at UNC Chapel Hill and a Faculty Researcher at Hugging Face. His work aims to make it easy to get computers to do new things. Consequently, he works mainly on machine learning (enabling computers to learn from examples) and natural language processing (enabling computers to communicate in natural language). He received his Ph.D. from Columbia University in 2016 and spent five years as a research scientist at Google Brain.

View the recording >>

“Looking past the Abstractions: Characterizing Information Flow in Real-World Systems”

Abstract: Abstractions have proven essential for us to manage computing systems that are constantly growing in size and complexity. However, as core design primitives are obscured, these abstractions can engender new security challenges. My research investigates these abstractions and the underlying core functionalities to identify the implicit flow violations in modern computing systems.

In this talk, I will detail my efforts in characterizing flow violations, investigating attacks leveraging them, and defending against the attacks. I will first describe how the “stateless” abstraction of serverless computing platforms masks a reality in which functions are cached in memory for long periods of time, enabling attackers to gain quasi-persistence and how such attacks can be investigated through building serverless-aware provenance collection mechanisms. Then I will further discuss how IoT automation platforms abstract the underlying information flows among rules installed within a smart home. I will present my findings on modeling and discovering inter-rule flow violations through building an information flow graph for smart homes. These efforts demonstrate how practical and widely deployable secure systems can be built through understanding the requirements of systems as well as identifying the root cause of violations of these requirements.

Bio: Pubali Datta is a PhD candidate at the University of Illinois Urbana-Champaign, where she is advised by Professor Adam Bates in the study of system security and privacy. Pubali has conducted research on a variety of security topics, including serverless cloud security, IoT security, system auditing, and provenance. Her dissertation is in the area of serverless cloud security, particularly designing information flow control, access control, and auditing mechanisms for serverless platforms. She was selected as an EECS Rising Star in 2020 and was invited to speak in the Rising Stars in Computer Science talk series in 2022. Pubali has participated in graduate internships at Samsung Research America, SRI International, and VMware. She will earn her Ph.D. in Computer Science from the University of Illinois Urbana-Champaign in the spring of 2023.

View the recording >>

“Privacy-Preserving Accountability Online” 

Talk Abstract:
Technologies that enable confidential communication and anonymous authentication are important for improving privacy for users of internet services. Unfortunately, encryption and anonymity, while good for privacy, make it hard to hold bad actors accountable for misbehavior. Internet services rely on seeing message content to detect spam and other harmful content; services must also be able to identify users to attribute and respond to abuse complaints. This tension between privacy and accountability leads to one of two suboptimal outcomes: Services require excessive trust in centralized entities to hold users accountable for misbehavior, or services leave themselves and/or their users open to abuse.

In this talk, I will highlight two deployed applications, end-to-end encrypted messaging and anonymous web browsing, where this tension arises and how gaps in accountability can and do lead to real-world attacks. I will discuss how I have addressed this tension through the design of new cryptographic protocols that preserve user privacy while also providing mechanisms for holding bad actors accountable. In particular, I will cover new protocols for anonymous blocklisting, one-time-use credentials, and transparent key infrastructure.

Speaker Bio:
Nirvan Tyagi is a Ph.D. candidate in the Department of Computer Science at Cornell University, advised by Tom Ristenpart and based at the NYC Cornell Tech campus. Over the past two years, he has held visiting student appointments at University of Washington and Stanford. His research interests span broadly across computer security, applied cryptography, and systems. Most recently, his focus has been on building systems that provide strong user privacy while also providing appropriate accountability against misbehavior. He is the recipient of an NSF Graduate Research Fellowship, a Facebook Ph.D. Fellowship, and a Digital Life Initiative Doctoral Fellowship. Nirvan received an Early Career Award at CRYPTO 2020 and his work on one-time-use credentials is being standardized by the IETF.

View the recording >>

“Cognitively Inspired Machine Social Intelligence”

Abstract: Despite our tremendous progress in AI, current AI systems still cannot adequately understand humans and flexibly interact with humans in real-world settings. The goal of my research is to build AI systems that can understand and cooperatively interact with humans in the real world. My hypothesis is that to achieve this goal, we need human-level machine social intelligence and that we can take inspiration from the studies of social cognition to engineer such social intelligence. To transfer insights from social cognition to real-world systems, I develop a research program for cognitively inspired machine social intelligence, in which I first i) build computational models to formalize the ideas and theories from social cognition, ii) develop new computational tools and AI methods to implement those models, and finally iii) apply those models to real-world systems such as assistive robots.

In this talk, I will discuss the progress I have made in my research program toward transforming those insights into real systems. I will first introduce the cognitively inspired approaches for the two key building blocks of machine social intelligence: social scene understanding and multi-agent cooperation. I will then demonstrate how these cognitively inspired approaches can enable the engineering of socially intelligent embodied AI assistants that can help people in their homes. Finally, I will also discuss future directions I plan to explore in order to reach the ultimate goal of engineering human-level machine social intelligence for real-world AI applications, such as smart cities, healthcare, and social VR.

Bio: Dr. Tianmin Shu is a postdoctoral associate in the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology, working with Josh Tenenbaum and Antonio Torralba. His research goal is to advance human-centered AI by engineering human-level machine social intelligence to build socially intelligent systems that can understand, reason about, and interact with humans in real-world settings. His work received the 2017 Cognitive Science Society Computational Modeling Prize in Perception/Action and several best paper awards at NeurIPS workshops and an IROS workshop. His research has also been covered by multiple media outlets, such as New Scientist, Science News, and VentureBeat. He received his PhD degree from the University of California, Los Angeles, in 2019.

View the recording >> 

“Distance-Estimation in Modern Graphs: Algorithms and Impossibility”

Abstract: The size and complexity of today’s graphs present challenges that necessitate the discovery of new algorithms. One central area of research in this endeavor is computing and estimating distances in graphs. In this talk I will discuss two fundamental families of distance problems in the context of modern graphs: Diameter/Radius/Eccentricities and Hopsets/Shortcut Sets.

The best known algorithm for computing the diameter (largest distance) of a graph is the naive algorithm of computing all-pairs shortest paths and returning the largest distance. Unfortunately, this can be prohibitively slow for massive graphs. Thus, it is important to understand how fast and how accurately the diameter of a graph can be approximated. I will present tight bounds for this problem via conditional lower bounds from fine-grained complexity.
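For unweighted graphs, the naive algorithm described above amounts to a breadth-first search from every vertex (all-pairs shortest paths) followed by taking the maximum distance. A minimal sketch, offered as an illustration rather than material from the talk, assuming a connected graph given as an adjacency-list dict:

```python
from collections import deque

def diameter(adj):
    """Naive diameter: BFS from every vertex, return the largest distance.
    Runs in O(V * (V + E)) time, which is prohibitive for massive graphs."""
    best = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

# A path on 4 vertices has diameter 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(diameter(path))  # 3
```

The quadratic-in-V cost of this loop is exactly what motivates the approximation algorithms and conditional lower bounds discussed in the talk.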

Secondly, for a number of settings relevant to modern graphs (e.g. parallel algorithms, streaming algorithms, dynamic algorithms), distance computation is more efficient when the input graph has low hop-diameter. Thus, a useful preprocessing step is to add a set of edges (a hopset) to the graph that reduces the hop-diameter of the graph, while preserving important distance information. I will present progress on upper and lower bounds for hopsets.

Bio: Nicole Wein is a Simons Postdoctoral Leader at DIMACS at Rutgers University. Previously, she obtained her Ph.D. from MIT, advised by Virginia Vassilevska Williams. She is a theoretical computer scientist, and her research interests include graph algorithms and lower bounds, particularly in the areas of distance-estimation algorithms, dynamic algorithms, and fine-grained complexity.

View the recording >> 

“Trustworthy AI via Formal Verification and Adversarial Testing”

Abstract: To apply deep learning to safety-critical tasks, we must formally verify the trustworthiness of the underlying models, ensuring properties like safety, security, robustness, and correctness. Unfortunately, modern deep neural networks (DNNs) are largely “black boxes,” and existing tools can hardly formally reason about them. In this talk, I will present a new framework for trustworthy AI, relying on novel methods for formal verification and adversarial testing of DNNs. In particular, I will first introduce a novel framework called “linear bound propagation methods” to enable efficient formal verification of DNNs, with an example of rigorously proving their safety and robustness. This framework exploits the structure of this NP-hard verification problem to solve it efficiently, and achieves up to three orders of magnitude speedup compared to traditional verification algorithms. My work leads to the open-source α,β-CROWN verifier, the winner of the 2021 and 2022 International Verification of Neural Networks Competitions (VNN-COMP), with applications including image classification, image segmentation, reinforcement learning, and computer systems. Besides verification, I will discuss the complementary problem of disproving the trustworthiness of AI-based systems using adversarial testing, including black-box adversarial attacks on DNNs and theoretically principled attacks on deep reinforcement learning. Finally, I will conclude my talk with an outlook on verifying AI models as building blocks for complex systems in various applications and addressing challenging engineering problems using the bound propagation-based verification framework.

Bio: Huan Zhang is a postdoctoral researcher at Carnegie Mellon University, working with Professor Zico Kolter. He obtained his PhD in Computer Science at UCLA in 2020, advised by Professor Cho-Jui Hsieh. Huan’s research aims to build trustworthy AI systems that can be safely and reliably used in mission-critical tasks, with a focus on using formal verification techniques to give provable performance guarantees on machine learning systems. He is the leader of a multi-institutional team developing the α,β-CROWN neural network verifier, which won VNN-COMP 2021 and VNN-COMP 2022. He has received several awards, including an IBM PhD fellowship, the 2021 Adversarial Machine Learning Rising Star Award, and a Schmidt Futures AI2050 Early Career Fellowship.

“Robust IoT Communication and Sensing with Extreme Efficiency”

Abstract: The Internet of Things (IoT) has the potential to revolutionize how we live and work by bridging the physical and digital world, but the current battery-based architecture limits its scalability and poses environmental challenges. To overcome these limitations, the next trillion IoT devices should be battery-free, maintenance-free, and low-cost, which requires a fundamental rethink of wireless networking, a core component of IoT. In this talk, I will share my research on developing new system and radio architectures for battery-free IoT communication and sensing that significantly improve energy efficiency, reliability, and cost.

My talk will begin with a focus on how to enable extremely low-power communication for consumer IoT. I will discuss new radio architectures that reduce power consumption by orders of magnitude, as well as an asymmetric communication system architecture that can reuse pervasively deployed Wi-Fi devices as infrastructure to connect IoT devices to the internet. This approach brings battery-free IoT to consumers’ homes without requiring new wireless infrastructures.

Moving on to industrial IoT, I will explain how to address the critical issue of high reliability through the development of a new magnetic RFID system and robust RFID localization design. I will explain how the long-range magnetic RFID system can achieve two orders of magnitude lower object identification error, solving the reliability problem that has hindered the widespread adoption of battery-free RFID systems in industry.

Finally, I will briefly discuss my other contributions, including the creation of the first massive MIMO millimeter-wave software-defined radio and the development of a wireless brain-machine interface. These programmable experimental platforms open up exciting opportunities for next-generation IoT applications.

Building upon past innovations, I will discuss the exciting opportunities presented by next-generation technologies such as battery-free IoT, the Internet of Bodies, and experimental infrastructures for wireless systems. These innovations offer a promising avenue towards a sustainable and scalable IoT future.

Bio: Renjie Zhao is a PhD candidate in the ECE department at the University of California San Diego, currently in his fifth year of study under the guidance of Professor Xinyu Zhang. Prior to pursuing his PhD, Zhao received his BE degree from Shanghai Jiao Tong University in 2018. Zhao’s research interests are centered around wireless systems and networking, with a particular focus on next-generation cellular networks, IoT, and mobile and ubiquitous computing. Zhao’s research has been published in several top conferences, including ACM SIGCOMM, MobiCom, and USENIX NSDI. In recognition of his work on massive MIMO millimeter-wave software radio, Zhao was awarded the Best Paper Award at ACM MobiCom 2020. His work on this project has also been highlighted as a top pick by ACM GetMobile, ACM SIGMOBILE’s magazine.

View the recording >> 

“Rigorously Tested & Reliable Machine Learning for Health”

ABSTRACT: How do we make machine learning as rigorously tested and reliable as any medication or diagnostic test?

Machine learning (ML) has the potential to improve decision-making in healthcare, from predicting treatment effectiveness to diagnosing disease. However, standard retrospective evaluations can give a misleading sense of how well models will perform in practice. Evaluation of ML-derived treatment policies can be biased when using observational data, and predictive models that perform well in one hospital may perform poorly in another.

In this talk, I will introduce new tools to proactively assess and improve the reliability of machine learning in healthcare. A central theme will be the application of external knowledge, including review of patient records, incorporation of limited clinical trial data, and interpretable stress tests. Throughout, I will discuss how evaluation can directly inform model design.

BIO: Michael Oberst is a final-year PhD candidate in Computer Science at MIT. His research focuses on making sure that machine learning in healthcare is safe and effective, using tools from causal inference and statistics. His work has been published at a range of machine learning venues (NeurIPS, ICML, AISTATS, KDD), including work with clinical collaborators from Mass General Brigham, NYU Langone, and Beth Israel Deaconess Medical Center. He has also worked on clinical applications of machine learning, including work on learning effective antibiotic treatment policies (published in Science Translational Medicine). He earned his undergraduate degree in Statistics at Harvard.

“Raising the Stakes: Reliably Deploying Machine Learning in Critical Settings”

ABSTRACT: Machine learning systems are increasingly used in high-stakes settings, such as healthcare, where their impact on people’s lives can be profound. However, current methods often suffer from biases, fragility, and impracticality, which can limit their effectiveness.

In this talk, I will present my research efforts on addressing these challenges and improving the robustness and reliability of machine learning systems in critical settings. First, I will show how modeling human biases in data acquisition can enhance the sample efficiency of machine learning systems in healthcare, resulting in robust models and interpretable data collection pipelines. Second, I will present a framework for continually repairing deployed models that preserves their performance on existing data while adapting to new situations. Finally, I will illustrate how uncertainty quantification can improve the reliability of complex machine learning systems by providing performance estimates with statistical guarantees.

BIO: Swami Sankaranarayanan is a Postdoctoral Associate at MIT’s Computer Science and Artificial Intelligence Laboratory, where he works with Phillip Isola and Marzyeh Ghassemi. Swami focuses on core challenges in deploying computer vision and machine learning systems in critical settings. His work has appeared at top venues such as CVPR, ICCV, ICLR, NeurIPS, AAAI, and PNAS. He is the lead organizer of the upcoming 2023 ICML workshop on Deployment Challenges for Generative AI. Swami received his PhD in Electrical Engineering from the University of Maryland, College Park in 2018, where he was advised by Dr. Rama Chellappa. Previously, Swami was a Research Scientist at Butterfly Network, where he commercialized an FDA-approved, AI-based clinical diagnostic application. He was awarded a department-level dissertation award for his PhD, and his work has been covered by media outlets such as MIT News and Scientific American.

View the recording >> 

“Intelligent Health Monitoring in the Home”

ABSTRACT: Delivering health care to patients in their homes is shaping the future of healthcare, as it offers better access to health care for people who live far from hospitals. It also facilitates the early detection of diseases and the prevention of complications that might otherwise require hospitalization, while simultaneously reducing costs for patients and healthcare systems. Nevertheless, the utilization of machine learning and health sensors for clinical purposes within the home environment requires addressing certain challenges, such as simplifying compliance procedures, ensuring that the digital representations of patients are comparable to the established medical gold standard, and preserving data privacy.

In this talk, I will discuss how to develop new algorithms and models with health sensors to capture rich, continuous representations for in-home healthcare applications that address all these challenges. Firstly, I will introduce an AI-powered digital biomarker for Parkinson’s disease that can detect the disease, estimate its severity, and longitudinally track its progression using nocturnal breathing data, objectively and sensitively. Secondly, I will introduce a technology that enables contactless monitoring of blood oxygen saturation in patients’ homes using wireless signals. Finally, I will present a model designed to capture the daily-life activities of elderly individuals who may be experiencing conditions such as Alzheimer’s or dementia. It enables clinicians and caregivers to remotely monitor health-related conditions, allowing them to provide care when needed.

BIO: Yuan Yuan is a Postdoctoral Associate at the Computer Science & Artificial Intelligence Lab (CSAIL) of the Massachusetts Institute of Technology (MIT), and is also affiliated with Brigham and Women’s Hospital (BWH) at Harvard Medical School. She obtained her PhD degree from the Hong Kong University of Science and Technology and has also been a visiting research scholar at the Robotics Institute of Carnegie Mellon University. Her research interests lie in the fields of machine learning, computer vision, and AI for healthcare, with a specific focus on contactless health monitoring and developing digital biomarkers for neurological diseases using machine learning. Her research has been widely covered in the media, including Forbes, The Washington Post, BBC, TechCrunch, and Engadget. Her work on an AI-powered biomarker for Parkinson’s disease has been recognized as one of the ten breakthroughs and critical developments in Nature Medicine’s Notable Advances 2022. Algorithms developed in her work are now being deployed in many hospitals and pharmaceutical companies, including BlueRock Therapeutics, in their clinical trials.

View the recording >> 

“Revealing the Unknown: Securing Communications Against Adversary-Controlled Communication Infrastructure”

ABSTRACT: Commercial 4G and 5G wireless networks promise to transform Department of Defense (DOD) communications. However, using these commercial wireless networks entails unprecedented reliance on untrusted and unknown communications infrastructure, including wireless base stations that connect directly to devices and the internet infrastructure that underlies wireless communications. The core problem is that unknown infrastructure potentially exposes communications to adversaries who can recognize, disrupt, or extract intelligence even from encrypted or disguised communications.

This talk will discuss two new directions for securing wireless communications through identifying and avoiding adversary-controlled infrastructure. First, I will discuss using deep learning to recognize 5G base station vendors in seconds, combatting the potential for Chinese intelligence to control Huawei and ZTE base stations anywhere in the world. Second, I will talk about path analytics I designed and implemented to identify communication paths that do not traverse adversary-controlled networks or geographic regions, helping to keep DOD communications away from sophisticated network intelligence systems that our adversaries possess. I will end the talk by discussing my goal of combining these capabilities in the next two years to provide end-to-end adversary avoidance routing.

BIO: Alex Marder is an assistant research scientist at UC San Diego’s Center for Applied Internet Data Analysis (CAIDA). Prior to that, he was a postdoctoral fellow at UC San Diego/CAIDA, mentored by Kimberly Claffy and Alex C. Snoeren. He obtained his PhD from the University of Pennsylvania, where he was advised by Jonathan M. Smith. His research focuses on using empirical measurements to evaluate the security, resilience, and performance of the internet.

View the recording >>

“Perceiving Humans in 4D”

ABSTRACT: From the moment we open our eyes, we are surrounded by people. By observing the people around us, we learn how to interact with them and the world. To create intelligent agents with similar capabilities, it is crucial to endow them with a perceptual system that can interpret and understand human behavior from visual observations. These observations are streams of two-dimensional images; however, the actual underlying state of humans is 4D—they have 3D bodies that move over time. In this talk, I will present my work on perceiving humans in 4D from video. This includes estimating their articulated 3D body pose, tracking them over time and recovering a 4D reconstruction that is consistent with their spatial environment. I will highlight the limitations of systems that only operate in the space of image pixels and showcase the benefits of reasoning in 4D.

BIO: Georgios Pavlakos is a postdoctoral scholar at UC Berkeley, advised by Angjoo Kanazawa and Jitendra Malik. His research interests include computer vision, machine learning, and robotics. He completed his PhD in Computer Science at the University of Pennsylvania with his advisor, Kostas Daniilidis. He has spent time at Max Planck Institute with Michael Black and at Facebook Reality Labs. His PhD dissertation received the Morris and Dorothy Rubinoff Award for the Best Computer Science Dissertation at UPenn.

View the recording >>

“Learning to Recreate Reality in 3D”

ABSTRACT: Despite their tremendous impact, 2D media (e.g., photos and videos) remain “static” snapshots of the world that don’t allow us to change our viewpoints or interact with the captured scene. In contrast, modeling the world in 3D can enable interactive experiences such as walking around the scene and manipulating objects. However, traditional pipelines of designing 3D media content are time-consuming and require expert knowledge. In my research, I aim to democratize 3D media by building intelligent systems that learn to synthesize realistic 3D content. Toward this goal, I seek answers to fundamental 3D learning problems, including 1) how to represent 3D data, 2) how to train 3D generative models from 2D inputs, and 3) how to generate large, compositional scenes. My work has far-reaching implications, as 3D reconstruction and generation technologies have broad applications across fields such as robotics, autonomous driving, and medical imaging.

BIO: Jeong Joon (JJ) Park is a postdoctoral researcher at Stanford University, working with Professors Leonidas Guibas and Gordon Wetzstein. His main research interests lie in the intersection of computer vision, graphics, and machine learning, where he studies realistic reconstruction and synthesis of 3D scenes using neural and physical representations. He did his PhD in computer science at the University of Washington, Seattle, under the supervision of Professor Steve Seitz, during which he was supported by the Apple AI/ML Fellowship. He is the lead author of DeepSDF, which introduced neural implicit representation and made a profound impact on 3D computer vision. He is fortunate to have worked with great collaborators from his academic institutions and internships with Adobe, Meta, and Apple. Prior to his PhD, he received his Bachelor of Science from the California Institute of Technology. More information can be found on his webpage.

View the recording >>

“Scalable Robot Intelligence: Self-Supervised Learning Through Generation”

ABSTRACT: Rapid advances in deep learning have resulted in promising techniques for robots to boost their capabilities to perceive, reason, and act by leveraging large models and massive datasets. However, the scalability of existing robot learning methods is severely limited by the manual labor and domain knowledge that humans can provide. To acquire general-purpose skills for solving a broad range of tasks, intelligent robots need scalable methods to collect and learn from rich data without extensive human supervision.

In this talk, I will present my research on scaling up robot learning through the autonomous generation of environments, goals, and tasks. I will start by describing how to leverage procedural content generation for learning robust skills that can handle the variety and uncertainty of the real world. Then, I will present algorithms that train robots to effectively reuse skills learned from prior experiences for novel sequential tasks by learning to generate reachable subgoals. Finally, I will demonstrate how to enable robots to discover a repertoire of novel skills by adaptively generating tasks during training. The acquired skills can be used for solving a variety of complex tasks such as tool use and sequential manipulation based on raw sensory inputs.

BIO: Kuan Fang is a postdoctoral researcher in the Department of Electrical Engineering and Computer Sciences at UC Berkeley, working with Sergey Levine. He received his PhD in electrical engineering from Stanford University, advised by Fei-Fei Li and Silvio Savarese. His research interests lie at the intersection of robotics, computer vision, and machine learning, with a focus on developing data-driven methods to enable intelligent robots to operate in unstructured environments. He is a recipient of the Stanford Graduate Fellowship and the Computing Innovation Fellowship.

View the recording >>

Institute of Assured Autonomy & Computer Science Seminar Series

May 16, 2023

Abstract: Scale appears to be the winning recipe in today’s leaderboards. And yet, extreme-scale neural models are (un)surprisingly brittle and make errors that are often nonsensical and even counterintuitive. In this talk, I will argue for the importance of knowledge, especially commonsense knowledge, as well as inference-time reasoning algorithms, and demonstrate how smaller models developed in academia can still have an edge over larger industry-scale models, if powered with knowledge and/or reasoning algorithms.

Speaker Biography: Yejin Choi is the Brett Helsel Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research director at AI2, overseeing the project Mosaic. Her research investigates a wide variety of problems across NLP and AI, including commonsense knowledge and reasoning, neural language (de)generation, language grounding with vision and experience, and AI for social good. She is a MacArthur Fellow and a co-recipient of the NAACL Best Paper Award in 2022, the ICML Outstanding Paper Award in 2022, the ACL Test of Time award in 2021, the CVPR Longuet-Higgins Prize (test of time award) in 2021, the NeurIPS Outstanding Paper Award in 2021, the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, IEEE AI’s 10 to Watch in 2016, and the ICCV Marr Prize (best paper award) in 2013. She received her PhD in computer science at Cornell University and her BS in computer science and engineering at Seoul National University in Korea.
