Details:

WHERE: Hackerman B-17
WHEN: 10:45 am – 12 pm

*Recordings will be available online after each seminar.

 

Schedule of Speakers

Each listing below includes the talk title, abstract, and speaker bio.

View the recording >> 

“Learning 3D Modeling and Simulation From and For the Real World”

Abstract: Humans have extraordinary capabilities of comprehending and reasoning about our 3D visual world. With just a few casual glances, we can grasp the 3D structure and appearance of our surroundings and imagine all sorts of “what-if” scenarios in our minds. Existing 3D systems, in contrast, cannot. They lack structural understanding of the world and often break apart when moved to unconstrained, partially-observed, and noisy environments.

In this talk, I will present my efforts in developing robust computational models that can perceive, reconstruct, and simulate dynamic 3D surroundings from sparse and noisy real-world observations. I will first show that by infusing structural priors and domain knowledge into existing algorithms, we can make them more robust and significantly expand their applicable domains, opening up new avenues for 3D modeling. Then, I will present how to construct a composable, editable, and actionable digital twin from sparse, real-world data that allows robotic systems (e.g., self-driving vehicles) to simulate counterfactual scenarios for better decision-making. Finally, I will discuss how to extrapolate beyond these two efforts and build intelligent 3D systems that are accessible to everyone and applicable to other real-world settings.

Bio: Wei-Chiu Ma is a Ph.D. candidate at MIT, working with Antonio Torralba and Raquel Urtasun. His research lies at the intersection of computer vision, robotics, and machine learning, with a focus on in-the-wild 3D modeling and simulation and their applications to self-driving vehicles. Wei-Chiu is a recipient of the Siebel Scholarship, and his work has been covered by media outlets such as WIRED, DeepLearning.AI, and MIT News. Previously, Wei-Chiu was a Sr. Research Scientist at Uber ATG R&D. He received his M.S. in Robotics from CMU, where he was advised by Kris Kitani, and his B.S. in EE from National Taiwan University.

View the recording >> 

“Towards a Statistical Foundation for Human-AI Collaboration”

Abstract: Artificial intelligence is being deployed in ever more consequential settings such as healthcare and autonomous driving. Thus, we must ensure that these systems are safe and trustworthy. One near-term solution is to involve a human in the decision-making process and enable the system to ask for help in difficult or high-risk scenarios. I will present recent advances in the “learning to defer” paradigm: decision-making responsibility is allocated to either a human or a model, depending on who is more likely to take the correct action. Specifically, I will present our novel formulations that better model the human collaborator’s expertise and that can support multiple human decision makers. I will also describe paths for future work, including improvements to data efficiency and applications to language models.
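To make the allocation idea above concrete, here is a minimal, hypothetical sketch (not from the talk, and far simpler than the learned formulations it describes): a query is handled by the model only when its confidence exceeds an assumed estimate of human accuracy, and is otherwise deferred to the human.

# Illustrative sketch only: a crude confidence-based deferral rule, a rough
# stand-in for the learned "learning to defer" formulations described above.
# All names and numbers here are hypothetical.
def defer_decision(model_probs, human_accuracy=0.9):
    """Return ("model", label) or ("human", None) for one input.

    model_probs: dict mapping each label to the model's predicted probability.
    human_accuracy: assumed probability that the human labels correctly.
    """
    label, confidence = max(model_probs.items(), key=lambda kv: kv[1])
    if confidence >= human_accuracy:
        return ("model", label)   # the model is more likely to be correct
    return ("human", None)        # defer: the human is more likely to be correct

# Example: a confident prediction is kept, an uncertain one is deferred.
print(defer_decision({"cat": 0.97, "dog": 0.03}))  # ("model", "cat")
print(defer_decision({"cat": 0.55, "dog": 0.45}))  # ("human", None)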

Bio: Eric Nalisnick is an assistant professor at the University of Amsterdam. He is interested in building safe and robust intelligent systems with a human-centered design. To accomplish this, his research develops novel machine learning techniques, which are often rooted in probabilistic modeling and computational statistics. Questions of particular interest are: how can we incorporate a human’s prior knowledge? how can we detect when the system is failing? and how can we best combine human and machine decision making? He previously was a postdoctoral researcher at the University of Cambridge and a PhD student at the University of California, Irvine. Eric has also held research positions at DeepMind, Microsoft, Twitter, and Amazon. He has served as an area chair for (as well as published in) all major machine learning conferences: NeurIPS, ICML, ICLR, AISTATS, and UAI. Eric has been awarded the distinctions of ELLIS scholar and NWO Veni fellow.

View the recording >>

“Collaborative, Communal, & Continual Machine Learning”

Abstract: Pre-trained models have become a cornerstone of machine learning thanks to the fact that they can provide improved performance with less labeled data on downstream tasks. However, these models are typically created by resource-rich research groups that unilaterally decide how a given model should be built, trained, and released, after which point it is never updated. In contrast, open-source development has demonstrated that it is possible for a community of contributors to work together to iteratively build complex and widely used software. This kind of large-scale distributed collaboration is made possible through a mature set of tools including version control and package management. In this talk, I will discuss a research focus in my group that aims to make it possible to build machine learning models in the way that open-source software is developed. Specifically, I will discuss our preliminary work on merging multiple models while retaining their individual capabilities, patching models with cheaply-communicable updates, designing modular model architectures, and tracking changes through a version control system for model parameters. I will conclude with an outlook on how the field will change once truly collaborative, communal, and continual machine learning is possible.

Bio: Colin Raffel is an Assistant Professor at UNC Chapel Hill and a Faculty Researcher at Hugging Face. His work aims to make it easy to get computers to do new things. Consequently, he works mainly on machine learning (enabling computers to learn from examples) and natural language processing (enabling computers to communicate in natural language). He received his Ph.D. from Columbia University in 2016 and spent five years as a research scientist at Google Brain.

View the recording >>

“Looking past the Abstractions: Characterizing Information Flow in Real-World Systems”

Abstract: Abstractions have proven essential for us to manage computing systems that are constantly growing in size and complexity. However, as core design primitives are obscured, these abstractions can engender new security challenges. My research investigates these abstractions and the underlying core functionalities to identify the implicit flow violations in modern computing systems.

In this talk, I will detail my efforts in characterizing flow violations, investigating attacks that leverage them, and defending against those attacks. I will first describe how the “stateless” abstraction of serverless computing platforms masks a reality in which functions are cached in memory for long periods of time, enabling attackers to gain quasi-persistence, and how such attacks can be investigated by building serverless-aware provenance collection mechanisms. Then, I will discuss how IoT automation platforms abstract away the underlying information flows among rules installed within a smart home, and I will present my findings on modeling and discovering inter-rule flow violations by building an information flow graph for smart homes. These efforts demonstrate how practical and widely deployable secure systems can be built by understanding the requirements of systems and identifying the root causes of violations of those requirements.

Bio: Pubali Datta is a PhD candidate at the University of Illinois Urbana-Champaign, where she is advised by Professor Adam Bates in the study of system security and privacy. Pubali has conducted research on a variety of security topics, including serverless cloud security, IoT security, system auditing, and provenance. Her dissertation is in the area of serverless cloud security, particularly in designing information flow control, access control, and auditing mechanisms for serverless platforms. She was selected as an EECS Rising Star in 2020 and was invited to speak in the Rising Stars in Computer Science talk series in 2022. Pubali has completed graduate internships at Samsung Research America, SRI International, and VMware. She will earn her Ph.D. in Computer Science from the University of Illinois Urbana-Champaign in the Spring of 2023.

View the recording >>

“Privacy-Preserving Accountability Online” 

Abstract: Technologies that enable confidential communication and anonymous authentication are important for improving privacy for users of internet services. Unfortunately, encryption and anonymity, while good for privacy, make it hard to hold bad actors accountable for misbehavior. Internet services rely on seeing message content to detect spam and other harmful content; services must also be able to identify users to attribute and respond to abuse complaints. This tension between privacy and accountability leads to one of two suboptimal outcomes: Services require excessive trust in centralized entities to hold users accountable for misbehavior, or services leave themselves and/or their users open to abuse.

In this talk, I will highlight two deployed applications, end-to-end encrypted messaging and anonymous web browsing, where this tension arises and how gaps in accountability can and do lead to real-world attacks. I will discuss how I have addressed this tension through the design of new cryptographic protocols that preserve user privacy while also providing mechanisms for holding bad actors accountable. In particular, I will cover new protocols for anonymous blocklisting, one-time-use credentials, and transparent key infrastructure.

Bio: Nirvan Tyagi is a Ph.D. candidate in the Department of Computer Science at Cornell University, advised by Tom Ristenpart and based at the NYC Cornell Tech campus. Over the past two years, he has held visiting student appointments at the University of Washington and Stanford. His research interests span broadly across computer security, applied cryptography, and systems. Most recently, his focus has been on building systems that provide strong user privacy while also providing appropriate accountability against misbehavior. He is the recipient of an NSF Graduate Research Fellowship, a Facebook Ph.D. Fellowship, and a Digital Life Initiative Doctoral Fellowship. Nirvan received an Early Career Award at CRYPTO 2020, and his work on one-time-use credentials is being standardized by the IETF.

View the recording >>

“Cognitively Inspired Machine Social Intelligence”

Abstract: Despite our tremendous progress in AI, current AI systems still cannot adequately understand humans and flexibly interact with humans in real-world settings. The goal of my research is to build AI systems that can understand and cooperatively interact with humans in the real world. My hypothesis is that to achieve this goal, we need human-level machine social intelligence and that we can take inspiration from the studies of social cognition to engineer such social intelligence. To transfer insights from social cognition to real-world systems, I develop a research program for cognitively inspired machine social intelligence, in which I first i) build computational models to formalize the ideas and theories from social cognition, ii) develop new computational tools and AI methods to implement those models, and finally iii) apply those models to real-world systems such as assistive robots.

In this talk, I will discuss the progress I have made in my research program toward transforming those insights into real systems. I will first introduce the cognitively inspired approaches for the two key building blocks of machine social intelligence: social scene understanding and multi-agent cooperation. I will then demonstrate how these cognitively inspired approaches can enable the engineering of socially intelligent embodied AI assistants that can help people in their homes. Finally, I will also discuss future directions I plan to explore in order to reach the ultimate goal of engineering human-level machine social intelligence for real-world AI applications, such as smart cities, healthcare, and social VR.

Bio: Dr. Tianmin Shu is a postdoctoral associate in the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology, working with Josh Tenenbaum and Antonio Torralba. His research goal is to advance human-centered AI by engineering human-level machine social intelligence to build socially intelligent systems that can understand, reason about, and interact with humans in real-world settings. His work received the 2017 Cognitive Science Society Computational Modeling Prize in Perception/Action, as well as several best paper awards at NeurIPS workshops and an IROS workshop. His research has also been covered by multiple media outlets, such as New Scientist, Science News, and VentureBeat. He received his PhD from the University of California, Los Angeles, in 2019.

View the recording >> 

“Distance-Estimation in Modern Graphs: Algorithms and Impossibility”

Abstract: The size and complexity of today’s graphs present challenges that necessitate the discovery of new algorithms. One central area of research in this endeavor is computing and estimating distances in graphs. In this talk, I will discuss two fundamental families of distance problems in the context of modern graphs: Diameter/Radius/Eccentricities and Hopsets/Shortcut Sets.

The best known algorithm for computing the diameter (largest distance) of a graph is the naive algorithm of computing all-pairs shortest paths and returning the largest distance. Unfortunately, this can be prohibitively slow for massive graphs. Thus, it is important to understand how fast and how accurately the diameter of a graph can be approximated. I will present tight bounds for this problem via conditional lower bounds from fine-grained complexity.
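As a concrete illustration of the naive baseline described above (a hypothetical sketch, not code from the talk), the following runs a breadth-first search from every vertex of a small unweighted graph and returns the largest distance found; the adjacency-list graph and helper names are invented for the example.

# Illustrative sketch only: the naive diameter computation for an unweighted,
# connected graph, via BFS from every vertex (roughly O(n * (n + m)) time).
from collections import deque

def bfs_eccentricity(adj, source):
    """Largest BFS distance from `source` to any other vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def naive_diameter(adj):
    """Diameter = the largest eccentricity over all vertices."""
    return max(bfs_eccentricity(adj, u) for u in adj)

# Example: a path on four vertices (0-1-2-3) has diameter 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(naive_diameter(adj))  # -> 3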

Secondly, for a number of settings relevant to modern graphs (e.g., parallel algorithms, streaming algorithms, dynamic algorithms), distance computation is more efficient when the input graph has low hop-diameter. Thus, a useful preprocessing step is to add a set of edges (a hopset) to the graph that reduces the hop-diameter of the graph while preserving important distance information. I will present progress on upper and lower bounds for hopsets.

Bio: Nicole Wein is a Simons Postdoctoral Leader at DIMACS at Rutgers University. Previously, she obtained her Ph.D. from MIT, advised by Virginia Vassilevska Williams. She is a theoretical computer scientist whose research interests include graph algorithms and lower bounds, particularly in the areas of distance-estimation algorithms, dynamic algorithms, and fine-grained complexity.

Title and Abstract Coming soon

Title and Abstract Coming soon

Title and Abstract Coming soon

Title and Abstract Coming soon

Archive

From the calendar years 1997–2022.
