Spring 2022

View the recording >>

Institute for Assured Autonomy & Computer Science Seminar Series

January 13, 2022

Abstract: Machine learning algorithms are everywhere, ranging from simple data analysis and pattern recognition tools used across the sciences to complex systems that achieve superhuman performance on various tasks. Ensuring that they are safe—that they do not, for example, cause harm to humans or act in a racist or sexist way—is therefore not a hypothetical problem to be dealt with in the future, but a pressing one that we can and should address now. In this talk, Phil Thomas will discuss some of his recent efforts to develop safe machine learning algorithms—and particularly safe reinforcement learning algorithms, which can be responsibly applied to high-risk applications. He will focus on the article “Preventing Undesirable Behavior of Intelligent Machines” recently published in Science, describing its contributions, subsequent extensions, and important areas of future work.

Speaker Biography: Phil Thomas is an assistant professor at the University of Massachusetts. He received his PhD from UMass in 2015 under the supervision of Andy Barto, after which he worked as a postdoctoral research fellow at Carnegie Mellon University for two years under the supervision of Emma Brunskill before returning to UMass. Thomas’ research focuses on creating machine learning algorithms—particularly reinforcement learning algorithms—that provide high-probability guarantees of safety and fairness. He emphasizes that these algorithms are often applied by people who are experts in their own fields, but who may not be experts in machine learning and statistics, and so the algorithms must be easy to apply responsibly. His notable accomplishments include the publication of a paper on this topic in Science titled “Preventing Undesirable Behavior of Intelligent Machines” and testifying on this topic to the U.S. House of Representatives Taskforce on Artificial Intelligence at the Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services hearing.

Video Recording >>

CS Seminar Series

January 25, 2022

Medical imaging is a common diagnostic and research tool for studying diseases in the human body. The advent of advanced machine learning (ML) and neural network techniques has opened the possibility of learning more from imaging than has previously been possible. Beyond standard classification and segmentation applications of neural networks in imaging, there are questions about how sure we can be of the output of a neural network that is viewed as a black box. This talk will highlight work in three areas within the School of Medicine (SOM), all focused on medical imaging research. Neural network segmentation of images remains a primary application of AI algorithms, but the information returned by neural networks may be questioned by the medical community (and AI researchers, too). A method of quantifying the uncertainty in segmented images will be shown, followed by a discussion of applying a modification to a sequential learning algorithm. Further, preliminary results in artifact detection for large-scale processing of optical coherence tomography angiography (OCTA) images will be shown. The application of AI/ML techniques in medical imaging is still in its infancy, and new questions are being asked about how to apply both simple and advanced AI to understand disease progression.

Speaker Biography: Dr. Craig Jones earned a BSc (Hon) in Computer Science and Mathematics, an MSc in Medical Biophysics, and a PhD in Physics, all with a focus on image processing and numerical optimization algorithms for medical images. Dr. Jones completed a postdoctoral fellowship at the Kennedy Krieger Institute (KKI) in the Kirby Center for Functional Brain Imaging, doing numerical optimization and image processing work on advanced MRI acquisitions in collaboration with neurologists and neuroradiologists. He then returned to Canada for a few years, working at the Robarts Research Institute (UWO) in London, Ontario, on the quantification of agent uptake in animal images based on numerical curve fitting. He moved to a data science company, Spry, in Baltimore and applied machine learning and data science techniques to business applications. Imaging was always his interest, and so he moved to the Space Telescope Science Institute in Baltimore and worked on a team that created imaging algorithms for creating, correcting, and interpreting images from the James Webb Space Telescope. Several years ago he returned to Johns Hopkins and has been working in the Malone Center for Engineering in Healthcare, collaborating with numerous medical doctors in multiple departments within the SOM. His interest is in AI/ML, numerical optimization, and image processing techniques applied to medical images.

Video Recording >>

CLSP & CS Seminar Series

January 31, 2022

Natural language processing has been revolutionized by neural networks, which perform impressively well in applications such as machine translation and question answering. Despite their success, neural networks still have some substantial shortcomings: Their internal workings are poorly understood, and they are notoriously brittle, failing on example types that are rare in their training data. In this talk, I will use the unifying thread of hierarchical syntactic structure to discuss approaches for addressing these shortcomings. First, I will argue for a new evaluation paradigm based on targeted, hypothesis-driven tests that better illuminate what models have learned; using this paradigm, I will show that even state-of-the-art models sometimes fail to recognize the hierarchical structure of language (e.g., they erroneously conclude that “The book on the table is blue” implies “The table is blue”). Second, I will show how these behavioral failings can be explained through analysis of models’ inductive biases and internal representations, focusing on the puzzle of how neural networks represent discrete symbolic structure in continuous vector space. I will close by showing how insights from these analyses can be used to make models more robust through approaches based on meta-learning, structured architectures, and data augmentation.

Speaker Biography: Tom McCoy is a PhD candidate in the Department of Cognitive Science at Johns Hopkins University. As an undergraduate, he studied computational linguistics at Yale. His research combines natural language processing, cognitive science, and machine learning to study how we can achieve robust generalization in models of language, as this remains one of the main areas where current AI systems fall short. In particular, he focuses on inductive biases and representations of linguistic structure, since these are two of the major components that determine how learners generalize to novel types of input.

View the recording >>

Institute for Assured Autonomy & Computer Science Seminar Series

February 3, 2022

Abstract: Let us consider a difficult computer vision challenge: Would you want an algorithm to determine whether you should get a biopsy, based on an X-ray? That’s usually a decision made by a radiologist, based on years of training. We know that algorithms haven’t worked perfectly for a multitude of other computer vision applications, and biopsy decisions are harder than just about any other application of computer vision that we typically consider. The interesting question is whether it is possible that an algorithm could be a true partner to a physician, rather than making the decision on its own. To do this, at the very least, we would need an interpretable neural network that is as accurate as its black box counterparts. In this talk, Cynthia Rudin will discuss two approaches to interpretable neural networks: (1) case-based reasoning, where parts of images are compared to other parts of prototypical images for each class, and (2) neural disentanglement using a technique called concept whitening. The case-based reasoning technique is strictly better than saliency maps, and the concept whitening technique provides a strict advantage over the post-hoc use of concept vectors. She will discuss the following papers: “This Looks Like That: Deep Learning for Interpretable Image Recognition,” Conference and Workshop on Neural Information Processing Systems spotlight, 2019; “IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography,” 2021; “Concept Whitening for Interpretable Image Recognition,” Nature Machine Intelligence, 2020; “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead,” Nature Machine Intelligence, 2019; and “Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges,” 2021.
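To make the case-based reasoning idea concrete, the following is a minimal NumPy sketch of prototype-similarity scoring in the spirit of “This Looks Like That.” The function name, array shapes, and the log-based similarity are illustrative assumptions for this page, not the paper’s exact architecture or training procedure.

    import numpy as np

    def prototype_logits(patch_feats, prototypes, class_weights, eps=1e-4):
        """patch_feats: (num_patches, d) features from an image's spatial locations.
        prototypes: (num_prototypes, d) learned prototypical parts from training images.
        class_weights: (num_classes, num_prototypes) linear layer over prototype activations."""
        # squared distance between every image patch and every prototype
        d2 = ((patch_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
        best = d2.min(axis=0)                             # closest patch per prototype
        similarity = np.log((best + 1.0) / (best + eps))  # large when some patch is very close
        return class_weights @ similarity                 # per-class evidence

    # toy usage: a 7x7 feature map (49 patches), 10 prototypes, 3 classes
    rng = np.random.default_rng(0)
    feats = rng.standard_normal((49, 128))
    protos = rng.standard_normal((10, 128))
    W = rng.standard_normal((3, 10))
    print(prototype_logits(feats, protos, W))

Because each prototype activation traces back to one specific image patch, the evidence for a class can be presented as “this region looks like that prototypical region,” rather than being reconstructed after the fact with a saliency map.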

Speaker Biography: Cynthia Rudin is a professor of computer science, electrical and computer engineering, statistical science, mathematics, and biostatistics and bioinformatics at Duke University. She directs the Interpretable Machine Learning Lab, whose goal is to design predictive models with reasoning processes that are understandable to humans. Her lab applies machine learning in many areas, such as health care, criminal justice, and energy reliability. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. Rudin is the recipient of the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (the “Nobel Prize of AI”). She is a fellow of the American Statistical Association, the Institute of Mathematical Statistics, and the Association for the Advancement of Artificial Intelligence. Her work has been featured in many news outlets, including The New York Times, The Washington Post, The Wall Street Journal, and The Boston Globe.

Video Recording >>

CLSP & CS Seminar Series

February 14, 2022

As humans, our understanding of language is grounded in a rich mental model of “how the world works” that we learn through perception and interaction. We use this understanding to reason beyond what we literally observe or read, imagining how situations might unfold in the world. Machines today struggle with this kind of reasoning, which limits how they can communicate with humans.

In my talk, I will discuss three lines of work to bridge this gap between machines and humans. I will first discuss how we might measure grounded understanding. I will introduce a suite of approaches for constructing benchmarks, using machines in the loop to filter out spurious biases. Next, I will introduce PIGLeT: a model that learns physical commonsense understanding by interacting with the world through simulation, using this knowledge to ground language. From an English-language description of an event, PIGLeT can anticipate how the world state might change – outperforming text-only models that are orders of magnitude larger. Finally, I will introduce MERLOT, which learns about situations in the world by watching millions of YouTube videos with transcribed speech. Through training objectives inspired by the developmental psychology idea of multimodal reentry, MERLOT learns to fuse language, vision, and sound together into powerful representations.

Together, these directions suggest a path forward for building machines that learn language rooted in the world.

Speaker Biography: Rowan Zellers is a final-year PhD candidate at the University of Washington in Computer Science & Engineering, advised by Yejin Choi and Ali Farhadi. His research focuses on enabling machines to understand language, vision, sound, and the world beyond these modalities. He has been recognized through an NSF Graduate Fellowship and a NeurIPS 2021 outstanding paper award. His work has appeared in several media outlets, including Wired, the Washington Post, and the New York Times. He graduated from Harvey Mudd College with a B.S. in Computer Science & Mathematics and has interned at the Allen Institute for AI.

Video Recording >>

CLSP & CS Seminar Series

February 18, 2022

As AI-driven language interfaces (such as chat-bots) become more integrated into our lives, they need to become more versatile and reliable in their communication with human users. How can we make progress toward building more “general” models that are capable of understanding a broader spectrum of language commands, given practical constraints such as the limited availability of labeled data?

In this talk, I will describe my research toward addressing this question along two dimensions of generality. First I will discuss progress in “breadth” — models that address a wider variety of tasks and abilities, drawing inspiration from existing statistical learning techniques such as multi-task learning. In particular, I will showcase a system that works well on several QA benchmarks, resulting in state-of-the-art results on 10 benchmarks. Furthermore, I will show its extension to tasks beyond QA (such as text generation or classification) that can be “defined” via natural language. In the second part, I will focus on progress in “depth” — models that can handle complex inputs such as compositional questions. I will introduce Text Modular Networks, a general framework that casts problem-solving as natural language communication among simpler “modules.” Applying this framework to compositional questions by leveraging discrete optimization and existing non-compositional closed-box QA models results in a model with strong empirical performance on multiple complex QA benchmarks while providing human-readable reasoning.

I will conclude with future research directions toward broader NLP systems by addressing the limitations of the presented ideas and other missing elements needed to move toward more general-purpose interactive language understanding systems.

Speaker Biography: Daniel Khashabi is a postdoctoral researcher at the Allen Institute for Artificial Intelligence (AI2), Seattle. Previously, he completed his Ph.D. in Computer and Information Sciences at the University of Pennsylvania in 2019. His interests lie at the intersection of artificial intelligence and natural language processing, with a vision toward more general systems through unified algorithms and theories.

CLSP & CS Seminar Series

February 21, 2022

While rapid advances in technology and data availability have greatly increased the practical usability of natural language processing (NLP) models, current failures to center people in NLP research have contributed to an ethical crisis: models are liable to amplify stereotypes, spread misinformation, and perpetuate discrimination. These potential harms are difficult to identify and mitigate in data and models, because they are often subjective, subtle, dependent on social context, and cannot be reduced to supervised classification tasks. In this talk, I will discuss two projects focused on developing distantly-supervised NLP models to detect and mitigate these potential harms in text. The first exposes subtle media manipulation strategies in a state-influenced Russian newspaper by comparing media coverage with economic indicators, combining algorithms for processing text and economic data with frameworks from political science. The second develops a model to identify systemic differences in social media comments addressed towards men and women by training a model to predict the gender of the addressee and incorporating propensity matching and adversarial training to surface subtle features indicative of bias. This approach allows us to identify comments likely to contain bias without needing explicit bias annotations. Overall, my work aims to develop NLP models that facilitate text processing in diverse hard-to-annotate settings, provide insights into social-oriented questions, and advance the equity and fairness of NLP systems.

Speaker Biography: Anjalie Field is a PhD candidate at the Language Technologies Institute at Carnegie Mellon University and a visiting student at the University of Washington, where she is advised by Yulia Tsvetkov. Her work focuses on social-oriented natural language processing, specifically identifying and mitigating potential harms in text and text processing systems. This interdisciplinary work involves developing machine learning models to examine social issues like propaganda, stereotypes, and prejudice in complex real-world data sets, as well as exploring their amplification and ethical impacts in AI systems. Anjalie has published her work in NLP and interdisciplinary conferences, receiving a nomination for best paper at SocInfo 2020, and she is also the recipient of an NSF Graduate Research Fellowship and a Google PhD Fellowship. Prior to graduate school, Anjalie received her undergraduate degree in computer science, with minors in Latin and ancient Greek, from Princeton University.

Video Recording >>

Association for Computing Machinery Lecture in Memory of Nathan Krasnopoler

February 22, 2022

We’re in the middle of the most significant change to work practices that we’re likely to see in our lifetimes. For the past several millennia, space has been the primary technology people have used to get things done. The coming Hybrid Work Era, however, will be shaped by digital technology. In this talk I will give an overview of what research tells us about emerging work practices following the rapid move to remote and hybrid work in March 2020, and discuss the opportunity ahead of us to intentionally revisit how the key productivity technologies of space and software interact so as to create a new – and better – future of work. http://aka.ms/nfw

Speaker Biography: Jaime Teevan is Chief Scientist and Technical Fellow at Microsoft, where she is responsible for driving research-backed innovation in the company’s core products. Jaime is an advocate for finding smarter ways for people to make the most of their time, and believes in the positive impact that breaks and recovery have on productivity. She leads Microsoft’s future of work initiative which brings researchers from Microsoft, LinkedIn and GitHub together to study how the pandemic has changed the way people work. Previously she was Technical Advisor to CEO Satya Nadella and led the Productivity team at Microsoft Research. http://teevan.org

Video Recording >>

CS Seminar Series

February 22, 2022

Many societal issues, such as health care and voting, require decision-makers to study their stakeholders to design interventions or make a policy change. How do we conduct robust, generalizable, and engaging studies about human behavior? In this talk, I will share my vision for the role of AI in the quest to understand humans and how we could approach such a future. I will introduce my work on designing and building conversational AI to conduct engaging surveys and collect high-quality information. I will first demonstrate the effectiveness of conversational AIs in transforming online survey experiences through a field study. Then, I will present a human-in-the-loop framework to create more effective interview chatbots with active listening skills. Finally, I will talk about my future research perspectives on designing and developing human-centered AI to understand humans for social change.

Speaker Biography: Ziang Xiao is a Ph.D. candidate in Computer Science at the University of Illinois Urbana-Champaign, advised by Prof. Hari Sundaram and Prof. Karrie Karahalios. He completed his B.S. in Psychology and Statistics & Computer Science at the University of Illinois Urbana-Champaign. His research lies at the intersection of human-computer interaction, natural language processing, and social psychology. The goal of his research is to enhance human-AI interactions to expand our understanding of human behavior. His work created engaging conversational agents to collect high-quality information through survey interviews. Ziang Xiao has published multiple papers in top-tier conferences and journals, including CHI, TOCHI, CSCW, and IUI.

Video Recording >>

CLSP & CS Seminar Series

February 28, 2022

Since it is increasingly difficult to opt out of interacting with AI technology, people demand that AI be capable of maintaining contracts such that it supports agency and oversight for the people who are required to use it or who are affected by it. To help those people create a mental model of how to interact with AI systems, I extend the underlying models to self-explain—predict the label/answer and explain this prediction. In this talk, I will present how to generate (1) free-text explanations given in plain English that immediately tell users the gist of the reasoning, and (2) contrastive explanations that help users understand how they could change the text to get another label.

Speaker Biography: Ana Marasović is a postdoctoral researcher at the Allen Institute for AI (AI2) and the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research interests broadly lie in the fields of natural language processing, explainable AI, and vision-and-language learning. Her projects are motivated by a unified goal: improve interaction with and control of NLP systems to help people make these systems do what they want, with the confidence that they’re getting exactly what they need. Prior to joining AI2, Ana obtained her PhD from Heidelberg University.

Computer Science Seminar Series

March 1, 2022

Abstract: Machine learning has demonstrated great promise in scientific discovery, health care, and education, especially with the rise of large neural networks. However, large models trained on complex and rapidly growing data consume enormous computational resources. In this talk, Beidi Chen will describe her work on exploiting model sparsity with randomized algorithms to accelerate large ML systems on current hardware. She will begin by describing SLIDE, an open-source system for efficient sparse neural network training on CPUs that has been deployed by major technology companies and academic labs. It blends locality-sensitive hashing with multi-core parallelism and workload optimization to drastically reduce computations. SLIDE trains industry-scale recommendation models on a 44-core CPU 3.5x faster than TensorFlow on a V100 GPU. Next, she will present Pixelated Butterfly, a simple yet efficient sparse training framework on GPUs. It uses a simple static block-sparse pattern based on butterfly and low-rank matrices, taking into account GPU block-oriented efficiency. Pixelated Butterfly trains up to 2.5x faster (wall-clock) than its dense vision transformer and GPT-2 counterparts with no drop in accuracy. Chen will conclude by outlining future research directions for further accelerating ML pipelines and making ML more accessible to the general community, such as software-hardware co-design and sparse models for scientific computing and medical imaging.
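As a rough illustration of the locality-sensitive hashing idea behind SLIDE (the real system uses several hash tables, adaptive rehashing, and careful scheduling across CPU cores, none of which appear here), the sketch below evaluates only the output neurons whose SimHash code collides with the input’s code; all sizes and names are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_out, n_bits = 64, 4096, 12

    W = rng.standard_normal((d_out, d_in))        # one weight row per output neuron
    planes = rng.standard_normal((n_bits, d_in))  # random hyperplanes for SimHash

    def simhash(v):
        # the sign pattern of projections onto the random hyperplanes is the hash code
        return tuple((planes @ v > 0).astype(int))

    # pre-bucket neurons by the hash of their weight vectors
    buckets = {}
    for j in range(d_out):
        buckets.setdefault(simhash(W[j]), []).append(j)

    def sparse_forward(x):
        # Evaluate only neurons whose code collides with the input's code. Weight
        # rows pointing in a similar direction to x collide with high probability,
        # so most of the d_out dot products are skipped entirely.
        active = buckets.get(simhash(x), [])
        out = np.zeros(d_out)
        for j in active:
            out[j] = W[j] @ x
        return out, active

    out, active = sparse_forward(rng.standard_normal(d_in))
    print(f"evaluated {len(active)} of {d_out} neurons")

Looking up the colliding bucket is a hash-table operation, so the per-input cost scales with the number of active neurons rather than with the full layer width, which is the source of the CPU-side speedup.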

Speaker Biography: Beidi Chen is a postdoctoral scholar in the Computer Science Department at Stanford University, working with Christopher Ré. Her research focuses on large-scale machine learning and deep learning. Specifically, Chen designs and optimizes randomized algorithms (algorithm-hardware co-design) to accelerate large machine learning systems for real-world problems. Prior to joining Stanford, she received her PhD in computer science from Rice University, where she was advised by Anshumali Shrivastava. She received a BS in electrical engineering and computer science from the University of California, Berkeley in 2015. Chen has interned at Microsoft Research, NVIDIA Research, and Amazon AI. Her work has won Best Paper Awards at the Large Installation System Administration Conference and the International Conference on Intelligent and Interactive Systems and Applications. Chen was selected as a Rising Star in Electrical Engineering and Computer Science by the Massachusetts Institute of Technology and the University of Illinois Urbana-Champaign.

Video Recording >>

CS Seminar Series

March 3, 2022

Prediction models should know what they do not know if they are to be trusted for making important decisions. Prediction models would accurately capture their uncertainty if they could predict the true probability of the outcome of interest, such as the true probability of a patient’s illness given the symptoms. While outputting these probabilities exactly is impossible in most cases, I show that it is surprisingly possible to learn probabilities that are “indistinguishable” from the true probabilities for large classes of decision making tasks. I propose algorithms to learn indistinguishable probabilities, and show that they provably enable accurate risk assessment and better decision outcomes. In addition to learning probabilities that capture uncertainty, my talk will also discuss how to acquire information to reduce uncertainty in ways that optimally improve decision making. Empirically, these methods lead to prediction models that enable better and more confident decision making in applications such as medical diagnosis and policy making.

Speaker Biography: Shengjia is a PhD candidate in the Department of Computer Science at Stanford University. His research interests include probabilistic deep learning, uncertainty quantification, experimental design, and ML for science.

Video Recording >>

CS Seminar Series

March 8, 2022

Widely used technologies that support remote collaboration and content production (e.g., Microsoft Office, Google Docs, Zoom) contribute to ongoing issues of inequity for people with disabilities. These tools do not always allow for the same level of usability and efficiency for disabled people as their able-bodied peers experience. As workplaces and educational institutions continue to adopt more technology-driven, hybrid models during the pandemic, existing equity gaps are likely to increase without a holistic understanding of accessibility in content production and new tools and techniques to support accessible collaboration. My research addresses this challenge by understanding, designing, and building accessible collaborative content production systems for ability-diverse teams, i.e., teams involving people with and without disabilities. In this talk, I will give an overview of two main directions I am pursuing to enhance collaboration among blind and sighted people: collaborative writing and collaborative making.

First, drawing upon my interviews and observations with blind academics and professionals, I will explain the technological, social, and organizational factors that shape accessibility in collaborative writing. Then I will demonstrate a variety of auditory techniques and systems I developed to represent complex collaboration information in a shared document (e.g., comments, tracked changes, and real-time edits) and how these new techniques support blind writers in maintaining collaboration awareness and coordinating joint activities in asynchronous and synchronous settings.

Next, I will focus on my long-term ethnographic research within a community weaving studio where blind fiber artists work together with sighted instructors to produce hand-woven fabrics. I will share two examples of how I integrate technological augmentations into this traditional form of making to support the creative work of blind weavers. These include designing an audio-enhanced physical loom and an accessible tool for generating fabric patterns. I will conclude by discussing my future research plans on improving accessibility in collaboration, creativity, and learning.

Speaker Biography: Maitraye Das is a PhD candidate in Technology and Social Behavior, a joint doctoral program in Computer Science and Communication at Northwestern University. Her research sits at the intersection of Human-Computer Interaction (HCI), Computer-Supported Cooperative Work, and Accessible Computing, with a particular focus on studying and designing for accessible collaborative content creation in ability-diverse teams. Maitraye has published in premier HCI venues including ACM’s CHI, CSCW, ASSETS, TOCHI, and TACCESS. Her work has been recognized with two Best Paper Awards, three Best Paper Honorable Mentions, and a Diversity and Inclusion Award at top conferences including CHI and CSCW. She has also received a CS PhD Student Research Award and two research grants from Northwestern University. In 2021, Maitraye was selected as a Rising Star in EECS by Massachusetts Institute of Technology.

Video Recording >>

CS Seminar Series

March 10, 2022

Digital technologies are evolving with advanced capabilities. To function, these technologies rely on collecting and processing various types of sensitive data from their users. These data practices could expose users to a wide array of security and privacy risks. My research at the intersection of security, privacy, and human-computer interaction aims to help all people have safer interactions with digital technologies. In this talk, I will share results on people’s security and privacy preferences and attitudes toward technologies such as smart devices and remote communication tools. I will then describe a security and privacy transparency tool that I designed and evaluated to address consumers’ needs when purchasing and interacting with smart devices. I will end my talk by discussing emerging and future directions for my research to design equitable security and privacy tools and policies by studying and designing for the needs of diverse populations.

Speaker Biography: Pardis Emami-Naeini is a postdoctoral researcher in the Security and Privacy Research Lab at the University of Washington. Her research is broadly at the intersection of security and privacy, usability, and human-computer interaction. Her work has been published at flagship venues in security (IEEE S&P, SOUPS) and human-computer interaction and social sciences (CHI, CSCW) and covered by multiple outlets, including Wired and the Wall Street Journal. Her research has informed the National Institute of Standards and Technology (NIST), Consumer Reports, and World Economic Forum in their efforts toward designing usable and informative security and privacy labels for smart devices. Pardis received her B.Sc. degree in computer engineering from Sharif University of Technology in 2015 and her M.Sc. and Ph.D. degrees in computer science from Carnegie Mellon University in 2018 and 2020, respectively. She was selected as a Rising Star in electrical engineering and computer science in October 2019 and was awarded the 2019-2020 CMU CyLab Presidential Fellowship.

Video Recording >>

CLSP & CS Seminar Series

March 14, 2022

Systems that support expressive, situated natural language interactions are essential for expanding access to complex computing systems, such as robots and databases, to non-experts. Reasoning and learning in such natural language interactions is a challenging open problem. For example, resolving sentence meaning requires reasoning not only about word meaning, but also about the interaction context, including the history of the interaction and the situated environment. In addition, the sequential dynamics that arise between user and system in and across interactions make learning from static data, i.e., supervised data, both challenging and ineffective. However, these same interaction dynamics result in ample opportunities for learning from implicit and explicit feedback that arises naturally in the interaction. This lays the foundation for systems that continually learn, improve, and adapt their language use through interaction, without additional annotation effort. In this talk, I will focus on these challenges and opportunities. First, I will describe our work on modeling dependencies between language meaning and interaction context when mapping natural language in interaction to executable code. In the second part of the talk, I will describe our work on language understanding and generation in collaborative interactions, focusing on continual learning from explicit and implicit user feedback.

Speaker Biography: Alane Suhr is a PhD Candidate in the Department of Computer Science at Cornell University, advised by Yoav Artzi. Her research spans natural language processing, machine learning, and computer vision, with a focus on building systems that participate and continually learn in situated natural language interactions with human users. Alane’s work has been recognized by paper awards at ACL and NAACL, and has been supported by fellowships and grants, including an NSF Graduate Research Fellowship, a Facebook PhD Fellowship, and research awards from AI2, ParlAI, and AWS. Alane has also co-organized multiple workshops and tutorials appearing at NeurIPS, EMNLP, NAACL, and ACL. Previously, Alane received a BS in Computer Science and Engineering as an Eminence Fellow at the Ohio State University.

View the recording >>

Institute for Assured Autonomy & Computer Science Seminar Series

March 17, 2022

Abstract: Simultaneous localization and mapping (SLAM) is the process of constructing a global model from local observations, acquired as a mobile robot moves through an environment. SLAM is a foundational capability for mobile robots, supporting such core functions as planning, navigation, and control for a wide range of application domains. SLAM is one of the most deeply investigated fields in mobile robotics research, yet many open questions remain to enable the realization of robust, long-term autonomy. This talk will review the historical development of SLAM and will describe several current research projects in John Leonard’s group. Two key themes are increasing the expressive capacity of the environmental models used in SLAM systems (representation) and improving the performance of the algorithms used to estimate these models from data (inference). Leonard’s ultimate goal is to provide autonomous robots with a more comprehensive understanding of the world, facilitating lifelong learning in complex dynamic environments.

Video Recording >>

CS Seminar Series

March 22, 2022

Currently, machine learning (ML) systems have impressive performance but can behave in unexpected ways. These systems latch onto unintuitive patterns and are easily compromised, a source of grave concern for deployed ML in settings such as healthcare, security, and autonomous driving. In this talk, I will discuss how we can redesign the core ML pipeline to create reliable systems. First, I will show how to train provably robust models, which enables formal robustness guarantees for complex deep networks. Next, I will demonstrate how to make ML models more debuggable. This amplifies our ability to diagnose failure modes, such as hidden biases or spurious correlations. To conclude, I will discuss how we can build upon this “reliability stack” to enable broader robustness requirements, and develop new primitives that make ML debuggable by design.

Speaker Biography: Eric Wong is a postdoctoral researcher in the Computer Science and Artificial Intelligence Laboratory at Massachusetts Institute of Technology. His research focuses on the foundations for reliable systems: methods that allow us to diagnose, create, and verify robust systems. He is a 2020 Siebel Scholar and received an honorable mention for his thesis on the robustness of deep networks to adversarial examples at Carnegie Mellon University.

Video Recording >>

CS Seminar Series

March 29, 2022

Automated decision-making systems are increasingly being deployed in areas with personal and societal impacts, leading to growing interest and need for AI and ML systems that are robust, explainable, fair, and so on. It is important to note that these guarantees only hold with respect to a certain model of the world, with inherent uncertainties. In this talk, I will present how probabilistic modeling and reasoning, by incorporating a distribution, offer a principled way to handle different kinds of uncertainties when learning and deploying trustworthy AI systems. For example, when learning classifiers, the labels in the training data may be biased; I will show that probabilistic circuits, a family of tractable probabilistic models, can be effective in enforcing and auditing fairness properties by explicitly modeling a latent unbiased label. In addition, I will also discuss recent breakthroughs in tractable inference of more complex queries including information-theoretic quantities, to demonstrate the full potential of probabilistic reasoning. Finally, I will conclude with my future work towards a framework to more flexibly reason about and enforce trustworthy AI/ML system behaviors.

Speaker Biography: YooJung Choi is a Ph.D. candidate in Computer Science at the University of California, Los Angeles, advised by Guy Van den Broeck. Her research is broadly in the areas of artificial intelligence and machine learning, with a focus on probabilistic modeling and reasoning for automated decision-making. In particular, she is interested in theory and algorithms for tractable probabilistic inference and in applying these results to address fairness, robustness, and explainability, aiming in general toward trustworthy AI/ML. YooJung is a recipient of a UCLA fellowship in 2021-2022, and was selected for the Rising Stars in EECS workshop in 2020.

Video Recording >>

CS Seminar Series

March 31, 2022

Security and privacy research has led to major successes in improving the baseline level of digital security for the general population. Nevertheless, privacy and security tools and strategies are not equally effective for everyone—many high-risk communities face security, privacy, and safety risks that are not well addressed by current solutions. My work uses an interdisciplinary approach to investigate the digital safety needs and challenges for high-risk users, quantify the impact of government regulation and corporate policy on safety, and inform the design of technical and procedural interventions that support safety for all.

In this talk, I will discuss two studies in detail that showcase the opportunities of taking an interdisciplinary approach to supporting digital safety for high-risk communities such as sex workers, undocumented immigrants, and survivors of intimate partner violence. First, I will discuss findings from an in-depth qualitative interview study on the security needs and practices of sex workers in Europe, highlighting their safety needs as well as technical and policy challenges that impede their safety. Then, I will describe a large-scale global measurement study on geoblocking, which reveals corporate and legal policies that are contributing to the fragmentation of Internet access worldwide. I will further provide an overview of my future research agenda, which will leverage both qualitative and quantitative methods to inform policy and technical design.

Speaker Biography: Allison McDonald is a computer science PhD candidate at the University of Michigan and a Research Fellow at the Center on Privacy & Technology at Georgetown Law. Her research interests lie at the intersection of security, privacy, and human-computer interaction, with a particular emphasis on how technology exacerbates marginalization and impacts digital safety. Allison has been supported by a Facebook Fellowship and a Rackham Merit Fellowship, and her work has been recognized with Best Paper Awards at the USENIX Security Symposium, the IEEE Symposium on Security and Privacy, and the ACM Conference on Human Factors in Computing Systems (CHI). Before beginning her PhD, Allison was a Roger M. Jones fellow at the European University Viadrina studying international human rights and humanitarian law. She has a BSE in computer science and a BS in German from the University of Michigan.

Video Recording >>

Gerald M. Masson Distinguished Lecture Series

April 14, 2022

The nexus of advances in robotics, NLU, and machine learning has created opportunities for personalized robots that support human activities in daily life. The current pandemic has both caused and exposed unprecedented levels of health and wellness, education, and training needs worldwide, which must increasingly be addressed in homes. Socially assistive robotics has the potential to address those and longer-standing care needs through personalized and affordable in-home support.

This talk will discuss human-robot interaction methods for socially assistive robotics that utilize multi-modal interaction data and expressive and persuasive robot behavior to monitor, coach, and motivate users to engage in health, wellness, education, and training activities. Methods and results will be presented that include modeling, learning, and personalizing user motivation, engagement, and coaching of healthy children and adults, stroke patients, Alzheimer’s patients, and children with autism spectrum disorders, in short- and long-term (month+) deployments in schools, therapy centers, and homes. Research and commercial implications and pathways will be discussed.

Speaker Biography: Maja Matarić is the Chan Soon-Shiong Distinguished Professor of Computer Science, Neuroscience, and Pediatrics at USC, founding director of the Robotics and Autonomous Systems Center, and interim Vice President of Research. Her PhD and MS are from MIT, and her BS is from the University of Kansas. She is a Fellow of AAAS, IEEE, AAAI, and ACM, and a recipient of the US Presidential Award for Excellence in Science, Mathematics & Engineering Mentoring from President Obama, as well as the Anita Borg Institute Women of Vision, NSF CAREER, MIT TR35 Innovation, and IEEE RAS Early Career Awards. She is active in K-12 and diversity outreach. A pioneer of socially assistive robotics, she leads a lab whose research is developing personalized human-robot interaction methods for convalescence, rehabilitation, training, and education that have been validated in autism, stroke, Alzheimer’s, and other domains. She is also a co-founder of Embodied, Inc.

View the recording >>

Institute for Assured Autonomy & Computer Science Seminar Series

April 19, 2022

Abstract: In recent times, we often hear a call for the governance of AI systems, but what does that really mean? In this talk, Kush R. Varshney will first adopt a control theory perspective to explain governance: value alignment determines the reference input, data scientists act as the controller to meet those values in a machine learning system, and facts captured in transparent documentation serve as the feedback signal. He will later go into further depth on value alignment via CP-nets and performance metric elicitation, as well as AI testing and transparency via factsheets. He will conclude by adopting a nursing theory perspective to explain how the control theory perspective lacks caring and the need for a carative approach that starts with the real-world problem as experienced by the most vulnerable people.

Speaker Biography: Kush R. Varshney was born in Syracuse, New York in 1982. He received his BS (magna cum laude) in electrical and computer engineering with honors from Cornell University in 2004. He received his SM in 2006 and PhD in 2010, both in electrical engineering and computer science, from the Massachusetts Institute of Technology. While at MIT, he was an NSF Graduate Research Fellow. Varshney is a distinguished research staff member and manager with IBM Research at the Thomas J. Watson Research Center in Yorktown Heights, New York, where he leads the machine learning group in the Foundations of Trustworthy AI Department. Varshney was a visiting scientist at IBM Research – Africa in Nairobi, Kenya in 2019. He is the founding co-director of the IBM Science for Social Good initiative. Varshney applies data science and predictive analytics to human capital management, health care, olfaction, computational creativity, public affairs, international development, and algorithmic fairness, which has led to recognitions such as the 2013 Gerstner Award for Client Excellence for contributions to the WellPoint team, the Extraordinary IBM Research Technical Accomplishment for contributions to workforce innovation and enterprise transformation, and the Harvard Belfer Center Tech Spotlight runner-up for AI Fairness 360. Varshney conducts academic research on the theory and methods of trustworthy machine learning. His work has been recognized through Best Paper Awards at Fusion 2009, the 2013 Institute of Electrical and Electronics Engineers (IEEE) International Conference on Service Operations and Logistics, and Informatics 2013, the 2014 ACM Special Interest Group on Knowledge Discovery and Data Mining conference, the 2015 Society for Industrial and Applied Mathematics International Conference on Data Mining, and the 2019 Computing Community Consortium/Schmidt Futures Computer Science for Social Good White Paper Competition. He self-published a book entitled Trustworthy Machine Learning in 2021 and is a senior member of the IEEE.

Video Recording >>

CS Seminar Series

April 26, 2022

Let’s pursue algorithms with the view of an architect challenged by a major project who is encouraged to create a design that is captivating and memorable for the ages.

For practical “input” we seek compressed data structures providing efficient access to critical operands in formats allowing direct use by the most important operations.

For “output” we must generate visual presentations conveying results that capture audience attention and retention. Consider that we must compete with the high-budget commercials blanketing the multitude of streaming services.

For “well defined” we seek procedures expressible in a programming language that can be efficiently compiled to exploit available computing engines.

For “finite” we must utilize details of the computing engine. For current electronic systems, we seek alternative approaches exploiting the hardware architecture of arithmetic at the binary level. For combinatorial problems the hardware realization of discrete structure operations can provide breakthroughs. Advice: The convenience of employing packages for subproblems is likely bested by AI designs.

We present numerous problems we have used to challenge algorithm engineering students to demonstrate their own creative skills in fashioning new procedures. While many of these problems seem irrelevant, their somewhat hidden properties are shown to foster new approaches. In our classroom experience, these problems have identified the most innately creative students.

Examples we shall cover — involving manipulation of data structures, exploiting algorithms in hardware, and reverse engineering — arise from the following challenges, each with its own unique story:

  1. Design a data structure for planar graphs that supports both these specifications:
    • Are two particular nodes adjacent? Resolved in constant time.
    • Provide access to all adjacencies of any given node, resolved in time linear in the number of adjacencies.
  2. Design an algorithm linear in the number of references to a random number generator that supports placing millions or more random points on the surface of a sphere in competitively superior time using currently available arithmetic hardware operations.
  3. Which year in the last thousand years has the longest Roman Numeral representation? (A brute-force sketch of the last two challenges follows below.)
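As one illustration of the flavor of these exercises, here is a brute-force Python sketch for the last two challenges. It uses the standard Gaussian-normalization recipe for sphere points and an exhaustive search for the Roman numeral question; it is offered only as a baseline, not as the creative solutions the talk has in mind.

    import numpy as np

    def points_on_sphere(m, rng=None):
        # Uniform points on the unit sphere from normalized Gaussian triples:
        # 3*m normal draws from the generator, i.e., linear in m (challenge 2).
        rng = rng or np.random.default_rng()
        p = rng.standard_normal((m, 3))
        return p / np.linalg.norm(p, axis=1, keepdims=True)

    def to_roman(n):
        vals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"), (90, "XC"),
                (50, "L"), (40, "XL"), (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
        out = []
        for v, s in vals:
            while n >= v:
                out.append(s)
                n -= v
        return "".join(out)

    # challenge 3: exhaustive search over the last thousand years
    year = max(range(1023, 2023), key=lambda y: len(to_roman(y)))
    print(year, to_roman(year), len(to_roman(year)))   # 1888 MDCCCLXXXVIII 13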

Speaker Biography: David W. Matula is professor emeritus of computer science at Southern Methodist University, where he arrived as chair (1974-1979) and has served as a professor since 1974. He received the Ph.D. (’66) in engineering science (Operations Research) from the University of California, Berkeley, following a B.S. (’59) in engineering physics from Washington University, St. Louis. He was named the inaugural Cruse C. and Marjorie F. Calahan Centennial Chair in Engineering in 2016.

His research focuses on the foundations and applications of algorithm engineering with specific emphasis on computer arithmetic and graph/network algorithms. Two-thirds of his publications focus on computer arithmetic and have appeared primarily in the IEEE and computer science literature. He co-authored the research-oriented text Finite Precision Number Systems and Arithmetic published by Cambridge University Press, and also holds some 20 patents. His computer arithmetic research has been supported for several decades by federal and state agencies and corporations, including NSF, Texas ATP, T.I., Cyrix, and the Semiconductor Research Corporation. Professor Matula’s publications on graph algorithms have appeared in a variety of mathematical and scientific journals including J. Chem. Physics, J. Am. Chem. Soc., Comp. and Biomedical Res., and Geographical Analysis.

IAA & CS Seminar Series

May 20, 2022

Biases are a major source of harm in our world, and it is now widely recognized that the use of algorithms and AI can maintain, exacerbate, and even create these social, structural, and psychological biases. In response, there have been many proposed measures of bias, aspirational principles, and even proposed regulations and policies to eliminate (algorithmic) biases. I will argue, though, that most of these responses fail to address the core ethical and societal challenges of bias; at best, they provide very noisy guides that might sometimes be helpful. After surveying these issues, I will offer a diagnosis: our solutions have focused on either technical or policy responses alone, rather than on joint technical-policy solutions informed by domain expertise. I will provide an example of such a joint solution based on our recent work that integrates bias discovery (technical) with mechanism knowledge (domain expertise) to identify potential responses (policy). While this approach also has flaws, it is better able to identify both sources of problematic bias and potential mitigation actions.

Speaker Biography: David Danks is Professor of Data Science & Philosophy and affiliate faculty in Computer Science & Engineering at the University of California, San Diego. His research interests range widely across philosophy, cognitive science, and machine learning, including their intersection. Danks has examined the ethical, psychological, and policy issues around AI and robotics in multiple sectors, including transportation, healthcare, privacy, and security. He has also done significant research in computational cognitive science and developed multiple novel causal discovery algorithms for complex types of observational and experimental data. Danks is the recipient of a James S. McDonnell Foundation Scholar Award, as well as an Andrew Carnegie Fellowship. He currently serves on multiple advisory boards, including the National AI Advisory Committee.