Summer 2022

Video Recording >>

IAA & CS Seminar Series

June 21, 2022

Dr. Marzyeh Ghassemi focuses on creating and applying machine learning to understand and improve health in ways that are robust, private, and fair. Health is important, and improvements in health improve lives. However, we still don’t fundamentally understand what it means to be healthy, and the same patient may receive different treatments across different hospitals or clinicians as new evidence is discovered or individual illness is interpreted. Dr. Ghassemi will talk about her work on training models that do not learn biased rules or recommendations that harm minorities or minoritized populations. The Healthy ML group tackles the many novel technical opportunities for machine learning in health and works to make important progress through careful application to this domain.

Speaker Biography: Dr. Marzyeh Ghassemi is an Assistant Professor at MIT in Electrical Engineering and Computer Science (EECS) and the Institute for Medical Engineering & Science (IMES), and a Vector Institute faculty member holding a Canadian CIFAR AI Chair and Canada Research Chair. She holds MIT affiliations with the Jameel Clinic and CSAIL. Professor Ghassemi holds a Herman L. F. von Helmholtz Career Development Professorship, and was named a CIFAR Azrieli Global Scholar and one of MIT Tech Review’s 35 Innovators Under 35. Previously, she was a Visiting Researcher with Alphabet’s Verily and an Assistant Professor at the University of Toronto. Prior to her PhD in Computer Science at MIT, she received an MSc in biomedical engineering from Oxford University as a Marshall Scholar, and B.S. degrees in computer science and electrical engineering as a Goldwater Scholar at New Mexico State University.

Video Recording >>

CS Seminar Series

July 7, 2022

Applications often have fast-paced release schedules, but adoption of software dependency updates can lag by years, leaving applications susceptible to security risks and unexpected breakage. To address this problem, we present UPGRADVISOR, a system that reduces developer effort in evaluating dependency updates and can, in many cases, automatically determine which updates are backward-compatible versus API-breaking. UPGRADVISOR introduces a novel co-designed static analysis and dynamic tracing mechanism to gauge the scope and effect of dependency updates on an application. Static analysis prunes changes irrelevant to an application and clusters relevant ones into targets. Dynamic tracing needs to focus only on whether targets affect an application, making it fast and accurate. UPGRADVISOR handles dynamic interpreted languages and introduces call graph over-approximation to account for their lack of type information and selective hardware tracing to capture program execution while ignoring interpreter machinery. We have implemented UPGRADVISOR for Python and evaluated it on 172 dependency updates previously blocked from being adopted in widely-used open-source software, including Django, aws-cli, tfx, and Celery. UPGRADVISOR automatically determined that 56% of dependencies were safe to update and reduced by more than an order of magnitude the number of code changes that needed to be considered by dynamic tracing. Evaluating UPGRADVISOR’s tracer in a production-like environment incurred only 3% overhead on average, making it fast enough to deploy in practice. We submitted safe updates that were previously blocked as pull requests for nine projects, and their developers have already merged most of them.
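The pipeline described above — prune changes the application cannot reach, then cluster the survivors into targets for dynamic tracing — can be illustrated with a toy sketch. This is not UPGRADVISOR's actual implementation; the function names, the module-level clustering heuristic, and the example call graph are all hypothetical, and a real over-approximated call graph for a dynamic language would be far larger.

```python
from collections import defaultdict, deque

def reachable_functions(call_graph, entry_points):
    """BFS over an over-approximated call graph to find every
    function the application could possibly invoke."""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def prune_and_cluster(changed_funcs, call_graph, entry_points):
    """Discard dependency changes the app can never reach, and group
    the relevant ones by module so dynamic tracing has few targets."""
    reachable = reachable_functions(call_graph, entry_points)
    targets = defaultdict(list)
    for fn in changed_funcs:
        if fn in reachable:
            targets[fn.rsplit(".", 1)[0]].append(fn)  # cluster by module
    return dict(targets)

# Toy example: the app calls requests.get, which calls
# requests.sessions.send; the update also touched a helper
# the app never reaches, so it is pruned away.
call_graph = {
    "app.main": ["requests.get"],
    "requests.get": ["requests.sessions.send"],
}
changed = ["requests.sessions.send", "requests.utils.unused_helper"]
print(prune_and_cluster(changed, call_graph, ["app.main"]))
# {'requests.sessions': ['requests.sessions.send']}
```

Because the static phase over-approximates reachability, it can only err on the safe side: a change it prunes is provably irrelevant, while the surviving targets are handed to the (more expensive) dynamic tracer.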

Speaker Biography: Yaniv is a postdoctoral researcher at Columbia University working with Junfeng Yang. His research focuses on improving the reliability and safety of software. He is broadly interested in program analysis, systems, and machine learning. He received his PhD from the Technion, where he was advised by Eran Yahav.

Video Recording >>

CS Seminar Series

July 14, 2022

Over the past few decades, Mixed Reality has emerged as a technology capable of enriching human perception by generating virtual content that consistently co-exists and interacts with the real world. Although this content can be delivered through any of the senses, vision-based applications have drawn particular attention from the research community. This Mixed Reality modality has proven particularly valuable in guiding users during tasks that require the manipulation and alignment of real and virtual objects. However, correctly estimating the virtual content’s depth remains challenging and frequently leads to inaccurate placement of the objects of interest.

This talk introduces fundamental concepts of visual perception and their relevance during the design and implementation of Mixed Reality applications. It explores how our visual system uses multiple cues to gather information from the environment and estimate the depth of objects, as well as why reproducing these cues is particularly challenging when creating Mixed Reality experiences. In addition, it demonstrates the relevance of integrating these concepts to enhance the perception of users of this technology. Finally, it showcases how these fundamental concepts can be transferred into medical applications and discusses how they can shape the future of healthcare.

Speaker Biography: Alejandro Martin Gomez is a postdoctoral fellow in the Laboratory for Computational Sensing and Robotics at Johns Hopkins University. Before joining Johns Hopkins, Alejandro completed his Ph.D. in Computer Science at the Technical University of Munich, from which he graduated summa cum laude. His research interests include the study of fundamental concepts of visual perception and their transferability to medical applications that involve using augmented and virtual reality. His work has been published in some of the most prestigious journals and conferences in these fields, including the IEEE International Symposium on Mixed and Augmented Reality, the IEEE Conference on Virtual Reality and 3D User Interfaces, and the IEEE Transactions on Visualization and Computer Graphics. Alejandro has also served as a mentor and advisor for several students and scholars at the Technical University of Munich, Johns Hopkins University, and more recently the Friedrich-Alexander University of Erlangen-Nürnberg. In addition, he contributes to several editorial activities and has served as a program committee member of the International Symposium on Mixed and Augmented Reality in 2016, 2018, and 2021.

Video Recording >>

IAA & CS Seminar Series

July 19, 2022

Neural networks have become a crucial element in modern artificial intelligence. When applying neural networks to mission-critical systems such as autonomous driving and aircraft control, it is often desirable to formally verify trustworthiness properties such as safety and robustness. In this talk, I will first introduce the problem of neural network verification and the challenges of guaranteeing the behavior of a neural network given input specifications. Then, I will discuss bound-propagation-based algorithms (e.g., CROWN and beta-CROWN), which are efficient, scalable, and powerful techniques for formal verification of neural networks and can also generalize to computational graphs beyond neural networks. My talk will highlight state-of-the-art verification techniques used in our α,β-CROWN (alpha-beta-CROWN) verifier that won the 2nd International Verification of Neural Networks Competition (VNN-COMP’21), as well as novel applications of neural network verification.
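To make the idea of bound propagation concrete, the sketch below implements interval bound propagation (IBP), a deliberately simple precursor to the linear-relaxation bounds used by CROWN and β-CROWN: each affine layer maps an input interval to a sound output interval, and ReLU clips the bounds. The two-layer network and the robustness query are invented for illustration and are not from the talk.

```python
import numpy as np

def interval_bound_propagation(layers, x_lo, x_hi):
    """Propagate elementwise input intervals [x_lo, x_hi] through
    affine + ReLU layers, yielding sound (if loose) output bounds."""
    lo, hi = x_lo, x_hi
    for i, (W, b) in enumerate(layers):
        center, radius = (lo + hi) / 2, (hi - lo) / 2
        c = W @ center + b          # the interval center maps affinely
        r = np.abs(W) @ radius      # the radius grows by |W|
        lo, hi = c - r, c + r
        if i < len(layers) - 1:     # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# Toy robustness query: does the output stay positive for every input
# within an L-infinity ball of radius 0.1 around x = [1, -1]?
layers = [(np.array([[1., -1.], [0.5, 0.5]]), np.zeros(2)),
          (np.array([[1., 1.]]), np.array([0.]))]
x = np.array([1., -1.])
lo, hi = interval_bound_propagation(layers, x - 0.1, x + 0.1)
print(lo, hi)  # a positive lower bound certifies the property
```

CROWN-style methods tighten these bounds by tracking linear lower and upper relaxations of each ReLU instead of raw intervals, which is what makes verification scale to much larger networks.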

Speaker Biography: Huan Zhang is a postdoctoral researcher at CMU, supervised by Prof. Zico Kolter. He received his Ph.D. degree at UCLA in 2020. Huan’s research focuses on the trustworthiness of artificial intelligence, especially on developing formal verification methods to guarantee the robustness and safety of machine learning. Huan was awarded an IBM Ph.D. Fellowship, and he led the winning team in the 2021 International Verification of Neural Networks Competition. Huan received the 2021 AdvML Rising Star Award sponsored by the MIT-IBM Watson AI Lab.

Video Recording >>

CS Seminar Series

July 21, 2022

Deep neural networks (DNNs) are notoriously vulnerable to maliciously crafted adversarial attacks. We address this fragility from a network-topology perspective. Specifically, we enforce appropriate sparsity forms to serve as an implicit regularization in robust training. In this talk, I will first discuss how sparsity fixes robust overfitting and leads to superior robust generalization. Then, I will present the beneficial role sparsity plays in certified robustness. Finally, I will show that sparsity can also function as an effective detector to uncover maliciously injected Trojan patterns.
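As background for the sparsity forms discussed above, the snippet below sketches the simplest such mechanism: one-shot magnitude pruning, which zeroes the smallest-magnitude weights and keeps a binary mask. The function and the example matrix are illustrative assumptions; the talk's sparsity patterns (applied during robust training) are considerably more sophisticated.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.
    Returns the pruned weights and the binary mask.
    (Ties at the threshold may prune slightly more than requested.)"""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

w = np.array([[0.9, -0.05],
              [0.02, -1.2]])
pruned, mask = magnitude_prune(w, 0.5)
print(mask)  # the two smallest-magnitude entries are zeroed
```

In robust training, such a mask acts as the implicit regularizer the abstract refers to: constraining the network's topology shrinks its effective capacity, which is one intuition for why sparsity mitigates robust overfitting.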

Speaker Biography: Tianlong Chen is currently a fourth-year Ph.D. Candidate in Electrical and Computer Engineering at the University of Texas at Austin, advised by Dr. Zhangyang (Atlas) Wang. Before coming to UT Austin, Tianlong received his Bachelor’s degree at the University of Science and Technology of China. His research focuses on building accurate, efficient, robust, and automated machine learning systems. Recently, Tianlong has been investigating extreme sparse neural networks with undamaged trainability, expressivity, and transferability, as well as the implicit regularization effects of appropriate sparsity patterns on data-efficiency, generalization, and robustness. Tianlong has published more than 70 papers at top-tier venues (NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, etc.). Tianlong is a recipient of the 2021 IBM Ph.D. Fellowship Award, the 2021 Graduate Dean’s Prestigious Fellowship, and the 2022 Adobe Ph.D. Fellowship Award. Tianlong has conducted research internships at Google, IBM Research, Facebook Research, Microsoft Research, and Walmart Technology.

IAA & CS Seminar Series

August 16, 2022

Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms, especially deep neural networks (DNNs), are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, healthcare, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious consequences. Interest in this area of research has exploded. In this talk, we will emphasize the need for a security mindset in trustworthy machine learning, and then cover some lessons learned.

Speaker Biography: Somesh Jha received his B.Tech from the Indian Institute of Technology, New Delhi in Electrical Engineering. He received his Ph.D. in Computer Science from Carnegie Mellon University under the supervision of Prof. Edmund Clarke (a Turing Award winner). Currently, Somesh Jha is the Lubar Professor in the Computer Sciences Department at the University of Wisconsin (Madison). His work focuses on analysis of security protocols, survivability analysis, intrusion detection, formal methods for security, and analyzing malicious code. Recently, he has focused his interest on privacy and adversarial ML (AML). Somesh Jha has published several articles in highly refereed conferences and prominent journals. He has won numerous best-paper and distinguished-paper awards. Prof. Jha is a fellow of the AAAS, ACM, and IEEE.

IAA & CS Seminar Series

September 29, 2022

Recent advances in AI, machine learning, and robotics have significantly enhanced the capabilities of machines. Machine intelligence is now able to support human decision making, augment human capabilities, and, in some cases, take over control from humans and act fully autonomously. Machines are becoming more tightly embedded into systems alongside humans, interacting with and influencing each other in a number of ways. Such human-AI partnerships are a new form of socio-technical system in which the potential synergies between humans and machines are much more fully utilised. Designing, building, and deploying human-AI partnerships present a number of new challenges as we begin to understand their impact on our physical and mental well-being, our personal freedoms, and those of the wider society. In this talk, I will focus on the challenges in designing trustworthy human-AI partnerships. I will detail the multiple elements of trust in human-AI partnerships and discuss the associated research challenges. I will also aim to identify the risks associated with human-AI partnerships and the measures needed to mitigate these risks. I will conclude by giving a brief overview of the UKRI Trustworthy Autonomous Systems Programme (www.tas.ac.uk), a £33m programme launched in 2020 involving over 20 universities, 100+ industry partners, and over 200 researchers.

Speaker Biography: Prof. Sarvapali Ramchurn is a Professor of Artificial Intelligence, Turing Fellow, and Fellow of the Institution of Engineering and Technology. He is the Director of the UKRI Trustworthy Autonomous Systems hub (www.tas.ac.uk) and Co-Director of the Shell-Southampton Centre for Maritime Futures. He is also Co-CEO of Empati Ltd, an AI startup working on decentralised green hydrogen technologies. His research concerns the design of Responsible Artificial Intelligence for socio-technical applications, including energy systems and disaster management. He has won multiple best paper awards for his research in multi-agent systems, energy management, and disaster response, and is a winner of the AXA Research Fund Award (2018) for his work on Responsible Artificial Intelligence.