Institute for Assured Autonomy

The Institute for Assured Autonomy partners with industry, government, and academia to ensure the safe, secure, reliable, and predictable integration of autonomous systems into society. Its work spans the full spectrum of research and application across three pillars: technology, ecosystem, and policy and governance.

Topics of interest to the IAA include adversarial machine learning, risk-sensitive adversarial learning for autonomous systems, explaining the behavior of deep learning systems, runtime assurance of distributed intelligent control systems, regression analysis for autonomy, autonomous robots with safe and socially acceptable navigation and human interaction, methods to prevent bias and data leaks in autonomous systems, and frameworks for designing policies that advance social benefit and manage risk across multiple domains.



Yair Amir

Raman Arora

Yinzhi Cao

Anton Dahbura

Mark Dredze

Matthew D. Green

Susan Hohenberger

Chien-Ming Huang

Abhishek Jain

Xiangyang Li

Aviel Rubin

Alan Yuille
