When: Jan 10, 2023 @ 11:00 AM

You are invited to the January virtual talk in a series co-presented by the JHU Institute for Assured Autonomy and the Computer Science Department, featuring national scholars presenting new research and development at the intersection of autonomy and assurance.

This talk, “Adversarial Robustness and Forensics for Deep Neural Networks,” features speaker Ben Zhao, Neubauer Professor of Computer Science at the University of Chicago, presenting virtually on Tuesday, January 10 at 11 a.m. ET. The seminar is open to the public.

ABSTRACT:

Despite their tangible impact on a wide range of real-world applications, deep neural networks are known to be vulnerable to numerous attacks, including inference-time attacks based on adversarial perturbations, as well as training-time attacks such as backdoors. The security community has done extensive work to explore both attacks and defenses, only to produce a seemingly endless cat-and-mouse game.
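As a concrete illustration of the inference-time attack class mentioned above, the sketch below implements a basic adversarial perturbation via the fast gradient sign method (FGSM). The toy model, inputs, and epsilon are hypothetical placeholders, not details from the speaker's work:

```python
# Minimal FGSM sketch: perturb inputs within an L-infinity ball to raise the loss.
# The model, inputs, labels, and epsilon here are illustrative assumptions.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x (L-infinity budget epsilon)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage with a hypothetical linear classifier on random "images":
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)        # a batch of assumed inputs in [0, 1]
y = torch.randint(0, 10, (4,))      # their assumed true labels
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())      # perturbation stays within epsilon
```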

In this talk, I will present some of our recent work on digital forensics for ML. I will begin by summarizing our goal: a broader, more pragmatic view of adversarial robustness that moves beyond the current static, binary framing of attack and defense. As in real-world security systems, we believe that attackers with sufficient incentive and resources will eventually succeed in compromising DNN systems. Just as in traditional security realms, digital forensics tools can serve dual purposes: identifying the sources of a compromise so it can be mitigated, while also providing a strong deterrent against future attackers.

I will present recent results from two papers in this space (USENIX Security 2022 and CCS 2022), in which we explore the role of post-attack forensics in improving DNN robustness against both poisoning attacks and adversarial examples. Against poisoning attacks, we show how to use forensic evidence to identify the subsets of training data responsible for an attack. Against adversarial examples, we show how to survive and recover from server breaches that give attackers full access to proprietary DNN models.
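To make the poisoning-forensics idea more concrete, the following is a deliberately simplified, hypothetical sketch: given one captured misclassification as evidence, it ranks training examples by the similarity of their loss gradients to the evidence gradient and flags the most aligned subset as suspect. This is an influence-style illustration under assumed toy data, not the actual method from the USENIX Security or CCS 2022 papers:

```python
# Hypothetical forensic-tracing sketch: score each training example by how well
# its loss gradient aligns with the gradient induced by one piece of attack
# evidence. All names and data here are illustrative assumptions.
import torch
import torch.nn as nn

def suspicion_scores(model: nn.Module,
                     evidence_x: torch.Tensor, evidence_y: torch.Tensor,
                     train_x: torch.Tensor, train_y: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between each training example's loss gradient and the
    gradient from the captured attack evidence (higher = more suspect)."""
    def flat_grad(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        model.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        return torch.cat([p.grad.reshape(-1) for p in model.parameters()])

    g_evidence = flat_grad(evidence_x, evidence_y)
    scores = []
    for i in range(train_x.shape[0]):
        g_i = flat_grad(train_x[i:i + 1], train_y[i:i + 1])
        scores.append(nn.functional.cosine_similarity(g_evidence, g_i, dim=0))
    return torch.stack(scores)

# Toy usage: flag the five most suspicious training points.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
train_x, train_y = torch.rand(100, 3, 32, 32), torch.randint(0, 10, (100,))
evidence_x, evidence_y = torch.rand(1, 3, 32, 32), torch.randint(0, 10, (1,))
print(suspicion_scores(model, evidence_x, evidence_y,
                       train_x, train_y).topk(5).indices)
```

In practice, tracing would need to scale far beyond per-example gradient comparisons; this sketch only conveys the shape of the problem, which is identifying the responsible training subset from post-attack evidence.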