BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Computer Science - ECPv5.12.3//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Computer Science
X-ORIGINAL-URL:https://www.cs.jhu.edu
X-WR-CALDESC:Events for Department of Computer Science
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201020T104500
DTEND;TZID=America/New_York:20201020T120000
DTSTAMP:20220112T093849Z
CREATED:20210629T210724Z
LAST-MODIFIED:20210629T210724Z
UID:1962490-1603190700-1603195200@www.cs.jhu.edu
SUMMARY:CS Seminar Series: Jeremias Sulam\, Johns Hopkins University – “Overparameterized and Adversarially Robust Sparse Models”
DESCRIPTION:Location\nZoom: https://wse.zoom.us/j/93897562229\n\nAbstract\nSparsity has been a driving force in signal & image processing and machine learning for decades. In this talk we’ll explore sparse representations based on dictionary learning techniques from two perspectives: over-parameterization and adversarial robustness. First\, we will characterize the surprising phenomenon that dictionary recovery can be facilitated by searching over the space of larger (over-realized/parameterized) models. This observation is general and independent of the specific dictionary learning algorithm used. We will demonstrate this observation in practice and provide a theoretical analysis of it by tying recovery measures to generalization bounds. We will further show that an efficient and provably correct distillation mechanism can be employed to recover the correct atoms from the over-realized model\, consistently providing better recovery of the ground-truth model.\n\nWe will then switch gears towards the analysis of adversarial examples\, focusing on the hypothesis class obtained by combining a sparsity-promoting encoder with a linear classifier\, and show an interesting interplay between the flexibility and stability of the (supervised) representation map and a notion of margin in the feature space. Leveraging a mild encoder-gap assumption in the learned representations\, we will provide a bound on the generalization error of the robust risk to L2-bounded adversarial perturbations and a robustness certificate for end-to-end classification. We will demonstrate the applicability of our analysis by computing certified accuracy on real data and comparing with other alternatives for certified robustness. This analysis will shed light on how to characterize this interplay for more general models.\n\nBio\nJeremias Sulam is an assistant professor in the Biomedical Engineering department at JHU and a faculty member of the Mathematical Institute for Data Science (MINDS) and the Center for Imaging Science (CIS). He received his PhD in Computer Science from the Technion – Israel Institute of Technology in 2018. He is the recipient of the Best Graduates Award of the Argentinean National Academy of Engineering. His research interests include machine learning\, signal and image processing\, representation learning and their application to biomedical sciences.\n\nHost\nDepartment of Computer Science\n\nVideo\nWatch seminar video.
URL:https://www.cs.jhu.edu/event/cs-seminar-series-jeremias-sulam-johns-hopkins-university-johns-hopkins-university-overparameterized-and-adversarially-robust-sparse-models/
END:VEVENT
END:VCALENDAR