BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.13.76//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Computer Science
X-WR-CALDESC:The New Age of Discovery
X-FROM-URL:https://www.cs.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20201101T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20211107T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20210314T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20220313T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:seminar-590@www.cs.jhu.edu
DTSTAMP:20201019T194155Z
CATEGORIES;LANGUAGE=en-US:Seminar
DESCRIPTION:Location\nZoom: https://wse.zoom.us/j/93897562229\nAbstract\nSp
arsity has been a driving force in signal & image processing and machine l
earning for decades. In this talk we’ll explore sparse representations bas
ed on dictionary learning techniques from two perspectives: over-parameter
ization and adversarial robustness. First\, we will characterize the surp
rising phenomenon that dictionary recovery can be facilitated by searching
over the space of larger (over-realized/parameterized) models. This obser
vation is general and independent of the specific dictionary learning algo
rithm used. We will demonstrate this observation in practice and provide a
theoretical analysis of it by tying recovery measures to generalization b
ounds. We will further show that an efficient and provably correct distill
ation mechanism can be employed to recover the correct atoms from the over
-realized model\, consistently providing better recovery of the ground-tru
th model.\nWe will then switch gears towards the analysis of adversarial e
xamples\, focusing on the hypothesis class obtained by combining a sparsit
y-promoting encoder coupled with a linear classifier\, and show an interes
ting interplay between the flexibility and stability of the (supervised) r
epresentation map and a notion of margin in the feature space. Leveraging
a mild encoder gap assumption in the learned representations\, we will pro
vide a bound on the generalization error of the robust risk to L2-bounded
adversarial perturbations and a robustness certificate for end-to-end clas
sification. We will demonstrate the applicability of our analysis by compu
ting certified accuracy on real data\, and comparing with other alternativ
es for certified robustness. This analysis will shed light on how to ch
aracterize this interplay for more general models.\nBio\nJeremias Sulam is
an assistant professor in the Biomedical Engineering Department at JHU\,
and a faculty member of the Mathematical Institute for Data Science (MINDS
) and the Center for Imaging Science (CIS). He received his PhD in Compute
r Science from the Technion-Israel Institute of Technology in 2018. He i
s the recipient of the Best Graduates Award of the Argentinean National Ac
ademy of Engineering. His research interests include machine learning\, si
gnal and image processing\, representation learning and their application
to biomedical sciences.\nHost\nDepartment of Computer Science
DTSTART;TZID=America/New_York:20201020T104500
DTEND;TZID=America/New_York:20201020T120000
SEQUENCE:0
SUMMARY:CS Seminar Series: Jeremias Sulam\, Johns Hopkins University – “Ov
erparameterized and Adversarially Robust Sparse Models”
URL:https://www.cs.jhu.edu/events/cs-seminar-series-jeremias-sulam-johns-ho
pkins-university-johns-hopkins-university-tba/
X-ALT-DESC;FMTTYPE=text/html:<h4>Location</h4><p>Zoom: https://wse.zoom
.us/j/93897562229</p><h4>Abstract</h4><p>Sparsity has been a driving for
ce in signal &amp; image processing and machine learning for decades. In
 this talk we’ll explore sparse representations based on dictionary lear
ning techniques from two perspectives: over-parameterization and adversa
rial robustness. First\, we will characterize the surprising phenomenon 
that dictionary recovery can be facilitated by searching over the space 
of larger (over-realized/parameterized) models. This observation is gene
ral and independent of the specific dictionary learning algorithm used. 
We will demonstrate this observation in practice and provide a theoretic
al analysis of it by tying recovery measures to generalization bounds. W
e will further show that an efficient and provably correct distillation 
mechanism can be employed to recover the correct atoms from the over-rea
lized model\, consistently providing better recovery of the ground-truth
 model.</p><p>We will then switch gears towards the analysis of adversar
ial examples\, focusing on the hypothesis class obtained by combining a 
sparsity-promoting encoder coupled with a linear classifier\, and show a
n interesting interplay between the flexibility and stability of the (su
pervised) representation map and a notion of margin in the feature space
. Leveraging a mild encoder gap assumption in the learned representation
s\, we will provide a bound on the generalization error of the robust ri
sk to L2-bounded adversarial perturbations and a robustness certificate 
for end-to-end classification. We will demonstrate the applicability of 
our analysis by computing certified accuracy on real data\, and comparin
g with other alternatives for certified robustness. This analysis will s
hed light on how to characterize this interplay for more general models.
</p><h4>Bio</h4><p>Jeremias Sulam is an assistant professor in the Biome
dical Engineering Department at JHU\, and a faculty member of the Mathem
atical Institute for Data Science (MINDS) and the Center for Imaging Sci
ence (CIS). He received his PhD in Computer Science from the Technion-Is
rael Institute of Technology in 2018. He is the recipient of the Best Gr
aduates Award of the Argentinean National Academy of Engineering. His re
search interests include machine learning\, signal and image processing
\, representation learning and their application to biomedical sciences.
</p><h4>Host</h4><p>Department of Computer Science</p>
END:VEVENT
END:VCALENDAR