BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.13.64//NONSGML kigkonsult.se iCalcreator 2.20//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Johns Hopkins Algorithms and Complexity
X-WR-CALDESC:
X-FROM-URL:https://www.cs.jhu.edu/~mdinitz/theory
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20221106T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20220313T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20230312T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-350@www.cs.jhu.edu/~mdinitz/theory
DTSTAMP:20221001T202042Z
CATEGORIES:
CONTACT:
DESCRIPTION:Speaker: Enayat Ullah\nAffiliation: Johns Hopkins University\nT
itle: Machine unlearning via algorithmic stability\nAbstract: We study the
problem of machine unlearning\, and identify a notion of algorithmic stab
ility\, Total Variation (TV) stability\, which\, we argue\, is suitable for
the goal of exact efficient unlearning. For convex risk minimization probl
ems\, we design TV-stable algorithms based on noisy Stochastic Gradient De
scent (SGD). Our key contribution is the design of corresponding efficient
unlearning algorithms\, which are based on constructing a (maximal) coupl
ing of Markov chains for the noisy SGD procedure. To understand the trade-
offs between accuracy and unlearning efficiency\, we give upper and lower
bounds on excess empirical and population risk of TV-stable algorithms for
convex risk minimization. Our techniques generalize to arbitrary non-conv
ex functions\, and our algorithms are differentially private as well.
DTSTART;TZID=America/New_York:20210217T120000
DTEND;TZID=America/New_York:20210217T130000
LOCATION:https://wse.zoom.us/j/91450299380
SEQUENCE:0
SUMMARY:[Theory Seminar] Enayat Ullah
URL:https://www.cs.jhu.edu/~mdinitz/theory/event/theory-seminar-enayat-ulla
h/
X-COST-TYPE:free
END:VEVENT
END:VCALENDAR