BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.13.64//NONSGML kigkonsult.se iCalcreator 2.20//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Johns Hopkins Algorithms and Complexity
X-WR-CALDESC:
X-FROM-URL:https://www.cs.jhu.edu/~mdinitz/theory
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20211107T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20220313T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-272@www.cs.jhu.edu/~mdinitz/theory
DTSTAMP:20220124T043723Z
CATEGORIES:
CONTACT:
DESCRIPTION:Speaker: Xue Chen\nAffiliation: Northwestern University\nTitle
 : Active Regression via Linear-Sample Sparsification\nAbstract:\nWe prese
 nt an approach that improves the sample complexity for a variety of curve
  fitting problems\, including active learning for linear regression\, pol
 ynomial regression\, and continuous sparse Fourier transforms. In the act
 ive linear regression problem\, one would like to estimate the least squa
 res solution \\beta^* minimizing ||X \\beta - y||_2 given the entire unla
 beled dataset X \\in \\R^{n \\times d} while observing only a small numbe
 r of labels y_i. We show that O(d/\\eps) labels suffice to find an \\eps-
 approximation \\wt{\\beta} to \\beta^*:\n\nE[||X \\wt{\\beta} - X \\beta^
 *||_2^2] \\leq \\eps ||X \\beta^* - y||_2^2.\n\nThis improves on the best
  previous result of O(d \\log d + d/\\eps) from leverage score sampling.
 We also present results for the inductive setting\, showing when \\wt{\\b
 eta} generalizes to fresh samples\; these apply to continuous settings su
 ch as polynomial regression. Finally\, we show how the techniques yield i
 mproved results for the non-linear sparse Fourier transform setting.\n\nB
 io: Xue Chen is broadly interested in randomized algorithms and the use o
 f randomness in computation. Specific areas include the Fourier transform
 \, learning theory and optimization\, and pseudorandomness. He obtained h
 is Ph.D. at the University of Texas at Austin under the supervision of Da
 vid Zuckerman. Currently\, he is a postdoctoral fellow at Northwestern Un
 iversity.
DTSTART;TZID=America/New_York:20190227T120000
DTEND;TZID=America/New_York:20190227T130000
SEQUENCE:0
SUMMARY:[Theory Seminar] Xue Chen
URL:https://www.cs.jhu.edu/~mdinitz/theory/event/theory-seminar-xue-chen/
X-COST-TYPE:free
END:VEVENT
END:VCALENDAR