Johns Hopkins Algorithms and Complexity
https://www.cs.jhu.edu/~mdinitz/theory
Speaker: Xue Chen
Affiliation: Northwestern University
Title: Active Regression via Linear-Sample Sparsification
Abstract:
We present an approach that improves the sample complexity for a variety of curve fitting problems, including active learning for linear regression, polynomial regression, and continuous sparse Fourier transforms. In the active linear regression problem, one would like to estimate the least squares solution \beta^* minimizing ||X \beta - y||_2 given the entire unlabeled dataset X \in \R^{n \times d} but only observing a small number of labels y_i. We show that O(d/\eps) labels suffice to find an \eps-approximation \wt{\beta} to \beta^*:
E[||X \wt{\beta} - X \beta^*||_2^2] \leq \eps ||X \beta^* - y||_2^2.
This improves on the best previous result of O(d \log d + d/\eps) from leverage score sampling. We also present results for the inductive setting, showing when \wt{\beta} will generalize to fresh samples; these apply to continuous settings such as polynomial regression. Finally, we show how the techniques yield improved results for the non-linear sparse Fourier transform setting.
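The baseline the abstract improves on, leverage score sampling, is itself easy to illustrate: sample rows of X with probability proportional to their leverage scores, query only those labels, and solve a reweighted least squares problem. A minimal numpy sketch on synthetic data (problem sizes, the sampling constant, and the noise model are illustrative assumptions, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic instance: n unlabeled points, d features (illustrative sizes).
n, d, eps = 2000, 5, 0.25
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + rng.standard_normal(n)  # noisy labels

# Full least-squares solution beta^* (for comparison only; uses all labels).
beta_star, *_ = np.linalg.lstsq(X, y, rcond=None)

# Leverage scores: squared row norms of an orthonormal basis for col(X).
Q, _ = np.linalg.qr(X)
lev = np.sum(Q**2, axis=1)      # nonnegative, sums to d
p = lev / lev.sum()             # sampling distribution over rows

# Query roughly O(d/eps) labels sampled by leverage score, then solve
# an importance-weighted least squares problem on just those rows.
m = int(4 * d / eps)            # the constant 4 is an illustrative choice
idx = rng.choice(n, size=m, p=p)
w = 1.0 / np.sqrt(m * p[idx])   # importance weights
beta_hat, *_ = np.linalg.lstsq(w[:, None] * X[idx], w * y[idx], rcond=None)

# The guarantee compares ||X beta_hat - X beta^*||^2 against the
# optimal residual ||X beta^* - y||^2, scaled by eps (in expectation).
err = np.linalg.norm(X @ (beta_hat - beta_star)) ** 2
opt = np.linalg.norm(X @ beta_star - y) ** 2
print(err, opt)
```

Leverage score sampling of this kind needs about d log d + d/eps labels to make err at most eps times opt; the talk's contribution is a sparsification argument bringing this down to O(d/eps).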
Bio: Xue Chen is broadly interested in randomized algorithms and the use of randomness in computation. Specific areas include the Fourier transform, learning theory and optimization, and pseudorandomness. He obtained his Ph.D. at the University of Texas at Austin under the supervision of David Zuckerman. Currently, he is a postdoctoral fellow at Northwestern University.
February 27, 2019, 12:00 PM - 1:00 PM
[Theory Seminar] Xue Chen
free