

Title: Dense Error Correction via L1 Minimization
Abstract:
In this talk, we discuss the problem of recovering a non-negative sparse signal x in R^n from highly corrupted linear measurements y = Ax + e in R^m, where e is an unknown error vector whose nonzero entries may be unbounded. Motivated by an observation from face recognition in computer vision, we prove that for highly correlated (and possibly overcomplete) dictionaries A, any non-negative, sufficiently sparse signal x can be recovered by solving an L1-minimization problem: min ||x||_1 + ||e||_1 subject to y = Ax + e. More precisely, if the fraction of corrupted observations is bounded away from one and the support of x grows sublinearly in the dimension m of the observation, then as m goes to infinity, this L1-minimization succeeds for all signals x and almost all sign-and-support patterns of e. This result suggests that accurate recovery of sparse signals is possible, and computationally feasible, even when nearly 100% of the observations are corrupted. The proof relies on a careful characterization of the faces of the convex polytope spanned jointly by the standard cross polytope and a set of i.i.d. Gaussian vectors with nonzero mean and small variance, a geometry we call the "cross-and-bouquet" model. Simulations and experimental results corroborate our findings and suggest intriguing implications and extensions of our results.
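
The following is a minimal sketch (not the authors' code) of the kind of simulation the abstract alludes to: it draws a "cross-and-bouquet" dictionary A whose columns are i.i.d. Gaussian vectors with a shared nonzero mean and small variance, corrupts a fraction of the observations with large-magnitude errors, and solves the stated L1-minimization with cvxpy. The non-negativity constraint x >= 0 reflects the abstract's restriction to non-negative signals; all parameter values (m, n, sparsity k, corruption fraction rho, variance scale) are illustrative assumptions, not values from the talk.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

m, n = 200, 400          # observation dimension, dictionary size (overcomplete)
k = 5                    # sparsity of x (small relative to m)
rho = 0.6                # fraction of corrupted observations (illustrative)

# "Bouquet": tightly clustered unit-norm columns -- i.i.d. Gaussian vectors
# with a common nonzero mean and small variance, then normalized.
mu = rng.standard_normal(m)
mu /= np.linalg.norm(mu)
A = mu[:, None] + (0.1 / np.sqrt(m)) * rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)

# Non-negative sparse signal x and a dense error e with unbounded-magnitude
# entries on a random support of size rho * m.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
e_true = np.zeros(m)
corrupt = rng.choice(m, int(rho * m), replace=False)
e_true[corrupt] = rng.uniform(-10.0, 10.0, corrupt.size)

y = A @ x_true + e_true

# Solve: min ||x||_1 + ||e||_1  subject to  y = Ax + e,  x >= 0.
x = cp.Variable(n)
e = cp.Variable(m)
prob = cp.Problem(cp.Minimize(cp.norm1(x) + cp.norm1(e)),
                  [A @ x + e == y, x >= 0])
prob.solve()

print("relative recovery error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```

Whether recovery succeeds at a given corruption level depends on m, k, and rho; the asymptotic claim in the abstract suggests that, for fixed sublinear sparsity, pushing rho toward 1 while growing m should still yield exact recovery, which such a sweep can probe empirically.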