BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Computer Science - ECPv5.12.3//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Computer Science
X-ORIGINAL-URL:https://www.cs.jhu.edu
X-WR-CALDESC:Events for Department of Computer Science
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190418T103000
DTEND;TZID=America/New_York:20190418T113000
DTSTAMP:20220112T093754Z
CREATED:20210629T210718Z
LAST-MODIFIED:20210629T210718Z
UID:1962275-1555583400-1555587000@www.cs.jhu.edu
SUMMARY:CS Seminar: Ke Li\, UC Berkeley – “Nearest Neighbour Search and Generative Modelling”
DESCRIPTION:Location\nHackerman Hall B-17\n\nAbstract\nMachine learning is subject to the limits of computation\, and advances in algorithms can open up new possibilities for machine learning. The problem of nearest neighbour search arises commonly in machine learning; unfortunately\, despite over 40 years of research\, prior sublinear algorithms for exact nearest neighbour search suffer from the curse of dimensionality\, that is\, an exponential dependence of query time complexity on either the ambient or the intrinsic dimensionality. In the first part of this talk\, I will present Dynamic Continuous Indexing (DCI)\, a new family of exact randomized algorithms that avoids exponential dependence on both the ambient and the intrinsic dimensionality. This advance enables us to develop a new method for generative modelling\, known as Implicit Maximum Likelihood Estimation (IMLE)\, which I will present in the second part of the talk. IMLE can be shown to be equivalent to maximum likelihood under some conditions and simultaneously overcomes three fundamental issues of generative adversarial nets (GANs)\, namely mode collapse\, vanishing gradients and training instability. I will illustrate why mode collapse happens in GANs and how IMLE overcomes it\, and also demonstrate empirical results on image synthesis. I will close with a brief discussion of another approach I introduced\, known as Learning to Optimize.\n\nBio\nKe Li is a Ph.D. candidate at UC Berkeley advised by Prof. Jitendra Malik. He is interested in a broad range of topics in machine learning\, and also enjoys working on computer vision\, natural language processing and algorithms. He is particularly passionate about tackling long-standing fundamental problems that cannot be solved with a straightforward application of conventional techniques. He received his Hon. B.Sc. in Computer Science from the University of Toronto and is grateful for the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).\n\nHost\nRaman Arora\n\nVideo\nWatch the seminar video.
URL:https://www.cs.jhu.edu/event/cs-seminar-ke-li-uc-berkeley-nearest-neighbour-search-and-generative-modelling/
END:VEVENT
END:VCALENDAR