BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Computer Science - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Computer Science
X-ORIGINAL-URL:https://www.cs.jhu.edu
X-WR-CALDESC:Events for Department of Computer Science
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251110T140000
DTEND;TZID=America/New_York:20251110T150000
DTSTAMP:20260408T211813Z
CREATED:20251104T180642Z
LAST-MODIFIED:20251104T180642Z
UID:1991641-1762783200-1762786800@www.cs.jhu.edu
SUMMARY:Invited Speaker: Santiago Mazuelas\, Basque Center for Applied Mathematics
DESCRIPTION:Abstract\nThe empirical risk minimization (ERM) approach for supervised learning chooses prediction rules that fit training samples and are “simple” (generalize). This approach has been the workhorse of machine learning methods and has enabled a myriad of applications. However\, ERM methods rely strongly on the specific training samples available and cannot easily address scenarios affected by distribution shifts or corrupted samples. Robust risk minimization is an alternative approach that does not aim to fit training examples and instead chooses prediction rules minimizing the maximum expected loss (risk). This talk presents a learning framework based on the generalized maximum entropy principle that leads to minimax risk classifiers (MRCs). In particular\, MRCs can minimize worst-case expected 0–1 loss while providing performance guarantees\, and are strongly universally consistent using feature mappings given by characteristic kernels. In addition\, the methods presented can provide techniques that are effective in practical situations that defy conventional assumptions\, such as scenarios affected by distribution shifts and corrupted samples. \nSpeaker Biography\nSantiago Mazuelas received a PhD in mathematics and a PhD in telecommunications engineering from the University of Valladolid\, Spain\, in 2009 and 2011\, respectively. He is currently an Ikerbasque Research Associate Professor at the Basque Center for Applied Mathematics (BCAM). Prior to joining BCAM\, he was a staff engineer at Qualcomm Corporate Research and Development from 2014 to 2017. He previously worked from 2009 to 2014 as a postdoctoral fellow and associate in the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology. Mazuelas is currently associate editor-in-chief for IEEE Transactions on Mobile Computing and associate editor for IEEE Transactions on Wireless Communications. He received an IEEE Communications Society (ComSoc) Fred W. 
 Ellersick Prize in 2012\, an Early Achievement Award from the IEEE ComSoc in 2018\, and the Spanish Society of Statistics and Operations Research–BBVA Foundation Best Contribution in Statistics and Operational Research Applied to Data Science and Big Data Awards in 2022 and 2025. \nZoom link >>
URL:https://www.cs.jhu.edu/event/invited-speaker-santiago-mazuelas-basque-center-for-applied-mathematics/
LOCATION:228 Malone Hall
CATEGORIES:Seminars and Lectures
END:VEVENT
END:VCALENDAR