BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Computer Science - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Computer Science
X-ORIGINAL-URL:https://www.cs.jhu.edu
X-WR-CALDESC:Events for Department of Computer Science
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250318T103000
DTEND;TZID=America/New_York:20250318T120000
DTSTAMP:20260425T115050Z
CREATED:20250304T151202Z
LAST-MODIFIED:20250304T151435Z
UID:1986573-1742293800-1742299200@www.cs.jhu.edu
SUMMARY:CS Seminar Series: Towards Real-World Models
DESCRIPTION:Refreshments are available starting at 10:30 a.m. The seminar will begin at 10:45 a.m. \nAbstract\nCurrent artificial intelligence systems can synthesize images\, solve math problems\, and write code. Despite these advances\, they still struggle with basic tasks that humans and animals perform effortlessly. One possible explanation is that humans and animals have a predictive world model that integrates perception\, reasoning\, and planning. Can we build such a model in a bottom-up fashion from sensorimotor data and primarily visual observations? \nIn this talk\, Amir Bar will propose a path toward building such a world model. He will introduce Visual Prompting\, a new paradigm that unifies many computer vision tasks and can readily adapt pre-trained models to novel tasks without fine-tuning. Building on this\, Bar will present an extension to planning using generative world models\, showing that action-conditioned video models can act as simulators of the environment that support real-world decision-making\, with a case study in visual navigation. Finally\, Bar will discuss future directions for improving the capabilities of world models and the challenges that must be addressed to enable their real-world deployment. \nSpeaker Biography\nAmir Bar is a postdoctoral researcher at Meta AI\, working on self-supervised learning with Yann LeCun. Previously\, he completed his PhD at Tel Aviv University and was a visiting PhD student at the University of California\, Berkeley’s Artificial Intelligence Research Lab\, where he was advised by Amir Globerson and Trevor Darrell. Bar began his PhD following the acquisition of the startup Zebra Medical Vision\, where he led the AI team and developed multiple FDA-approved algorithms currently in clinical use worldwide. His work on video models won the Ego4D PNR Temporal Localization Challenge at the 2022 Conference on Computer Vision and Pattern Recognition. \nZoom link >>
URL:https://www.cs.jhu.edu/event/cs-seminar-series-towards-real-world-models/
LOCATION:228 Malone Hall
CATEGORIES:Seminars and Lectures
END:VEVENT
END:VCALENDAR