Refreshments are available starting at noon. The seminar will begin at 12:15 p.m.
Abstract
Modern machine learning has transformed problem-solving across diverse fields, yet it struggles with tasks that require structure, logic, and planning, domains where traditional symbolic reasoning excels. A promising solution to these challenges lies in the neurosymbolic paradigm, which bridges the gap between perception and reasoning. In this talk, Jiani Huang will demonstrate the potential of this approach by applying it to build LASER, a state-of-the-art foundation model for video understanding. LASER leverages the vast availability of video captions as a source of weak supervisory signals for learning fine-grained video semantics. The key insight underlying LASER is a symbolic module for aligning video-caption pairs, wherein captions are formulated in a domain-specific language based on finite linear temporal logic and videos are structured as spatiotemporal scene graphs. This alignment process is end-to-end differentiable, enabled by a symbolic checker implemented in Scallop, a programming language tailored to the neurosymbolic paradigm. The resulting approach makes it possible to train a video understanding model efficiently without fine-grained video annotations. Huang will conclude by discussing the broader potential of the neurosymbolic paradigm in advancing safety-critical, verifiable, and real-world applications.
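To give a flavor of the idea, the sketch below shows a plain Boolean evaluator for finite-trace temporal-logic specifications over per-frame scene-graph facts. It is purely illustrative: the operator set, fact encoding, and example caption are assumptions for this sketch, not LASER's actual domain-specific language, and LASER's Scallop-based checker is differentiable rather than Boolean.

```python
# Toy sketch (not LASER or Scallop): checking a caption-like temporal
# specification against a sequence of per-frame scene-graph facts.
# Each frame is a set of (subject, relation, object) triples.

def holds(spec, trace, t=0):
    """Evaluate a finite-trace LTL-style spec at position t of a trace."""
    op = spec[0]
    if op == "atom":            # atomic scene-graph fact at frame t
        return spec[1] in trace[t]
    if op == "and":
        return holds(spec[1], trace, t) and holds(spec[2], trace, t)
    if op == "next":            # X: holds in the next frame
        return t + 1 < len(trace) and holds(spec[1], trace, t + 1)
    if op == "eventually":      # F: holds at some frame from t onward
        return any(holds(spec[1], trace, u) for u in range(t, len(trace)))
    if op == "until":           # phi U psi over the finite trace
        return any(
            holds(spec[2], trace, u)
            and all(holds(spec[1], trace, v) for v in range(t, u))
            for u in range(t, len(trace))
        )
    raise ValueError(f"unknown operator: {op}")

# Hypothetical caption: "the person holds the cup until it rests on the table"
spec = ("until",
        ("atom", ("person", "holds", "cup")),
        ("atom", ("cup", "on", "table")))

trace = [
    {("person", "holds", "cup")},
    {("person", "holds", "cup")},
    {("cup", "on", "table")},
]
print(holds(spec, trace))  # True: the caption matches this trace
```

In LASER, the analogous check is compiled by Scallop into a differentiable computation, so alignment scores between captions and scene graphs can backpropagate into the perception model; this Boolean version only conveys the semantics being aligned.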
Speaker Biography
Jiani Huang is a PhD candidate in computer and information science at the University of Pennsylvania. Her research focuses on neurosymbolic approaches, specifically (1) the design and implementation of Scallop, a neurosymbolic programming language, and (2) its applications across diverse fields, including natural language processing, computer vision, and medicine. Through the neurosymbolic paradigm, Huang aims to develop AI solutions that are accurate, explainable, and efficient, addressing both theoretical challenges and real-world needs. Huang was recognized as a 2023 Rising Star in Electrical Engineering and Computer Science and was a visiting scholar at Meta AI from 2022 to 2023. Her work has been published in top conferences, including the ACM SIGPLAN Conference on Programming Language Design and Implementation, the Conference on Neural Information Processing Systems, the International Conference on Machine Learning, the International Conference on Learning Representations, the AAAI Conference on Artificial Intelligence, and the Annual Meeting of the Association for Computational Linguistics. Additionally, she coauthored a book on Scallop, which was published in the Foundations and Trends in Programming Languages series in 2024.