Refreshments are available starting at 10:30 a.m. The seminar will begin at 10:45 a.m.
Abstract
Modern deep learning has achieved remarkable results, but the design of training methodologies largely relies on guess-and-check approaches. Thorough empirical studies of recent massive language models (LMs) are prohibitively expensive, underscoring the need for theoretical insights, but classical machine learning theory struggles to describe modern training paradigms. Sadhika Malladi presents a novel approach to developing prescriptive theoretical results that translate directly into improved training methodologies for LMs. Her research has yielded actionable improvements in model training across the LM development pipeline; for example, her theory motivates the design of MeZO, a fine-tuning algorithm that reduces memory usage by up to 12x and halves the number of GPU hours required. Throughout the talk, to underscore the prescriptive power of her theoretical insights, Malladi will demonstrate the success of these theory-motivated algorithms in empirical settings published after the theory was developed.
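For context, the core idea behind MeZO (Malladi et al., 2023) is a zeroth-order gradient estimate: perturb the parameters along a random direction, measure the loss with two forward passes, and update along that direction, regenerating the perturbation from a shared random seed so it never has to be stored. The sketch below illustrates this in PyTorch under simplifying assumptions; `loss_fn`, `batch`, and the hyperparameter values are placeholders, and the full algorithm includes refinements not shown here.

```python
import torch

def mezo_step(model, loss_fn, batch, eps=1e-3, lr=1e-6):
    """One zeroth-order update in the style of MeZO: two forward
    passes, no backward pass, so memory stays near inference levels."""
    seed = torch.randint(0, 2**31 - 1, (1,)).item()

    def perturb(scale):
        # Regenerate the same random direction z for every parameter
        # from the shared seed instead of storing z in memory.
        torch.manual_seed(seed)
        for p in model.parameters():
            z = torch.randn_like(p)
            p.data.add_(scale * eps * z)

    with torch.no_grad():
        perturb(+1)                      # theta + eps * z
        loss_plus = loss_fn(model, batch)
        perturb(-2)                      # theta - eps * z
        loss_minus = loss_fn(model, batch)
        perturb(+1)                      # restore theta

        # Finite-difference estimate of the gradient projected onto z.
        grad_proj = (loss_plus - loss_minus) / (2 * eps)

        # Update theta <- theta - lr * grad_proj * z, regenerating z.
        torch.manual_seed(seed)
        for p in model.parameters():
            z = torch.randn_like(p)
            p.data.add_(-lr * grad_proj * z)

    return float(loss_plus)
```

Because no activations or optimizer states are cached for backpropagation, the memory footprint is roughly that of inference, which is the source of the savings cited above.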
Speaker Biography
Sadhika Malladi is a final-year PhD student in computer science at Princeton University advised by Sanjeev Arora. Her research advances deep learning theory to capture modern-day training settings, yielding practical training improvements and meaningful insights into model behavior. She has co-organized multiple workshops, including Mathematical and Empirical Understanding of Foundation Models at the 2024 International Conference on Learning Representations and Mathematics for Modern Machine Learning at the 2024 Conference on Neural Information Processing Systems. Malladi was recently named a 2025 Siebel Scholar.