Refreshments are available starting at 10:30 a.m. The seminar will begin at 10:45 a.m.
Abstract
Efficiency is increasingly tied to quality in machine learning, with more efficient training algorithms leading to more powerful models. However, today’s most popular machine learning models are built on asymptotically inefficient primitives. For example, attention in transformers scales quadratically with input size, while multilayer perceptrons scale quadratically with model dimension. In this talk, Dan Fu discusses his work on improving the efficiency of core primitives in machine learning, with an emphasis on hardware-aware algorithms and long-context applications. First, he focuses on replacing attention with gated state space models (SSMs) and convolutions, which scale sub-quadratically in context length. He describes the H3 (Hungry Hungry Hippos) architecture, a gated SSM architecture that matches transformers in quality up to 3B parameters and achieves 2.4x faster inference. Second, he focuses on developing hardware-aware algorithms for SSMs and convolutions, describing FlashFFTConv, a fast algorithm for computing SSMs and convolutions on GPU by optimizing the fast Fourier transform (FFT). FlashFFTConv yields up to 7x speedup and 5x memory savings, even over vendor solutions from NVIDIA. Third, he briefly touches on how these same techniques can also deliver sub-quadratic scaling in the model dimension, describing Monarch Mixer, which uses a generalization of the FFT to achieve sub-quadratic scaling in both sequence length and model dimension. Throughout the talk, he gives examples of how these ideas are beginning to take hold, with gated SSMs and their variants now leading to state-of-the-art performance in long-context language models, embedding models, and DNA foundation models.
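The primitive underlying the FFT-based convolutions in the abstract is the convolution theorem: a length-n circular convolution can be computed in O(n log n) via the FFT instead of O(n²) directly. The following is a minimal pure-Python sketch of that idea, not FlashFFTConv's GPU implementation; the function names `fft`, `ifft`, and `fft_conv` are illustrative.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]
    return [even[k] + twiddle[k] * odd[k] for k in range(n // 2)] + \
           [even[k] - twiddle[k] * odd[k] for k in range(n // 2)]

def ifft(x):
    """Inverse FFT via the conjugation trick: IFFT(x) = conj(FFT(conj(x))) / n."""
    n = len(x)
    y = fft([v.conjugate() for v in x])
    return [v.conjugate() / n for v in y]

def fft_conv(u, k):
    """Circular convolution of equal-length real sequences.

    By the convolution theorem, conv(u, k) = IFFT(FFT(u) * FFT(k)),
    costing O(n log n) instead of the O(n^2) direct sum.
    """
    U, K = fft(u), fft(k)
    return [v.real for v in ifft([a * b for a, b in zip(U, K)])]
```

For example, `fft_conv([1, 2, 3, 4], [1, 0, 0, 0])` returns the input unchanged (convolution with a unit impulse), and for general filters the result agrees with the direct O(n²) circular-convolution sum. FlashFFTConv's contribution is making this computation efficient on GPU hardware, where the naive FFT has poor utilization.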
Speaker Biography
Dan Fu is a PhD student in the Computer Science Department at Stanford University, where he is co-advised by Christopher Ré and Kayvon Fatahalian. His research interests are at the intersection of systems and machine learning. Recently, Fu has focused on developing algorithms and architectures to make machine learning more efficient, especially for enabling longer-context applications. His research has appeared as oral and spotlight presentations at the Conference on Neural Information Processing Systems, the International Conference on Machine Learning, and the International Conference on Learning Representations; he additionally received the Best Student Paper Runner-Up Award at the Conference on Uncertainty in Artificial Intelligence and has been supported by a National Defense Science and Engineering Graduate Fellowship.