Refreshments are available starting at 10:30 a.m. The seminar will begin at 10:45 a.m.
Abstract
As predictive and generative models are increasingly deployed in high-stakes domains including health care, law, policy, and finance, it is important to ensure that relevant stakeholders understand the behaviors and outputs of these models so that they can determine if and when to intervene. To this end, several techniques have been proposed in recent literature to explain these models; in addition, regulatory frameworks introduced in recent years (e.g., the General Data Protection Regulation, the California Consumer Privacy Act) emphasize the importance of enforcing the key principle of a “right to explanation,” ensuring that individuals who are adversely impacted by algorithmic outcomes are provided with an actionable explanation. In this talk, Himabindu Lakkaraju will discuss the gaps between such regulations and state-of-the-art technical solutions for the explainability of predictive and generative models. She will then present some of her latest research that attempts to close these gaps. She will conclude by discussing the broader challenges that arise in enforcing the right to explanation for large language models and other large generative models.
Speaker Biography
Himabindu “Hima” Lakkaraju is an assistant professor at Harvard University focusing on the algorithmic, theoretical, and applied aspects of the explainability, fairness, and robustness of machine learning models. Lakkaraju has been named one of the world’s top innovators under 35 by both MIT Tech Review and Vanity Fair. She has received several prestigious awards, including an NSF CAREER Award, an AI2050 Early Career Fellowship from Schmidt Futures, and multiple Best Paper Awards at top-tier ML conferences, as well as grants from the NSF, Google, Amazon, J.P. Morgan, and Bayer. Lakkaraju has given keynote talks at top ML conferences and associated workshops, including the Conference on Information and Knowledge Management, the International Conference on Machine Learning, the Conference on Neural Information Processing Systems, the International Conference on Learning Representations, the AAAI Conference on Artificial Intelligence, and the Conference on Computer Vision and Pattern Recognition; her research has also been covered by popular media outlets including The New York Times, MIT Tech Review, TIME, and Forbes. More recently, she co-founded the Trustworthy ML Initiative to provide easy access to resources on trustworthy ML and to build a community of researchers and practitioners working on the topic.