When: Jan 20 2026 @ 10:30 AM
Where: 228 Malone Hall
Categories:
Computer Science Seminar Series

Refreshments are available starting at 10:30 a.m. The seminar will begin at 10:45 a.m.

Abstract

People with disabilities are marginalized by inaccessible social infrastructure and technology, facing challenges in all aspects of their lives. Conventional assistive technologies commonly provide generic solutions for a given disability population and do not account for users’ individual and contextual differences, leading to high abandonment rates. Yuhang Zhao’s research seeks to thoroughly understand the experiences and needs of people with disabilities and to create intelligent assistive technologies that adapt to user contexts, including their abilities, environments, and intents, providing effective, unobtrusive support tailored to individual needs.

In this talk, Zhao will discuss how she leverages state-of-the-art artificial intelligence, augmented reality, and eye-tracking technologies to design and develop context-aware assistive technologies. She will divide user context into external factors (e.g., surrounding environments) and internal factors (e.g., intents, abilities) and present her work on scene-aware, intent-aware, and ability-aware systems, respectively. Specifically, she will discuss: (1) CookAR, a wearable scene-aware AR system that distinguishes and augments the affordance of kitchen tools (e.g., knife blade vs. knife handle) for low-vision users to facilitate safe and efficient interactions; (2) GazePrompt, an eye-tracking-based, intent-aware system that supports low-vision users in reading; and (3) FocusView, a customizable video interface that allows users with ADHD to tailor video presentations to their sensory abilities. Zhao will conclude her talk by highlighting future research directions toward AI-powered context-aware systems for people with disabilities.

Speaker Biography

Yuhang Zhao is an assistant professor in the Department of Computer Sciences at the University of Wisconsin–Madison. Her research interests lie in human-computer interaction (HCI), accessibility, augmented/virtual reality, and AI-powered systems. Zhao leads the madAbility Lab at UW–Madison, which designs and builds intelligent interactive systems to enhance human abilities. She has published frequently at top-tier conferences and journals in HCI and accessibility (e.g., the ACM Conference on Human Factors in Computing Systems, the ACM Symposium on User Interface Software and Technology, the International ACM SIGACCESS Conference on Computers and Accessibility) and has received several U.S. and international patents. Her research has been funded by various agencies, including the NSF, the National Institutes of Health, and the National Institute of Standards and Technology, as well as corporate sponsors such as Meta and Apple. Her work has received multiple Best Paper honorable mention awards and recognition for contributions to diversity and inclusion, and has been covered by various media outlets (e.g., TNW, New Scientist). Beyond paper publications, she disseminates her research outcomes via open-source toolkits and guidelines for broader impact. Zhao received her PhD in information science from Cornell University and her BA and MS in computer science from Tsinghua University.
