Artificial intelligence has begun to impact healthcare in areas including electronic health records, medical imaging, and genomics. But one aspect of healthcare that has so far been largely left behind is the physical environment in which care is delivered: hospitals and assisted living facilities, among others. In this talk I will discuss my work on endowing hospitals with ambient intelligence, using computer vision-based human activity understanding in the hospital environment to assist clinicians with complex care. I will first present an implementation of an AI-Assisted Hospital in which we have equipped units at two partner hospitals with visual sensors. I will then discuss my work on human activity understanding, a core problem in computer vision. I will present deep learning methods for dense and detailed recognition of activities and for efficient action detection, both important requirements for ambient intelligence. I will discuss these in the context of two clinical applications: hand hygiene compliance and automated documentation of intensive care unit activities. Finally, I will present work and future directions for integrating this new source of healthcare data into the broader clinical data ecosystem, toward full realization of an AI-Assisted Hospital.
Serena Yeung is a PhD candidate at Stanford University in the Artificial Intelligence Lab, advised by Fei-Fei Li and Arnold Milstein. Her research focuses on deep learning and computer vision algorithms for video understanding and human activity recognition. More broadly, she is passionate about using these algorithms to equip healthcare spaces with ambient intelligence, in particular an AI-Assisted Hospital. Serena is the lead graduate student in the Stanford Partnership in AI-Assisted Care (PAC), a collaboration between the Stanford School of Engineering and School of Medicine. She interned at Facebook AI Research in 2016 and Google Cloud AI in 2017. She was also a co-instructor for Stanford's CS231n course, Convolutional Neural Networks for Visual Recognition, in 2017.