With a little help from AI
The Hub
The U.S. economy has been rocked by the coronavirus pandemic, with stock values crashing between late February and late March of this year, when states began issuing stay-at-home orders. One intrepid digital company saw its stock surge, however—Zoom Video Communications, Inc., whose stock price more than doubled while the economy around it crashed.
But why has Zoom become the go-to platform during the pandemic, when there are dozens of other video conferencing services out there?
The answer lies in Zoom’s intuitive interface, says Mathias Unberath, an assistant professor of computer science at the Johns Hopkins Whiting School of Engineering and a member of the Malone Center for Engineering in Healthcare.
“Whether someone is hosting a work meeting or a baby shower, the app is easy to use,” he says. “Products such as Zoom, the iPhone, and the Johns Hopkins COVID-19 tracker (a user-friendly, visually engaging dashboard that tracks global cases and trends in real time) attract millions of users because they offer a pleasant and smooth user experience.”
To prepare future engineers to design similar technologies, Unberath developed a new course on human-centered design for artificial intelligence systems. Offered for the first time last spring and slated again for this fall, “Machine Learning: Artificial Intelligence System Design and Development” teaches students to design, develop, and train an AI system that could benefit someone’s life or help solve a real problem.
“Artificial intelligence is maturing, and there are great opportunities to integrate this technology into everyday life,” says Unberath, who used his expertise to help develop a more accurate outbreak model, released in late April, that predicts new pandemic hotspots. “In this class, students must design their algorithms with the end user’s needs and wants in mind. What true problems are people facing, and how do people want to interact with technology?”
When designing technology for humans, it’s important for engineers to understand how their systems could potentially fail the user, Unberath says. That is why, as the semester progressed and students built increasingly complicated prototypes, they adjusted their system designs based on feedback from fellow students acting as users. Students also participated in discussions on the issues of bias, fairness, and ethics that arise when developing artificial intelligence that simplifies or automates potentially sensitive tasks.
Read more at The Hub.