Author: Jaimie Patterson
Alessa Carbo and her research assistant, Frida.

Third-year undergraduate Alessa Carbo had never set foot on a university campus before she came to Johns Hopkins to study computer science.

Her hometown of Cabo San Lucas, Mexico, had no library, and its high school didn’t offer any computer science classes—but that didn’t stop her from teaching herself about programming and machine learning.

“I had internet access and an intense curiosity about the world,” she says. “By the time I got to Hopkins, I knew I wanted to look into research opportunities, though I still wasn’t sure what that would necessarily look like.”

As a freshman, she attended seminars hosted by the Center for Language and Speech Processing (CLSP) and got involved with the Artificial Intelligence Society at Johns Hopkins, which is how she got her start in the field of sign language processing.

“I felt this was a unique and under-explored area in all this natural language processing research I was getting exposed to,” she says.


Eric Nalisnick and Alessa Carbo. (And Frida.)

In the summer of 2024, Carbo participated in CLSP’s annual Frederick Jelinek Memorial Summer Workshop on Speech and Language Technology, working with a team from the Czech Republic on translating sign language video data to English text by building a custom vision-language AI model. This was also when she met her mentor Eric Nalisnick, an assistant professor of computer science and a member of the Johns Hopkins Data Science and AI Institute.

They began working together the very next semester on using deep learning models for sign language processing—work that was just published in the Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP).

“Getting a paper accepted to EMNLP as a first author is an achievement for a PhD student—let alone a then-sophomore taking a full course load,” says Nalisnick.

Carbo’s research focus has since shifted toward AI safety, leading her to join the highly selective Machine Learning Alignment and Theory Scholars fellowship program this summer. But she still works with Nalisnick on various projects, including another paper she first-authored that will appear at the 14th International Conference on Learning Representations, and she says she’s grateful for the time she spent working on sign language processing.

“It’s an underserved field that sits at the intersection of computer vision, natural language processing, and linguistics,” she says. “I’d encourage anyone interested in those areas to consider diving in.”

The data is messy, the problems aren’t straightforward, and you have to get creative, she says, but sign language processing ended up being her gateway into another world.

“Looking back, it’s surreal how much has changed; this year alone, I’ve been to conferences in China, Austria, the Czech Republic, and all across the U.S.,” she says. “My research has easily been the most meaningful part of my Hopkins experience so far. I am incredibly grateful to my mentors, my family, many other people in my life, Johns Hopkins, and the combination of opportunity and support that has made all of this possible.”