Daniel Khashabi

Khashabi obtained a PhD from the University of Pennsylvania and a BSc from Amirkabir University of Technology (Tehran Polytechnic). Before joining Johns Hopkins, he was a postdoctoral fellow at the Allen Institute for AI in Seattle.

Tell us a little bit about your research.

The goal of my research is “intelligence amplification”: building computer models that augment the human experience across a variety of tasks. This contrasts with the dominant view of “artificial intelligence,” which aims to replicate humans’ cognitive abilities. Ultimately, I want to see AI systems work in complement with humans – we aim to empower humans to achieve more, not replace them.

Licklider’s pioneering 1960 piece, “Man-Computer Symbiosis,” is an excellent illustration of this. It paints a picture of a future in which intelligent machines and humans work in a mutually interdependent fashion, complementing each other’s strengths.

A recent example is GitHub’s Copilot, a tool that helps programmers write code faster. It reads the context of the existing code, including the comments, and suggests likely completions. Studies of human users have found that such tools increase programmer productivity on everyday programming tasks. I personally use this tool whenever I get a chance to code, and I find it extremely helpful.
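To make the comment-driven workflow concrete, here is a hedged sketch in Python. The prompt and the suggested function body below are invented for illustration; they are not actual Copilot output:

```python
# A developer writes a comment and a function signature:
#
#   def parse_iso_date(s):
#       """Parse a 'YYYY-MM-DD' string into a date object."""
#
# Conditioned on that surrounding context, a completion tool might
# then suggest a body like the following:
from datetime import date

def parse_iso_date(s):
    """Parse a 'YYYY-MM-DD' string into a date object."""
    year, month, day = map(int, s.split("-"))
    return date(year, month, day)

print(parse_iso_date("2023-09-01"))  # prints 2023-09-01
```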

So, how do we accelerate the vision of turning models into effective “amplifiers” of human experience? Here are three broad and interrelated problems we can investigate: (a) learning rich and generalizable representations of language and physical environments, (b) developing models that can rationalize or explain their decisions to human users, and (c) enabling models to continually improve via their interactions with users and the world.

Tell us about a project you are excited about.

Broadly, I am excited about developing computational models that can empower humans by translating language commands into appropriate actions in their environment. Several of my recent works seek to enable models to align with human language commands.

One important future application could be helping users with mobility impairments, or those who have difficulty operating computers with a keyboard or mouse. According to Pew Research, more than 24 million people in the USA live with such difficulties, and they are less likely to own a digital device, which effectively isolates them from many technological ecosystems. This can change if we build models that help people browse the web seamlessly through voice commands alone. The idea that our research could make technology accessible to millions of people who wouldn’t otherwise have access is an exciting avenue for our research.
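As a hedged sketch of what mapping language commands to actions might look like in its simplest form, here is a toy rule-based mapper in Python. Everything here (the patterns, the action names, the function) is invented for illustration; real systems in this space use learned models rather than hand-written rules:

```python
import re

# Hypothetical command patterns; a real system would use a learned
# model instead of hand-written rules like these.
PATTERNS = [
    (re.compile(r"click (?:the )?(?P<target>.+)"), "click"),
    (re.compile(r"type (?P<text>.+) into (?:the )?(?P<target>.+)"), "type"),
    (re.compile(r"scroll (?P<direction>up|down)"), "scroll"),
]

def command_to_action(utterance: str) -> dict:
    """Map a spoken command (already transcribed) to a browser action."""
    for pattern, action in PATTERNS:
        match = pattern.fullmatch(utterance.strip().lower())
        if match:
            return {"action": action, **match.groupdict()}
    return {"action": "unknown", "utterance": utterance}

print(command_to_action("Click the search button"))
# {'action': 'click', 'target': 'search button'}
print(command_to_action("Type jhu cs into the search box"))
# {'action': 'type', 'text': 'jhu cs', 'target': 'search box'}
```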

Why this? What drives your passion for your field?

Random coincidence! I’ve always been an avid soccer fan. Soon after I started learning C++ programming in high school, I joined a few friends to build computer simulations of soccer-playing robots. It took us several months of spaghetti coding to realize that building “intelligent” agents that can coordinate to accomplish a given goal is a non-trivial problem. That led me to study the scientific research behind this problem, a subfield of AI that focuses on multi-agent systems. Over time, that interest grew into the broader field of AI and machine learning we know today.

In graduate school, I became deeply interested in studying AI in conjunction with human language, our primary means of communication and coordination. In my mind, this is very similar to a soccer game: instead of passing balls, we are passing language messages back and forth.

There is something truly magical about human communication. Think about the last time you tried bouncing ideas off a friend or colleague. Those ideas got better by building on the feedback you received, right? The coming decade of natural language processing research will be about enabling this kind of communication.

On a personal level, I wish to build an assistant for myself that could help me use my time more efficiently. A big chunk of my time is spent replying to emails, coordinating meetings, managing deadlines – many repetitive tasks that can be automated. The day that I build this assistant … I will go back to my office to work more efficiently!

What classes are you teaching?

This semester I’m teaching a graduate course on self-supervised algorithms, which explores cutting-edge developments in the field and their applications. Many of these algorithmic advances haven’t yet made their way into products, so over the next few years we are going to see them turn into applications that will change our lives.

Next semester I’m teaching an undergraduate version of this course where the students will gain a thorough hands-on introduction to self-supervised learning techniques for NLP applications through a variety of assignments and projects. My hope is that students in this course will learn the necessary skills to design, implement, and understand their own self-supervised neural network models.
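For readers unfamiliar with the paradigm, here is a minimal, hedged sketch of the core idea behind self-supervision: masked-token prediction, where the training labels come from the data itself. The TinyMLM model and all hyperparameters below are invented for this illustration and are far simpler than anything a real course would cover:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A toy corpus of random "sentences"; token 0 is reserved as [MASK].
vocab_size, mask_id, seq_len = 50, 0, 8
data = torch.randint(1, vocab_size, (256, seq_len))

class TinyMLM(nn.Module):
    """Bag-of-context masked-token predictor (no attention, for brevity)."""
    def __init__(self, vocab, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.out = nn.Linear(dim, vocab)

    def forward(self, tokens):
        # Mean-pool each sequence's embeddings, then predict every position.
        ctx = self.embed(tokens).mean(dim=1, keepdim=True)   # (B, 1, D)
        return self.out(ctx.expand(-1, tokens.size(1), -1))  # (B, T, V)

model = TinyMLM(vocab_size)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(100):
    mask = torch.rand(data.shape) < 0.15          # hide ~15% of tokens
    corrupted = data.masked_fill(mask, mask_id)
    logits = model(corrupted)
    # The supervision signal is the data itself: recover the hidden tokens.
    loss = F.cross_entropy(logits[mask], data[mask])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design point is that no human annotations are needed: the model manufactures its own supervision by hiding parts of the input and learning to reconstruct them.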

How excited are you to join the Johns Hopkins Department of Computer Science?

Johns Hopkins has one of the strongest groups of people working on language technologies. In many ways, the diversity of problems and ideas here is what attracted me to the university.

To me, the best part of academia is that you have the freedom to explore any problem you care about. I’m surrounded by so many talented researchers, so new ideas and problems pop up all the time, even in casual conversations. There’s a great sense of adventure here — no two days are the same.

Besides your work, what are some of your other hobbies and passions?

I enjoy listening to podcasts and audiobooks on my commute. These days I’m listening to “From Bacteria to Bach and Back” by Daniel C. Dennett. One of my favorite hobbies is swing dancing, and I am hoping to get back into it once I settle into life in Baltimore.