Field received her PhD from the Language Technologies Institute at Carnegie Mellon University. Prior to joining Johns Hopkins, she was a postdoctoral researcher in the Stanford NLP Group and the Stanford Data Science Institute.
Tell us a little bit about your research.
My work focuses on natural language processing, and within that area I focus on social domains and ethics. I think of my research agenda as comprising three broad, intersecting directions: 1) developing NLP methods to identify and understand social issues, like stereotyping and propaganda; 2) developing NLP methods for social good applications, with a focus on public service domains; and 3) reflecting these investigations back on my own field by examining the potential harms of NLP and artificial intelligence.
These directions often overlap. For example, if we want NLP to be for “social good,” it needs to be ethical. If we want to identify and mitigate “bias” in our models, we need social science perspectives on what “bias” is. I mostly work with text data, but I also work a little with speech data.
Tell us about a project you are excited about.
One of my ongoing projects focuses on investigating the risks and opportunities of NLP in child protective services. This is a long-running project that has involved collaborations with a child protective services agency. Much of our work has focused on building an understanding of the data and investigating where NLP technology can increase harm: What existing data biases are models liable to amplify? Where might this technology be increasing power imbalances? Can we improve privacy preservation in the research and development process?
The findings and technology that we develop in this space have the potential to deepen our understanding of how to deploy NLP, and AI more generally, in high-stakes domains. My lab is continuing to explore similar questions in other domains, as well.
Why this? What drives your passion for your field?
After I finished my undergraduate degree, I worked for a few years as a software developer before pursuing graduate school. My main motivation in going back to school was to be able to work on projects that have a positive impact on society. This same motivation continues to drive my research agenda.
This is a particularly interesting time to be working in NLP. The field has been changing rapidly, and there is intense interest in applying this technology in many settings, but we’re still just starting to understand the possible risks and harms of NLP models.
What classes are you teaching?
In the fall, I taught a graduate seminar-style course, AI Ethics and Social Impact. In this course, we read recent research in the field of AI ethics and discussed how we could build on it or apply its lessons to our own work. This spring, I am teaching a course on NLP for computational social science, which focuses on how to uncover and analyze social phenomena that manifest in textual data.
Why are you excited to be joining the Johns Hopkins Department of Computer Science?
JHU has an amazing community of researchers working on challenging topics in NLP and AI; I’m excited to be able to collaborate with the talented students and faculty here. I’m also excited about the support and opportunities for interdisciplinary research and being able to work with researchers both inside and outside the department.