Computer science faculty Mark Dredze, Jason Eisner, Peter Kazanzides, and Tom Lippincott are among the recipients of the latest round of Nexus Awards bestowed by the Johns Hopkins University.
The 38 projects funded this round span every academic division of the university and involve nearly 150 scholars exploring topics that range from bird flu preparedness and improvements to primary care to AI-enhanced health care and global humanitarian food assistance.
This is the third round of funding distributed via the Nexus Awards, a $15 million program designed to support research, teaching, and event programming at the Johns Hopkins University Bloomberg Center, which officially opened in the fall of 2023. Funding for award recipients began July 1.
“Since 2023, faculty from across our One University have harnessed Nexus Awards to bring our Hopkins Bloomberg Center to life as a hub of robust debate and dialogue,” says Ron Daniels, the president of Johns Hopkins University. “We are grateful to sustain this tradition with a third cohort of Nexus Awards recipients, who will continue to mobilize ideas, expertise, and insights to help address society’s most challenging concerns.”
All projects below won awards in the Convening category, which provides funding for the development and execution of an academic or policy-focused conference or conference series.
John C. Malone Professor of Computer Science Mark Dredze is one of the investigators on the project Responsible AI for Health Symposium (RAIHS) at HBC.
Organized by the Carey Business School’s Center for Digital Health and Artificial Intelligence and slated for Fall 2025 in Washington, D.C., this symposium aims to bring together roughly 150 experts across academia, government, health care, and industry to address emerging challenges in the responsible use of AI in health care, focusing on fairness, transparency, governance, safety, and real-world applications. The event will directly address one of the most pressing challenges in modern health care: how to safely and ethically integrate AI technologies into patient care, clinical workflows, and health systems.
“RAIHS brings together diverse voices to discuss one of the most critical issues in AI: how to ensure that rapidly evolving medical AI systems are safe, ethical, and equitable,” says Dredze. “I’m excited to help shape these conversations and highlight engineering innovations that can advance responsible AI practices.”
AI as a Participant in Democratic Deliberation: Exploring the Future of Convenings—on which Jason Eisner, a professor of computer science, is an investigator—proposes an immersive two-day event that experiments with new forms of convening at the intersection of AI, small-group democratic deliberation, and participatory theater.
“There is a preponderance of academic discourse and public debate about the far-reaching impact of AI on democracy. The purpose of this project is to complement that ongoing discussion with a visceral, embodied experience of what deliberation—both in collaboration and in tension with AI agents—might look and feel like in a future, speculative democratic setting,” says co-lead Graham Sack, an assistant research professor at the Berman Institute of Bioethics and a lecturer in the Krieger School’s Film and Media Graduate Program.
Instead of traditional panels and presentations, event participants will engage in live, visceral experiments that challenge conventional notions of debate, consensus-building, and collective decision-making. By embedding AI into these experiences in a variety of novel roles—as interlocutor, judge, mirror, and sense-making assistant—creative practitioners from the Immersive Storytelling and Emerging Technologies concentration at the Film and Media Graduate Program, computer scientists from the Center for Speech and Language Processing, democracy scholars from the SNF Agora Institute, and technology ethicists from the Berman Institute will explore how emerging technologies can reshape the democratic process, not just in theory, but in lived experience.
“Dialogue often shapes our thinking by bringing in new facts and perspectives; we talk with friends or expert advisors when making medical, legal, or financial plans—and even when deciding how to arrange our personal lives. It will soon become normal for AI to join in these discussions,” says Eisner. “AI might also help in the wider discussion of how to arrange our society, which is what this project is about: Humans have to make the final decisions, but talking to AI is beneficial when we have reason to believe that it’s leading to better decisions.”
The project Humans in an Autonomous and Robotic World: Exploring and Enabling Human-Machine Partnerships for Sustained Deep-Space Presence Including Moon and Mars includes Peter Kazanzides, a research professor in the Department of Computer Science.
The proposal calls for a multi-institutional stakeholder workshop that brings together government, academia, the private space industry, and nonprofit and policy organizations to explore requirements and build collaborations for a new space economy built on human-machine partnerships, with the aim of ensuring human performance and accelerating the creation of a sustained government and industry presence in space.
Tom Lippincott, an associate research professor in the Department of Computer Science, is involved in the project Civic Discussions on Reality-Distorting Technologies.
The summit will bring together stakeholders from academia, industry, and policy to discuss the dangers and opportunities of AI influencing the way humans understand the world. There is ample evidence of how vulnerable human worldviews are to manipulation and pathological error, and now that AI models can easily engage under a variety of helpful or disruptive personas, they are capable of both serious harm and substantial benefit.
“My interest, for the computational humanities, is in how models can be designed to reflect human history, culture, and cognition,” says Lippincott. “This discussion is a rich source of ‘in the wild’ observations of these forces interacting and evolving.”
Learn more about the Nexus Awards here.