In 2025, artificial intelligence became further integrated into everyday life, from self-driving cars to medical devices that improve disease diagnosis. As companies race to refine their models and investments in AI continue to climb, what comes next could have a major impact on the U.S. economy, national security, and innovation.
Here, Mark Dredze, the director of the Johns Hopkins Data Science and AI Institute, outlines what may lie ahead for the field in 2026 and what policymakers should keep in mind.
Where do you think we’ll see the most advances in AI in 2026?
I think we’ll see more of a shift towards applications of AI technology. We started to see this last year, and I think it will accelerate. What has characterized the early days of AI, especially the chatbot world, has been general-purpose models. People can try them out and say, “Oh, this seems like it’d really be helpful for whatever I work on,” but the gap between “could be helpful” and “actually making a meaningful impact” is significant. As the technologies have matured, there’s more interest in actually building these out for specific applications, so I think that’s where you’ll see work over the next year.
I think you’ll also see more focused uses of AI instead of an AI-for-everything approach. Some developments in the technology itself also support this. There is a lot of work on agents and agentic AI—that was the big, big thing last year—and agents lend themselves well to being customized and specialized. You see the same thing in developing small, application-focused models. The flagship versions of models like GPT-5 or Gemini are large, fairly expensive models, and only a few companies can really compete in that space. But as you build them out for a specific application, you don’t need the model to do everything. So I think there will be more attention on building smaller, focused models that are more bespoke to specific tasks, moving beyond “This could be useful for what I do” to “This is really useful for the specific things that I do.”
What do you hope to see come to fruition over the next year for AI and autonomy, whether in the national security space or for civilian life?
Autonomy is an area where government is more involved than elsewhere, so there’s a lot of engagement across the domains where autonomy will play an important role—especially national security, but also industry more broadly. 2025 was a year of review in government of how to move forward, and I think 2026 is the year we will start to see a lot of things happen.
Government engagement doesn’t necessarily mean regulating or passing laws, though. There are many ways the government can engage, as we’ve seen with the recent launch of Project Genesis, which asks the Department of Energy to come up with a strategy for industry-government partnerships. Government is moving forward to define this space, and I think that autonomy will be a major focus.
What I’d love to see is that engagement include academics; figuring out how academics, industry, and government can work together is really key. Academics play a unique role in that we are not a company trying to make money, but we are a trusted third party that can weigh in. We understand the issues the government is facing. We understand the issues that companies face. But we represent society at large, so we’re trying to develop technologies, and to ensure they are applied, in ways that benefit everyone. I’d love to see a partnership emerge where academics play that trusted role in advising government about where the real opportunities are for developing effective and safe technologies, while also making sure that those relationships strengthen and draw on the industry we have in the U.S.
What aspect of AI do you think we’ll move away from in the coming year?
In the ’90s, when we were thinking about how computers were getting faster, everything was about how many megahertz the next chip would be. When we produced the 1-gigahertz chip, everyone celebrated it as a landmark for computing. When we passed the 2-gigahertz barrier, there was some celebration, but everyone understood that it was a little bit meaningless because the game had changed. I have no idea how many gigahertz my current computer is. I think we’re turning that corner on AI. No one cares anymore about how many parameters a model uses or those kinds of benchmarks. We care much more now about what the technology is actually used for and what it is designed for. We’re moving away from this race of getting to GPT-3, -4, -5.
There was also a lot of initial excitement about building very lightweight technologies on top of these models. Those were easy things to do, and I think we will move away from them, too. I think the apps we use and the companies that come to market will focus less on this sort of low-hanging fruit of straightforward applications of the core technologies and much more on sophisticated uses of the technology, where domain and application expertise play a critical role.
What are the main things you think policymakers should keep in mind as they consider the funding, ethics and governance, and deployment of AI in 2026?
We need to think about how to regulate applications of AI. We have lots of experience regulating technologies for medicine, for example, and we need to figure out how those rules can be adapted to the nature of AI. This, critically, includes accountability: What will happen when things inevitably go wrong with an AI system?
Secondly, we need to figure out how to spread AI throughout the federal government. It’s like computing: Everyone in the government uses a computer. There’s a basic level of training everyone has. AI is moving in that direction. So how do we enable the average public servant to use AI in their work? What does that mean, what do we want to enable, and what limits do we want to place?
Third, we need to recruit top AI talent into government to change what government can do. This means asking how government can do what it does better—faster, more efficiently, more accurately—but, critically, also what it can do now that it never could before. The administration’s recently announced U.S. Tech Force is a great way of building a cohort of AI talent in government. Universities can play a unique role in training, preparing, and supporting that cohort.
Lastly, policymakers will absolutely have to think about energy and reckon with public attention on the expansion of data centers and rising energy costs in the coming year. What does the future of energy look like in the U.S.? This issue first came up around sustainable energy, then around electric cars. Now it’s coming up around AI. I don’t think it’s a problem caused only by AI, but it is a problem that AI is highlighting.
This article originally appeared on the Johns Hopkins Bloomberg Center website.