Generative AI and National Security: A Conversation with Secretary Napolitano

Photo credit: Elena Zhukova
Over the next few months I'll be sitting down with luminaries in different fields to get a view of how AI and generative AI are impacting, and are predicted to impact, different sectors of our lives. I am delighted to start this series with a conversation with Secretary Janet Napolitano on the topic of national security.
This interview has been edited for length and clarity.
HF: Before walking over to your office, I asked ChatGPT to "tell me about Janet Napolitano." This was the response:
Janet Napolitano is a highly accomplished American public servant, lawyer, and academic administrator. Here are some key points about her:
U.S. Secretary of Homeland Security (2009–2013): Appointed by President Obama, she was the first woman to hold this position and oversaw major initiatives in aviation security (like TSA PreCheck®), border security, and cybersecurity.
Governor of Arizona (2003–2009): As Arizona’s 21st governor, she was the first woman elected to the position and focused on improving education, the economy, and the environment.
Attorney General and U.S. Attorney for Arizona: Earlier in her career, she served as the state’s Attorney General and as the U.S. Attorney for the District of Arizona.
President of the University of California (2013–2020): In this role, she was a staunch advocate for affordable education and led initiatives in climate action and student support.
(Secretary Napolitano points out that she was the third woman governor of Arizona. The first two were secretaries of state who succeeded to office when the governor had to vacate, a technicality that ChatGPT missed.)
HF: How is it doing so far?
JN: Sounds okay, but it makes it sound like I can't keep a job.
HF: (laughs) Let's keep going:
Background and Education: Born on November 29, 1957, in New York City and raised in cities like Albuquerque and Pittsburgh, Napolitano earned her B.S. in Political Science (summa cum laude) from Santa Clara University and her J.D. from the University of Virginia School of Law.
Legacy and Current Work: Beyond her public service career, Napolitano has continued to influence public policy as a professor at UC Berkeley’s Goldman School of Public Policy and through her work on various boards and advisory roles. Her leadership and innovative approaches — such as launching DACA-related initiatives — have made her a respected voice in discussions about homeland security, immigration, and education. These contributions have not only broken many glass ceilings as the first woman in several high-profile roles but have also helped shape key policy areas in the United States.
HF: Was anything factually incorrect?
JN: It may be a bit laudatory, but no mistakes.
HF: This seems to me a big improvement over even a year ago, when these large language models (LLMs) made a lot more mistakes.
JN: I agree.
HF: You and I have spoken quite a bit over the past few years about AI and generative AI and their impact on law enforcement and national security. I want to start by asking you what you are seeing today as the biggest impacts in this world.
JN: In the homeland security, intelligence community, and law enforcement worlds, they know AI is a big thing, they know it is evolving, they know it is a tool they want and need, but they are still struggling to figure out — in my view — what its best uses are and how to incorporate it in the most beneficial way. There are a lot of task forces and committees and people writing draft guidelines, the kind of things you typically see when you're at the beginning of something.
HF: Can you describe some use cases that are being discussed?
JN: AI has many possible uses. One is the ability to sift through reams of data. Like, for example, CBP (Customs and Border Protection) and TSA know everybody that's flying everywhere. But sifting through all that in real time to see persons of interest or downstream patterns is incredibly difficult when you have 20 million people flying around the world every day.
The intelligence community has so much data, but as the data haystack has gotten bigger and bigger, the analytic tools to find needles haven't kept up. That's where AI is viewed as a powerful new tool. But how you incorporate it and use it wisely and ethically is challenging.
HF: And so is the stumbling block that the technology is still nascent, or is it that the government moves slowly?
JN: It's a little bit of both. Government wants to use [AI] well and ethically, but there's no stasis. It's like, well, here's what we have today, but we don't know what we're going to have six months from now. And, of course, policymakers don't really understand the technology. And they don't understand where the technology really is, how fast it really can develop, how it can scale. They're still trying to get their heads around the internet.
HF: Let's look forward to three years from now and try to put aside the political upheaval that we're undergoing. In your long experience in the government and with big bureaucracies like the University of California system, do you see the government getting there in three years?
JN: Not with existing systems. No. First of all, we're going to see a great brain drain from the government, and the people that are going to drain are precisely the people we need on this. That is just a consequence of what's happening in today's political climate. The second thing is, there are different interests in the government with different entities in the government wanting AI for different things. President Biden issued a pretty extensive executive order on AI to try to put some initial discipline [around it]. But that's now been withdrawn. So it's really the Wild West out there. Right now, I don't see AI being incorporated well into the government.
HF: Beyond domestic policies, how should we be thinking about this on the international stage?
JN: It's hard to say at this point in time. Ideally, you would have a set of negotiations with the major adversarial powers that would put some agreed-upon limits on the use of AI. This may be one of those situations where, unfortunately, that won't happen until a true crisis manifests itself. In the meantime, it's a great global competition.
HF: Looking more broadly than national security, what are you most excited about?
JN: The applications in medical research are immense, and advances there could just make the world a better, healthier place. I also think we should be doing more exploration of the uses of AI in developing countries that don't have the capacity to build transportation, healthcare, and security systems. AI can be an equalizing force. Maybe with fewer entrenched institutional bureaucracies, developing nations will be able to move faster.
We saw something similar with the mobile phone revolution. We here in the US were slow to get good mobile coverage, but developing parts of the world went right to it because they didn't have legacy systems to contend with. Maybe it'll be a similar thing with AI where they have somewhat of an advantage.
HF: What are you most scared about with AI, given what you have seen?
JN: Well, the doomsday predictions give everybody pause. The capacity of AI to outthink humans, the capacity of AI to conduct war, the capacity of AI to fill the world's information ecosystem with false, misleading, and deleterious information. The asymmetric uses of AI — those are all very worrisome.
HF: When it comes to AI, it seems to me that people fall into one of two categories with a little bit of wiggle room in the middle. On one end are the doomers who see this dystopian future where AI destroys humanity. On the other end are the boomers who see AI as a panacea that will make our lives and world better. Where are you on this spectrum?
JN: I think it is premature to take a hard position. There's a lot to be excited about and there's a lot to be worried about. One of the things I do worry about is economically how concentrated AI power is becoming in a few large corporations – I call them corporate states. Their economies are as large as most countries. They have enormous international reach. They have an enormous impact on the US economy. It's very oligarchical.
HF: When I emailed you asking for this interview, as a way to pique your interest, I sent you an AI-generated image of you walking through the woods carrying an assault rifle. I think I said, "I always knew you were a badass, but oh my God." What did you think of the image?
JN: You were the second person that week to call me a badass.
HF: As you know, most of the AI models that generate content like this image have been trained on billions of pieces of content scraped from all over the internet. The big tech and big AI companies are saying that this is fair use, while content creators are crying foul. What is your take?
JN: I think it does come down to fair use and a lot of copyright issues are fair use issues. I actually wrote briefs for the Supreme Court when I was in private practice on a fair use issue. It will be interesting if the courts hold that it's not fair use. That will really change the economic model underlying these companies.
HF: And they may have to start from scratch. I think that would be extreme. But there is a story here where if they were found to be infringing, there's no way of...
JN: You can't unring the bell.
HF: You can't. And then if they do start over, what's the economic model?
JN: Right. Well, they've got to pay for their New York Times. We do. I think you should have to pay for content.
HF: Thank you for sitting down with me and sharing your thoughts.
JN: This has been lovely.