
AI on the Front Lines of Public Service

News Type: Public Address

Associate Professor Amanda Girth, left, leads the AI Salon in Washington, D.C., with panelists Adam Meyers of CrowdStrike (center) and Scott Deutchman of Google.

Artificial intelligence is one of the most transformative forces shaping government and public service today.  

AI Salon Panelists

Scott Deutchman, Google
Adam Meyers, CrowdStrike
Amanda Girth, Glenn College

“This is not a distant horizon; it is a daily reality in how agencies operate, how policies are crafted, and how the public is served. Across the federal landscape, AI is already helping to analyze vast datasets, improve logistics, modernize contracting and strengthen cybersecurity,” said John Glenn College of Public Affairs Dean Trevor Brown. “But it is also challenging us to think carefully about ethics, transparency and the human elements of decision making. As many fear the loss of jobs from AI, we have to learn how to harness its power for the common good.” 

To explore the mission-critical work of implementing AI, the Glenn College hosted the AI Salon, a conversation with AI industry leaders in Washington, D.C. 

Scott Deutchman, head of U.S. AI strategy in government affairs and public policy at Google, and Adam Meyers, senior vice president of counter adversary operations at CrowdStrike, a global cybersecurity provider, shared their expertise on the tools and strategies being used now to solve concrete challenges and improve operations in the federal government and supporting industries. 

What Opportunities Does AI Bring to Public Service?


“These leaders are defining what applied AI looks like in practice,” said Associate Professor Amanda Girth, director of Washington Programs at the Glenn College, who moderated the salon. “The conversation moved past hype to focus on what it actually takes to implement AI responsibly in public service, from building trust to aligning skills and tools with real mission needs. For our students and public servants alike, the takeaway was clear: Applied AI is about improving how government serves the public interest.” 

Their insights offer AI guidance not just for federal agencies but for public service professionals writ large. Here’s a sampling of what they shared. 

Where We Are Now: Federal Implementation of AI

Both speakers noted that while AI use in the federal government is nascent, developments are moving quickly. 

“We can talk about all of the things that the federal government and our society can benefit from when it comes to AI. But what I would say is, what the government is doing well is acknowledging that opportunity,” Deutchman said. “What the government can be doing is building trust in these tools, adopting these tools, experimenting with them and creating that flywheel of use cases that have them coming back and accelerating the use of AI for all of these reasons. It’s exciting to be on the front end.” 

Sharing AI Success Cases

Adam Meyers of CrowdStrike speaks with students in the Glenn College Washington Academic Internship Program (WAIP).


Meyers said from his perspective at CrowdStrike, AI helps cybersecurity experts keep up with ever-increasing data associated with threats. 

“If we had this discussion eight months ago, my team was looking at 4.7 trillion events per day — 55,000,000 events per second. Today it’s at 5.7 trillion events per day and 65,000,000 events per second, so just in the course of about eight months that’s how much more data we’re looking at,” Meyers said. “And defenders are really struggling to keep pace with all the tools. We have something that we call Threat AI, which is our vision for how we do threat intelligence and security using AI.” 

Threat AI helps cybersecurity analysts quickly switch among their many hats, including malware analysis, threat modeling and incident response. 

Scott Deutchman of Google says employees carve out time to play with AI to get comfortable with it and keep up with the latest developments.


“It gets really expensive, and when you have to do it faster and more frequently every day, it starts to break a lot of the models that existed for the human,” Meyers said. “By bringing AI in, now we can offload that context switching, and we could offload a lot of that work.” 

Deutchman gave an example of a solution Google found for health care. After the Department of Veterans Affairs released a report that 1.2 million veterans are misdiagnosed with cancer every year, Google worked with the VA to create an Augmented Reality Microscope (ARM). The microscope improved screening results so markedly that the VA now uses the ARM at its facilities worldwide. 

Putting AI to Work in Your Organization

Deutchman suggested that public service leaders start with a project that is small, achievable, measurable and likely to have an impact; early wins create a flywheel that accelerates adoption. 

Meyers begins by asking his team to find the part of their job they hate — the thing that they don’t want to do — and figure out how to use AI to automate that.  

That’s ultimately the way to get people energized and thinking about this.  

Adam Meyers
CrowdStrike

For example, the lawyers on his team told him that reviewing contracts for new services is tedious because it involves seemingly endless rounds of changes.

By building an AI chatbot trained on previous contracts as well as the company’s policy and business information, his team automated those changes, guided by what is best for CrowdStrike. 

Getting Employees on Board

The panelists agreed that carving out time for experimentation is the best way to help employees feel comfortable using AI.  

AI is a tool that you will use in your career, which I think for a lot of people — most people — will be inevitable. 

Scott Deutchman
Google

“I just sat down with the leadership team the other day, and I said, ‘Don’t think about how you have to use AI differently. Think about how you break that content differently so that that content is more consumable by AI,’” Meyers said, adding that then those changes can be scaled up. “Then when something fast-breaking happens, it has the ability to write that reporting for you, and then you can just sign off on it. You can focus on your research.” 

Deutchman and Meyers also noted that leaders and employees need continuing education to both keep up with the latest developments and watch for nascent technology that might become the next big advancement. 

Leadership in the AI Era

Leadership associated with AI, the panelists agreed, isn’t much different from leadership in general. 


“It’s thinking about AI creatively as a leader and setting an example,” Deutchman said. 

“Effective leaders need to drink their own champagne, and they have to test,” Meyers said. “They have to — as you are doing with the Glenn College students — get employees to be AI fluent or knowledgeable. You have to get hands on the keyboard. You have to understand how the technology works, what it’s capable of and what its limitations are, and then apply that as you’re leading a team and managing an organization so that you have realistic expectations and realistic capabilities.” 

Looking Ahead

A successful future with AI will require organizations to be fast and agile, and government needs to give thoughtful consideration to regulations, the panelists noted. 

“It’s very clear to us that AI is too important not to regulate, and it’s too important not to regulate well. So what does that mean? It means you should be looking at the outputs — what AI is actually doing — to determine whether or not there’s a regulation that’s reasonable to apply. Where are the gaps in regulation and in policy?” Deutchman said. “And it needs to be risk-based. For some app that lets me pick out my next suit? You probably don’t need a lot of regulation.” 


Meyers called AI an arms race as well as a national security, public safety and public health issue, and he fears lengthy discussions about regulation are taking time away from keeping up with the technology.   

“Where I would like to see us go in the next 12 months is to go faster and to be able to be more agile in looking at AI: How can we adopt AI? Where can it be used across the federal government? How can we enable the use of AI rather than constraining it?” Meyers said. “If we start constraining it, I think we’re going to run into situations where we’re behind China and other countries.” 

Read the latest edition of Public Address, the Glenn College magazine.