
What Opportunities Does AI Bring to Public Service?

News Type: Public Address

While AI offers solutions to challenges in the public sector, its development and use raise as many questions as they answer.

The Glenn College is helping students and public service professionals understand how to use AI in the field and to view it through the lens of sociotechnical systems, in which AI is not just a technical tool but a force that shapes the people who use it and are affected by it.

Doctoral Candidate Examines AI from Student, Instructor Perspective

Brandon Frye’s research and classroom instruction offer insight for peers and public service professionals. 

“This is an opportunity for public policy and public management scholars and educators to help practitioners and society navigate the challenge of responsible AI integration,” said Associate Professor Megan LePere-Schloop, who serves on the Ohio State University AI Fluency Faculty Advisory Council that helps guide the ongoing implementation of the university’s AI Fluency initiative. “We are used to grappling with these kinds of wicked problems that require stakeholder engagement and learning. At the Glenn College, we’re involved in initiatives to develop new approaches to educating undergraduate students at Ohio State, train public and nonprofit practitioners in Ohio, and collaborate with Ohio State’s new National Science Foundation-funded Center for Responsible AI and Governance (CRAIG) to identify evidence-based AI governance best practices.” 

John Glenn College of Public Affairs Assistant Professor Esra Gules-Guctas talks to her students about AI in government decision-making during her Law and Public Affairs course. (Credit: Lily Li)

AI on the Front Lines of Public Service 

Google and CrowdStrike experts join the Glenn College via a panel in Washington, D.C., to map out the future of applied AI in government. 
 

The college supports students’ and professionals’ AI fluency development through research, curricula, internships, college Town Halls, Washington Program events, the Leadership Forum for public service professionals and Battelle Center for Science, Engineering and Public Policy initiatives and courses. 

Experts at the Glenn College offer their thoughts on the opportunities and challenges AI creates in the public sector. 

Developing Responsible, Ethical AI and Governance

Associate Professor Amanda Girth

Associate Professor Amanda Girth, director of the Glenn College’s Washington Programs, is a faculty affiliate on the leadership team for CRAIG, Ohio State’s Center for Responsible AI and Governance. She is a national expert in acquisition policy and practice.

What are some opportunities regarding the implementation of AI in public administration and public service now?

Artificial intelligence is no longer a future consideration; it is already shaping how federal agencies operate, secure systems and deliver public value. One of the clearest takeaways from the recent Glenn College AI Salon I moderated in Washington, D.C., featuring leaders from Google and CrowdStrike, was that we are beginning to understand the opportunities and the threats of AI, and we need to make sure policy keeps pace with technology. We must not only ensure responsible AI development and use but also remove barriers to innovation.  

Agencies are using AI to analyze large volumes of data more quickly, detect cybersecurity threats in real time, modernize contracting and acquisition processes, and support frontline employees by reducing administrative burden and enabling higher-value work. This shift creates space for public servants to focus on judgment, accountability and ethical reasoning — areas where human expertise remains essential. 

What specific focus areas will CRAIG examine regarding responsible artificial intelligence and governance?

The Center for Responsible AI and Governance (CRAIG) is focused on a core challenge facing government today: how to deploy AI in ways that are effective, ethical and accountable, which is especially important in high-stakes public sector environments. As a faculty affiliate on the leadership team for CRAIG, my role centers on building relationships with federal partners, particularly in the national security arena, where AI is already deeply embedded in mission-critical systems. These contexts underscore why CRAIG’s work matters. National security organizations face intense pressure to adopt AI quickly, but they must do so while maintaining trust, accountability and control over decision making. Together, this work positions CRAIG and Ohio State as a bridge between research and practice, ensuring that advances in AI strengthen public institutions rather than outpace them.

Embedding Legal and Policy Decisions in AI Implementation

Assistant Professor Esra Gules-Guctas

Assistant Professor Esra Gules-Guctas is a faculty member with a joint appointment at the John Glenn College of Public Affairs and the Translational Data Analytics Institute. Her research centers on the intersections of law, technology and public policy, particularly algorithmic systems in public sector decision making.

What are a few policies that should be considered as AI is used in public administration or public service fields?

As agencies adopt more complex machine learning and generative AI tools, they need policies that keep decisions understandable, contestable and accountable. That includes clear rules for when AI may inform case decisions, what documentation must accompany an output, and how both public administrators and citizens can obtain a plain-language explanation of the basis for a system recommendation. For high-stakes determinations, agencies should require meaningful review pathways and ensure there is a clear route for verification and rebuttal when an output is disputed. Procurement policies matter here too. Agencies should build transparency requirements into contracts so system developers cannot shield critical logic, limitations, training conditions or performance constraints behind a “black box.”

 

What pitfalls should public service professionals keep watch for when they integrate AI into their use of data in their daily work?

AI does not simply mechanize existing processes; it also embeds legal and policy choices into technical systems. Public service professionals should look beyond performance metrics and confirm statutory and regulatory compliance before deployment. 

Encoding eligibility rules or program requirements is as much a matter of legal interpretation and policy intent as it is a coding task. Technical performance metrics can show how well a system matches target outputs on available evaluation data, but they only capture what is represented in the dataset. A model can score highly on technical accuracy while still producing outputs that conflict with statutory criteria and create legal risk and potential liability for government agencies. This can happen when data schemas do not align with statutory definitions, when legally relevant factors are omitted or poorly represented, or when a model’s features and thresholds distort legally significant concepts. 

The same features that make AI attractive, such as handling large, messy datasets, can also amplify data quality problems, flawed design assumptions and drift as data and real-world conditions change over time. That is why ongoing monitoring and clear accountability for errors are essential in daily practice.

Protecting AI Innovation and Cybersecurity

Associate Professor David Landsbergen

Associate Professor David Landsbergen, who holds a JD, studies the intersection of law, technology, management and information policy and helped create the college’s new Cybersecurity Law, Policy and Management Graduate Certificate. Some of his earliest work focused on building AI expert systems and testing how people use them. His recent research focuses on information law and policy, examining the governance of “smart city” information, the development and use of technical standards including metadata, barriers to interoperability, and information management to support decision making.

What are some opportunities regarding the implementation of AI in public administration and public service now?

We have all heard about the promised benefits of AI for society, including the public sector. At some point, however, we have to understand what actually works. The public sector’s ability to realize these benefits depends, in part, on our implicit theories of how innovation happens. “Technology-driven” explanations (the idea that a new technology’s significant opportunities will necessarily drive change) rarely hold up. Instead, “sociotechnical” theories of innovation argue that social and technological variables interact to define the kind of change that actually takes place.

The public sector IT literature reveals that information technology, which includes AI, is promoted for its potential to improve service or decision making. But if new IT projects are funded, it is usually because they can demonstrate efficiencies through cost savings. This tells us where the opportunities will appear and how the innovation will take place. Information technology projects that can be completed in a shorter time horizon avoid the destabilizing nature of political, economic and technological cycles. This argues for a more modular and staged approach to building capacity. Interoperability, where data is shared across organizational divisions, is always extremely difficult to enact because it has a whole host of its own governance issues. AI opportunities with high demands for data may encounter more difficulties where interoperability issues must be addressed first.

How is AI changing the cybersecurity landscape? 

AI poses new challenges for cybersecurity. Most cybersecurity vulnerabilities are human-centered: someone forgets to update software or lets a colleague “borrow” a password so the work can continue. Especially problematic are phishing attempts that lure people into clicking a nefarious link or divulging a password they should not share. AI is becoming more sophisticated at crafting these phishing attempts. Combined with access to all kinds of personal information, AI can now impersonate someone so convincingly that it persuades the target to hand over sensitive information.

Considering the Human Factor in AI

Associate Professor Megan LePere-Schloop

Associate Professor Megan LePere-Schloop serves on the Ohio State University AI Fluency Faculty Advisory Council, which helps guide the ongoing implementation of the university’s AI Fluency initiative. She teaches students and professionals about AI in public management and conducts research on AI governance and integration in public and nonprofit organizations.

What are some opportunities regarding the implementation of AI in public administration and public service now? 

A lot of the current discourse frames AI as a tool or technology, which focuses our attention on questions about whether AI should be used for certain tasks or what barriers prevent individuals or organizations from adopting AI. While these questions are important, I don’t think they are the only ones we should be asking. We need to learn from our past experiences with other disruptive technology. In retrospect, it seems too simplistic to think of the internet and social media as just tools. They reshaped the way we interact with each other and process information, contributing to issues like social isolation and political polarization.

An alternative framing is to think about AI as a sociotechnical system. This sounds complicated, but the idea is simply that while people are making choices that shape when and how AI is used, AI is also reshaping the way people think and behave. This focuses our attention on the impact of human-AI interactions over time. For example, new research suggests that we may lose cognitive capacity and expertise when we offload tasks to AI, which suggests that we need to think about how to use AI responsibly so that we can safeguard our critical thinking abilities. This echoes the concerns that I’m hearing from practitioners across the public, nonprofit and for-profit sectors about the need for guidance on how to integrate AI in a responsible way so that we can avoid or mitigate negative consequences.

What is the Glenn College doing to prepare future public servants in the age of AI?

For over two years, AI has been a central focus of the course I teach on public management, which is required for our undergraduate major and minor. My students choose a specific public sector AI use case to research and assess through a series of deliverables. For example, last semester, my students researched New York City’s MyCity Chatbot; the FDA’s generative AI tool, Elsa; and the Allegheny Family Screening Tool, among other use cases. They examined how the AI was developed and implemented and how different stakeholder groups were affected, and they conducted an assessment based on the AI Risk Management Framework developed by the National Institute of Standards and Technology. These undergraduate students are entering entry-level positions with this foundational knowledge of AI as a sociotechnical system; they are prepared to think critically and to use established frameworks to assess AI risks.

New Certificate in Cybersecurity Law, Policy and Management 

The Glenn College and the Moritz College of Law launched the certificate to help professionals build cybersecurity systems that meet technical, legal and organizational demands. 

In addition, our professional development arm has been responsive to public sector practitioners and elected officials in Ohio looking for guidance on AI integration. For two years, we’ve offered an introductory AI training, and last year we began offering a responsible AI management course. These professional development trainings help practitioners identify practical steps to integrate AI responsibly in Ohio public organizations.

Read the latest edition of Public Address, the Glenn College magazine.