What Opportunities Does AI Bring to Public Service?
“This is an opportunity for public policy and public management scholars and educators to help practitioners and society navigate the challenge of responsible AI integration,” said Associate Professor Megan LePere-Schloop, who serves on the Ohio State University AI Fluency Faculty Advisory Council that helps guide the ongoing implementation of the university’s AI Fluency initiative. “We are used to grappling with these kinds of wicked problems that require stakeholder engagement and learning. At the Glenn College, we’re involved in initiatives to develop new approaches to educating undergraduate students at Ohio State, train public and nonprofit practitioners in Ohio, and collaborate with Ohio State’s new National Science Foundation-funded Center on Responsible AI and Governance (CRAIG) to identify evidence-based AI governance best practices.”
The college supports students’ and professionals’ AI fluency development through research, curricula, internships, college Town Halls, Washington Program events, the Leadership Forum for public service professionals and Battelle Center for Science, Engineering and Public Policy initiatives and courses.
Experts at the Glenn College offer their thoughts on the opportunities and challenges AI creates in the public sector.
Associate Professor Amanda Girth, director of the Glenn College’s Washington Programs, is a faculty affiliate on the leadership team for CRAIG, Ohio State’s Center on Responsible AI and Governance. She is a national expert in acquisition policy and practice.
What are some opportunities regarding the implementation of AI in public administration and public service now?
Artificial intelligence is no longer a future consideration; it is already shaping how federal agencies operate, secure systems and deliver public value. One of the clearest takeaways from the recent Glenn College AI Salon I moderated in Washington, D.C., featuring leaders from Google and CrowdStrike, was that we are beginning to understand the opportunities and the threats of AI, and we need to make sure policy keeps pace with technology. We must not only ensure responsible AI development and use but also remove barriers to innovation.
Agencies are using AI to analyze large volumes of data more quickly, detect cybersecurity threats in real time, modernize contracting and acquisition processes, and support frontline employees by reducing administrative burden and enabling higher-value work. This shift creates space for public servants to focus on judgment, accountability and ethical reasoning — areas where human expertise remains essential.
Assistant Professor Esra Gules-Guctas is a faculty member with a joint appointment at the John Glenn College of Public Affairs and the Translational Data Analytics Institute. Her research centers on the intersections of law, technology and public policy, particularly algorithmic systems in public sector decision making.
What are a few policies that should be considered as AI is used in public administration or public service fields?
As agencies adopt more complex machine learning and generative AI tools, they need policies that keep decisions understandable, contestable and accountable. That includes clear rules for when AI may inform case decisions, what documentation must accompany an output, and how both public administrators and citizens can obtain a plain-language explanation of the basis for a system recommendation. For high-stakes determinations, agencies should require meaningful review pathways and ensure there is a clear route for verification and rebuttal when an output is disputed. Procurement policies matter here too. Agencies should build transparency requirements into contracts so system developers cannot shield critical logic, limitations, training conditions or performance constraints behind a “black box.”
What pitfalls should public service professionals keep watch for when they integrate AI into their use of data in their daily work?
AI does not simply mechanize existing processes; it also embeds legal and policy choices into technical systems. Public service professionals should look beyond performance metrics and confirm statutory and regulatory compliance before deployment.
Associate Professor David Landsbergen, who holds a JD, studies the intersection of law, technology, management and information policy and helped create the college’s new Cybersecurity Law, Policy and Management Graduate Certificate. Some of his earliest work focused on building AI expert systems and testing how people use them. His recent research focuses on information law and policy, examining the governance of “smart city” information, the development and use of technical standards including metadata, barriers to interoperability, and information management to support decision making.
What are some opportunities regarding the implementation of AI in public administration and public service now?
We have all heard about the promised benefits of AI for society, including the public sector. However, at some point we have to understand what actually works. The public sector’s ability to realize these benefits depends, in part, on our often unexamined theories of how innovation happens. A “technology-driven” view, in which the significant opportunities of a new technology necessarily drive change, is rarely explanatory. Instead, “sociotechnical” theories of innovation argue that social and technological variables interact to define the kind of change that actually takes place.
The public sector IT literature reveals that information technology, which includes AI, is promoted for its potential to improve service or decision making. But if new IT projects are funded, it is usually because they can demonstrate efficiencies through cost savings. This tells us where the opportunities will appear and how the innovation will take place. Information technology projects that can be completed in a shorter time horizon avoid the destabilizing nature of political, economic and technological cycles. This argues for a more modular and staged approach to building capacity. Interoperability, where data is shared across organizational divisions, is always extremely difficult to enact because it has a whole host of its own governance issues. AI opportunities with high demands for data may encounter more difficulties where interoperability issues must be addressed first.
Associate Professor Megan LePere-Schloop serves on the Ohio State University AI Fluency Faculty Advisory Council, which helps guide the ongoing implementation of the university’s AI Fluency initiative. She teaches students and professionals about AI in public management and conducts research on AI governance and integration in public and nonprofit organizations.
What are some opportunities regarding the implementation of AI in public administration and public service now?
A lot of the current discourse frames AI as a tool or technology, which focuses our attention on questions about whether AI should be used for certain tasks or what barriers prevent individuals or organizations from adopting AI. While these questions are important, I don’t think they are the only ones we should be asking. We need to learn from our past experiences with other disruptive technologies. In retrospect, it seems too simplistic to think of the internet and social media as just tools. They reshaped the way we interact with each other and process information, contributing to issues like social isolation and political polarization.
An alternative framing is to think about AI as a sociotechnical system. This sounds complicated, but the idea is simply that while people are making choices that shape when and how AI is used, AI is also reshaping the way people think and behave. This focuses our attention on the impact of human-AI interactions over time. For example, new research suggests that we may lose cognitive capacity and expertise when we offload tasks to AI, which suggests that we need to think about how to use AI responsibly so that we can safeguard our critical thinking abilities. This echoes the concerns that I’m hearing from practitioners across the public, nonprofit and for-profit sectors about the need for guidance on how to integrate AI in a responsible way so that we can avoid or mitigate negative consequences.
In addition, our professional development arm has been responsive to public sector practitioners and elected officials in Ohio looking for guidance on AI integration. For two years, we’ve offered an introduction to AI training, and last year we began offering a responsible AI management course. These professional development trainings help practitioners identify practical steps to integrate AI responsibly in Ohio public organizations.