
Artificial Intelligence: Practical Strategies for Practitioners



Megan LePere-Schloop
Assistant Professor
Angie Westover-Muñoz
PhD Candidate and Program Manager, Program on Data and Governance

Artificial intelligence, particularly generative AI like ChatGPT and DALL-E, has recently received a lot of attention in the press.

Proponents tout the potential of AI to improve efficiency by automating many organizational tasks, including recruiting and screening job applicants, transcribing meeting notes and identifying action items, and even assigning risk scores to prioritize individuals for public programs and services. Others, including some high-level experts, warn of the potential dangers of AI over the short and long term, such as perpetuating bias in historically discriminatory systems, decreasing human autonomy in decision-making, and even threatening human existence. Given the potential promise and peril of AI, it is important for public and nonprofit practitioners to consider the implications of AI for their organizations, employees, and the people they serve.

This article identifies practical strategies for thoughtfully engaging with AI, whether you are looking for ways to simply become more informed or to connect and learn with other practitioners deploying AI for various purposes. 

Part I: Strategies to Become More Informed About AI

Educate yourself on AI fundamentals. 

Before being able to engage more deeply with AI, it is essential to have a foundational understanding of key concepts including artificial intelligence (general and narrow), rules-based or symbolic AI, and machine learning (supervised, unsupervised, reinforcement and deep).  
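To make two of those key concepts concrete, here is a minimal, purely illustrative sketch of the difference between rules-based (symbolic) AI, where a human writes the decision logic explicitly, and supervised machine learning, where the system infers its rule from labeled examples. The scoring scenario, data, and function names are hypothetical, not drawn from any real system.

```python
def rules_based_flag(application_score: int) -> bool:
    """Rules-based (symbolic) AI: a human wrote this threshold explicitly."""
    return application_score >= 70

def learn_threshold(examples: list[tuple[int, bool]]) -> float:
    """Supervised learning (toy version): infer a decision threshold from
    labeled data by splitting the two groups at the midpoint of their means."""
    approved = [score for score, label in examples if label]
    rejected = [score for score, label in examples if not label]
    return (sum(approved) / len(approved) + sum(rejected) / len(rejected)) / 2

# Hypothetical labeled history: (score, was_approved)
history = [(85, True), (90, True), (40, False), (55, False)]
threshold = learn_threshold(history)

print(rules_based_flag(72))  # decision from the hand-coded rule
print(threshold)             # threshold inferred from the labeled examples
```

The point of the contrast: in the first function a person chose the number 70; in the second, the number comes from the data, which is why questions about what is in that data matter so much for machine learning systems.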

Attend trainings to deepen your foundational understanding of AI. 

While learning independently about AI fundamentals can be a powerful first step, it could also be helpful to deepen your understanding and ask clarifying questions in a more interactive environment.  


Gain a critical perspective on AI.  

In addition to building a foundational understanding of AI, it is important to think about this powerful new technology through a critical lens. The following books offer accessible and insightful critical perspectives on AI:  

  • “Weapons of Math Destruction” by Cathy O’Neil  

  • “The Age of AI: And Our Human Future” by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher 

  • “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence” by Kate Crawford 


Identify AI use cases relevant to your organization.  

After building your foundational and critical understanding of AI broadly, it is important to think about how it might be usefully deployed in your organization.  


Stay informed about changes in AI governance. 

Currently, policymakers and regulatory agencies in the United States are beginning to take more concrete steps toward AI regulation, such as the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights.” However, AI governance in the United States is predominantly approached as an organizational issue, meaning that tech companies and chief information and technology officers have great discretion. As AI governance in the United States continues to evolve, it is important to stay informed.  


Part II: Strategies to Become More Deeply Engaged With AI

Understand the risks of using AI. 

After identifying how AI can be used in your organization, it is important to assess associated risks. Keep in mind that risks can emerge at different stages of the adoption cycle and vary depending on your development or procurement strategy and the extent to which AI supports or replaces human decision-making.  


Assess your risk tolerance. 

Just because your organization could use AI does not mean it should. After understanding the risks associated with AI, your organization must decide which are worth taking and under what conditions. Each organization must consider how its values, legal framework, internal processes and capabilities influence AI adoption decisions. Further, risk and regulation management frameworks, such as the recently approved EU AI Act and the UK Information Commissioner's Office (ICO) guidance on AI and data protection, emphasize the need to engage stakeholders when AI systems will be deployed for public service provision, even when systems are developed by private entities.


Develop responsible AI management structures and processes.  

Before deploying AI, your organization needs to develop processes and structures to detect and respond to issues early and effectively. Clear processes and structures allow your organization to reduce the potential harms of AI and promote transparency and accountability in developing and using these systems. Management approaches include, but are not limited to, requiring approval of new AI uses by a designated unit or committee, creating checklists to ensure precautions are taken, training those who develop and manage AI systems, and creating incentives to catch, solve, and escalate issues.
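One of the approaches named above, a precaution checklist gating approval of new AI uses, can be sketched in a few lines. The specific checklist items and approval rule below are hypothetical examples; a real checklist would reflect your organization's own policies.

```python
# Hypothetical precautions an organization might require before approving
# a new AI use; these items are illustrative, not a recommended standard.
REQUIRED_PRECAUTIONS = [
    "use case documented and approved by review committee",
    "training data checked for known sources of bias",
    "human review step defined for consequential decisions",
    "escalation contact assigned for reported issues",
]

def ready_to_deploy(completed: set[str]) -> tuple[bool, list[str]]:
    """Return whether all precautions are met, plus any outstanding items."""
    missing = [item for item in REQUIRED_PRECAUTIONS if item not in completed]
    return (len(missing) == 0, missing)

done = {
    "use case documented and approved by review committee",
    "training data checked for known sources of bias",
}
approved, outstanding = ready_to_deploy(done)
print(approved)         # False: two precautions are still outstanding
print(len(outstanding))
```

Even a simple gate like this makes the approval decision explicit and auditable, which supports the transparency and accountability goals described above.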


Foster a responsible AI culture.  

Generative tools like ChatGPT make it easy to use AI in innovative and flexible ways, but they carry risks including privacy violations, intellectual property infringement, inaccuracy, and error, among others. Traditional top-down control approaches are less effective when the organization cannot gatekeep access to these tools. Not everything can be written into policies and procedures or observed by a manager 100% of the time. In these gray areas, your organization can benefit from a strong responsible AI culture.

Ideally, everyone in your organization pays attention, plays a role in catching potential issues, and thinks critically about the impact of their AI use on the organization and the community.