State and local governments are deploying AI to forecast crime, streamline benefits, optimize emergency response, modernize hiring and improve education. Oklahoma’s Department of Education, for example, recently introduced model AI use policies for schools — an attempt to embrace innovation while protecting student privacy and equity. In Texas, where nearly 12 percent of children remain uninsured and health systems are increasingly reliant on AI-assisted record-keeping, the stakes of algorithmic fairness are particularly high. And in Mississippi, where the state recently declared a public health emergency over infant mortality, gaps in health-care data illustrate the risk that under-resourced systems can feed biased or incomplete AI models.
In each of these cases, the question is not whether AI can make government more efficient; it’s whether those systems will reinforce the inequities that already exist. For public administrators, this is not just a technical issue but a matter of data governance and accountability. The same diligence applied to budgeting, procurement and ethics must now extend to algorithms.
When an AI system denies a family housing benefits or flags a neighborhood for heightened policing it doesn’t warrant, the issue isn’t simply faulty code. Bias often arises because agencies lack representative data, bias-testing protocols or oversight frameworks. For state and local governments, this is where leadership matters most. Cities don’t need to become technology companies — they need clear AI governance structures, transparent procurement rules and partnerships with universities and civic tech labs to ensure that systems are tested for fairness before deployment. States can go further by establishing centralized AI policy offices that coordinate standards, provide smaller municipalities with shared expertise and align data practices across agencies.
The consequences of weak oversight are well documented. The COMPAS risk assessment tool used in several states, for example, labeled Black defendants “high risk” far more often than white defendants charged with similar crimes. A health-care algorithm underestimated the needs of Black patients because it was trained on costs instead of health outcomes. For governments, the takeaway is clear. Agencies should act now by embedding fairness, transparency and accountability into every stage of AI use:
• Adopt structured frameworks. The National Institute of Standards and Technology’s AI Risk Management Framework and the ISO/IEC 42001 international standard provide tested templates for identifying and mitigating risk.
• Establish review councils. States can convene AI advisory boards to evaluate high-impact systems before deployment, ensuring that local context is reflected and equity standards are met.
• Modernize data infrastructure. Mississippi’s health-care challenges highlight how incomplete or outdated data sets undermine fairness. Investment in inclusive, high-quality public data is the foundation of trustworthy AI.
• Govern through procurement. Require vendors to provide algorithmic impact assessments, explainability documentation and fairness-testing results before contracts are signed. (A minimal sketch of one such fairness check appears after this list.)
• Strengthen intergovernmental collaboration. By sharing audits, governance templates and oversight tools, smaller local governments can adopt safeguards without duplicating costs.
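To make the procurement requirement concrete, the sketch below shows one common fairness screen: a disparate-impact ratio comparing approval rates across demographic groups. This is an illustration, not a prescribed method; the four-fifths threshold is a convention borrowed from federal employment guidelines, and the group labels and data are hypothetical. A real audit would run on the agency’s own outcome data and pair this ratio with a fuller battery of tests.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs; returns approval rate per group."""
    totals, approvals = Counter(), Counter()
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Protected group's approval rate divided by the reference group's.
    Under the four-fifths convention, values below 0.8 flag possible
    adverse impact (the threshold here is illustrative, not statutory)."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical benefits-screening outcomes: (group, application_approved).
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

print(f"Disparate impact ratio: {disparate_impact_ratio(sample, 'B', 'A'):.2f}")
# -> 0.33 (group B approved 25 percent of the time vs. 75 percent for group A)
```

In this toy example the ratio falls well below 0.8, which in a procurement setting would trigger deeper review of the vendor’s system before any contract is signed.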
Strong AI governance is not a brake on innovation — it’s what makes innovation credible. Data integrity ensures that systems reflect the communities they serve. Transparency builds public trust when citizens can see how decisions are made. Human oversight, written into policy, keeps algorithms aligned with democratic values even as they evolve.
Yes, defining fairness across populations and balancing innovation with oversight remain challenges. But state and local governments already lead in adapting governance for new technologies, from cybersecurity to privacy. The same blueprint can apply to AI: Start with standards, codify accountability and scale through collaboration.
Artificial intelligence will not decide our collective future; our design choices will. For governments, that means building the governance infrastructure to prevent bias before it’s built in. By adopting transparent standards, coordinating across agencies and demanding accountability from vendors, government leaders can ensure that AI serves as a force for fairness — not as a mirror of historical inequality. The opportunity is clear: Use technology not just to modernize government, but to make it more just.
Selena Singh is an artificial intelligence governance researcher and master’s degree candidate focused on ethical and responsible AI innovation.
Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.