
Equitable Government and the Limits of AI

Artificial intelligence has potential, but it can’t replace simple, reliable technology solutions and the human touch. And there’s a risk that it will automate existing inequities instead of alleviating them.

The District Direct mobile app allows Washington, D.C., smartphone users to apply for public benefits and manage their accounts. It's an example of the simple, reliable technology that has helped governments reduce inequality in service delivery.
Over the past decade, there has been transformative progress in government when it comes to using technology to reduce inequities in service delivery. In April, our organization, Code for America, released original research in its Benefits Enrollment Field Guide, finding that 77 percent of state safety net programs now have online applications and that 52 percent of state benefits websites are mobile-responsive.

This is a vast improvement from just four years ago. But as more local governments, states and federal agencies embrace the power of technology, some are looking to the capabilities of artificial intelligence to increase efficiency and reach their goals. Amid warnings about the emerging technology, this raises a serious question: What role, if any, should AI play in delivering critical services like food assistance, health care and tax benefits?

The reality is that AI can’t replace the human touch when it comes to making government more accessible and equitable for all. The ability to understand complex barriers and implement meaningful, ethical solutions remains unique to people. In fact, there’s a very real risk that AI will automate existing inequities instead of alleviating them.

Look no further than Pennsylvania, where an AI-enabled tool was deployed to help overburdened social workers decide which families to investigate for child welfare concerns. A machine-learning algorithm used data, including disability data, to calculate the risk of the state agency eventually placing a child in foster care. Critics argued that families dealing with disabilities and mental-health issues were unfairly targeted, and the tool ended up drawing intense scrutiny from the U.S. Justice Department.

However, it would be unreasonable to suggest that AI has no place in government at all.

For example, Code for America has used a machine-learning technique called “entity resolution” to match and link data records across large and messy data sets in both our safety net and criminal record clearance work. In this case, AI offers a more sophisticated approach that produces more accurate results than other methods.
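To make the idea concrete, here is a toy sketch of what record linkage looks like in miniature. This is not Code for America's implementation: production entity resolution uses trained models rather than simple string similarity, and the field names, record structure and threshold below are all assumptions for illustration.

```python
# Toy illustration of entity resolution: linking records that refer to the
# same person across two messy data sets. Real systems use trained models;
# this sketch uses stdlib fuzzy string matching. All field names (id, name,
# address) and the 0.85 threshold are hypothetical.
from difflib import SequenceMatcher

def normalize(record):
    """Collapse case and whitespace so superficial differences don't matter."""
    return " ".join(f"{record['name']} {record['address']}".lower().split())

def similarity(a, b):
    """Similarity score in [0, 1] between two normalized records."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def link_records(left, right, threshold=0.85):
    """Pair each record in `left` with its best match in `right`, if good enough."""
    links = []
    for rec in left:
        best = max(right, key=lambda r: similarity(rec, r))
        score = similarity(rec, best)
        if score >= threshold:
            links.append((rec["id"], best["id"], round(score, 2)))
    return links

# Example: a benefits roll and a court record set with messy formatting.
benefits = [{"id": "A1", "name": "Maria Lopez", "address": "12 Oak St"}]
court = [
    {"id": "B7", "name": "MARIA  LOPEZ", "address": "12 Oak Street"},
    {"id": "B9", "name": "John Chen", "address": "88 Pine Ave"},
]

print(link_records(benefits, court))  # → [('A1', 'B7', 0.91)]
```

The payoff is that "MARIA  LOPEZ" at "12 Oak Street" and "Maria Lopez" at "12 Oak St" are recognized as the same person despite the formatting differences, which is exactly the kind of match an exact-join query would miss.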

Chatbots offer another example. This form of AI can help handle customer support questions when there is high confidence that an answer can be provided without human intervention. And amid the explosion of generative AI, it is worth exploring how large language models might be used in an intentional and ethical way.

Before turning to AI, however, there are significant steps that state, local and federal policymakers and technology leaders can and should take to make government services more equitable.

First, some of the most pressing challenges don’t need AI to be solved. Simple, reliable technology solutions, like mobile-first online benefits applications that are accessible, easy to use and available 24/7, can ensure that access to government services is more equitable. Government leaders should not lose sight of prioritizing these relatively simple yet impactful changes as they seek to understand the potential applications of AI.

Second, decision-makers should hire people with the lived experience of interacting with government programs. These people bring indispensable knowledge of the barriers, seen and unseen, that people face. AI has the potential to improve efficiency in delivering solutions, but it will also carry the values and goals of its creators and must be oriented around equitable outcomes.

Finally, any effort to make government services more equitable must include design principles that put people first. Often called human-centered design, this means putting the users, their needs and experience first when building any government system. It also means continuously improving systems to ensure positive long-term outcomes.

Good design takes hard work, but every day public servants across the country and the broader civic technology community are showing that it’s possible to make government services more user-friendly and more equipped to reach everyone — especially those who’ve been left behind.

Still not convinced? Don’t take my word for it. Here’s ChatGPT’s take on using AI in government:

“AI cannot replace humans entirely. The most effective approach is a combination of AI and human services, with AI handling routine tasks, and humans providing the personal touch, expertise and accountability needed for more complex tasks.”

On this, we certainly agree.

Lou Moore is the chief technology officer at Code for America.



Governing's opinion columns reflect the views of their authors and not necessarily those of Governing's editors or management.