
Biased AI’s Challenges for Government Leaders

Artificial intelligence platforms have flaws with serious class, gender and race implications. Public officials need to pay more attention to those biases and do what they can to prevent harm.

Facial recognition screen (Shutterstock)
Governments are racing to roll out artificial intelligence technology, aiming to expand and improve public services while making them more efficient and less costly. But although much has been written about AI’s potential threats to privacy and civil liberties, public officials have paid too little attention to its negative impacts on minority communities, or to what can be done to ensure that the technology does not exacerbate our racial divides.

I have spent much of my life working as a public official and teaching and writing about the intersection of race, technology and politics. Here’s what I believe government leaders should think about, and perhaps act on, as they roll out AI platforms.

First, they must be willing to face the reality that AI — increasingly used by governments in public safety, hiring and recruiting, and data analysis, among other things — contains biases that have serious class, gender and race implications. Those prejudices are in the DNA of AI, and addressing them entails more than conducting periodic audits and posting vague statements about commitments to technological equity and justice. It requires proactive approaches by government leaders to ensure that their administrations do not rely too heavily on AI applications that we know contain these biases.

We know that many of the systems underlying AI applications are flawed. As Stephanie Dinkins, an artist who was awarded a $100,000 grant by the Guggenheim Museum for her research on robots powered by AI, told a New York Times reporter, “The biases are embedded deep in these systems, so it becomes ingrained and automatic. If I’m working within a system that uses algorithmic ecosystems, then I want that system to know who Black people are in nuanced ways.” So should the mayor and chief of police in a city using facial recognition AI.

Other professionals working in the field of AI, particularly women and those from minority communities, have pointed out similar problems, and some have encountered a hostile response. Timnit Gebru, a Black Stanford University graduate who worked for Google, claims she was forced out after she co-authored a research paper on bias in the AI system that underpins Google’s search engine. Margaret Mitchell, a Google colleague of Gebru’s and a co-author of the paper, defended Gebru and also left the company after heading up its AI ethics division. Earlier, when she worked at Microsoft, Mitchell had made waves and drawn a heap of publicity when she referred to biases in AI as a “sea of dudes” problem for her male-dominated profession.

No problem related to AI and racism has garnered more negative attention than the 2015 incident in which a Google photo app mislabeled African Americans as gorillas. Two former Google employees who had worked on the app said the image collection used to train the system had included too few photos of African Americans. As a result, they surmised, the technology was not familiar enough with darker-skinned people. Yet you still read about AI-powered surveillance equipment misidentifying people of color as animals. And wrongful arrests stemming from faulty facial recognition technology happen too often in policing.

But the problems with AI go beyond falsely identifying Blacks and other people of color for crimes they didn’t commit, with all of the devastating impact that has on families who often must hire lawyers to defend them. There are small-business loans that don’t get underwritten because AI algorithms incorrectly toss out some creditworthy applicants. There are low-income neighborhoods that economic development departments overlook because the data used to determine which projects get incentives are generated by AI algorithms normed on wealthier neighborhoods. And too often, qualified job applicants are screened out early in the process by keywords programmed to flag ethnic-sounding names, minority-serving alma maters or high-crime ZIP codes. Public officials must ensure that these types of discrimination do not continue.

Perhaps beyond the power of public officials to fix on their own, AI is also reshaping the workforce as more and more jobs are lost to new AI-related technologies. According to a report from the McKinsey Institute for Black Economic Mobility, AI is likely to hit hardest in work sectors where Blacks and other minorities are overrepresented, such as truck driving, food services and office support. Employment areas expected to be least affected — ones where African Americans and other minorities are underrepresented — include education and workforce training, creative arts and management, and the medical and legal professions. That shift will place more stress on local governments to address hunger, homelessness, inadequate health care, insufficient workforce development and other problems associated with unemployment.

On the bright side, some states and the federal government have taken positive steps lately. Legislation has been proposed or enacted in California, Connecticut, Massachusetts, New Jersey, Rhode Island and the District of Columbia to ensure that the adoption of AI does not perpetuate bias against protected classes. The Biden-Harris administration issued a Blueprint for an AI Bill of Rights that, among other things, contains strong language against algorithmic bias. And last October, the White House issued an executive order mandating that AI be “safe, secure and trustworthy.”

In the 30 or so years I’ve been involved with local government, I have had to cope with challenges brought on by a variety of technologies, from implementing e-government to bridging the digital divide. AI presents the strongest challenge yet for governing, democracy and digital rights because its biases, much like human beings’, are embedded in systems in ways that cannot be detected by the naked eye. I can’t prove AI’s intentions, but I can certainly feel its effects.



Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.