Additionally, AI can be used to automate routine tasks, such as processing paperwork and data entry, freeing up government employees to focus on more complex and value-added tasks. However, there are also concerns about the impact of AI on jobs and privacy, and governments will need to consider these issues as they implement AI-based solutions.
Analysts hail AI as the next wave of technology. So I tried it out with the new chatbot, ChatGPT. I asked it to tell us about the implications of AI for government and federalism, and the bot wrote the first two paragraphs above.
The result: Not too bad. In fact, it might be difficult to distinguish the chatbot’s contribution from lead paragraphs written by a human. The paragraphs could use some good editing. There are missing pieces, like just what those “more complex and value-added tasks” might be. But, on the whole, they’re not bad, and they pretty neatly encapsulate the promise and the possible pitfalls of this rapidly developing technology, not only in our daily lives but also for government at all levels. And they raise, at least obliquely, the question of how this technology should be regulated and by which level of government.
The AI opportunities are growing faster than we can keep up with them. For example, Boston Dynamics has developed an AI-powered robotic dog named Spot. It can deal on its own with complicated tasks, such as traversing unfamiliar terrain and figuring out how to open doors, instead of having an operator walk it through each step in the process. In dangerous situations, deploying Spot instead of police officers can potentially be a real life-saver.
To show off what its robots can do, Boston Dynamics demonstrated Spot for “60 Minutes” and also produced a video of dancing dogs that pulled in a lot of views. New York City’s police department was impressed enough with Spot that it leased one of the robots for a test. “This dog is going to save lives, protect people and protect officers, and that’s our goal,” explained Frank Digiacomo, an inspector in the department’s technical response unit. It proved handy, for example, in a hostage situation, where the NYPD’s robot, christened Digidog, was able to take food in to the hostages.
But Digidog creeped out New Yorkers who saw the mutt on patrol. Critics complained that the robot was aggravating racial conflicts and was being used as a “robotic surveillance ground drone,” as U.S. Rep. Alexandria Ocasio-Cortez put it. Following the complaints, New York sent the robot back to the manufacturer ahead of schedule.
In Dallas, a robot — ironically, one intended for bomb disposal — carried an explosive to where a gunman was hiding and blew him up. While that robot was controlled by a human, its use in Dallas nonetheless fueled debate about just how artificial intelligence could and should be used in policing. Those debates spilled over to detection systems such as police surveillance of license plates, the linking of doorbell cameras to police investigations and the tracking of suspects through cellphone pings. The Transportation Security Administration (TSA) is testing facial recognition at selected airports as an advanced system of identity checks, but it’s moving carefully because TSA officials know the furor the system could trigger.
Not everything government might do with AI invites such virulent feedback. In customer service, for example, the public sector can follow the lead of Amazon and Domino’s Pizza, which already provide rich online, AI-driven assistance. AI can accelerate document scanning and processing. It can predict service needs, from the pace of fire and EMT calls to longer-term housing crises, and it can help public employees navigate HR benefits. By analyzing vast quantities of data, it can identify the patterns in public health crises. That’s already proven invaluable as “a pandemic crystal ball” in predicting the spread of new COVID-19 variants.
A New Industrial Revolution
The promises for both the private and public sectors seem endless. Britain’s Chatham House think tank calls AI “a technology which is likely to be as transformative to human history as was the Industrial Revolution.” Deloitte predicts that, by 2024, 75 percent of governments will have launched at least three enterprisewide AI efforts.
But as illustrated by New York’s experiment with Digidog, the spread of AI poses big challenges. One of the great benefits of AI, for example, is that it can learn quickly, but the algorithms that produce the learning are often hidden from users, so people can worry that the machine isn’t treating them fairly. And what about privacy in the use of the vast amounts of data being collected? My iPhone knows that, most Saturdays, I head off to the grocery store, and it knows which grocery store I visit. When I head over to Whole Foods, it remembers what I’ve purchased. I’m not worried about what Whole Foods might do with my grocery list, but I can understand why many people are alarmed at the idea of government tracking their habits and movements.
And it’s not an unreasonable concern. Government is lagging behind the private sector in developing AI, so at least for now much of government’s use of the technology comes through private companies, which means that the information can end up in private hands and that regulating its use will likely fall behind as well.
There’s a dilemma baked into the technology here. People are demanding regulations for self-driving taxis, and they’re waking up to the racial inequality that public health algorithms can bring. The software managing waiting lists for kidney transplants, for example, discriminated against African Americans. The problem isn’t that the algorithms are evil. It’s that they rely on data that fail to account for the needs of everyone and that they don’t learn rapidly enough to correct underlying problems of inequity and violations of privacy.
A Regulatory Race to the Bottom?
The stakes here are high. Estimates of the size of the AI market vary wildly, and it’s predicted to grow to more than $1.5 trillion by 2030. Efforts to mitigate its threats haven’t kept up, McKinsey reports. From asylum applications to criminal justice, the risks are substantial.
Not only do we lack a policy for steering AI, but we don’t have a clear sense of what issues most need regulation — or who ought to do the regulating. And the temptation for a regulatory “race to the bottom” to lure a chunk of this enormous market is huge.
The ethical issues here are huge as well, especially since AI systems learn for themselves and what they learn won’t be apparent until the systems act. Once they act, it can be very difficult to determine the values that have been incorporated into the systems. The more governments deploy AI, the more they risk being complicit in the problem.
Because the scale and the stakes are so large, federal regulation makes the most sense. One approach, for example, would be to build federal “guardrails” around what private companies do and how they do it. Take autonomous vehicles: Federal standards have been elusive, and responsibility has been left to the states. A handful of them, including Arizona, California and Michigan, have taken the lead, but as is always the case, allowing the states to shape policy can result in highly uneven standards that can become captive to big private innovators.
There are now 21 states that permit autonomous vehicles. Another six govern only autonomous semi-trucks, including the practice of “platooning” — convoys of 18-wheelers bunched closely to each other to cut wind resistance. The law in Arkansas allows trucks to follow 200 feet behind each other, compared with 500 feet in Michigan, Nevada and Wisconsin. What’s legal where is just a taste of the policy confusion coming down the road.
AI might well be the biggest issue that’s attracting the least attention. Watching Spot is a gee-whiz experience. But debate over the larger issues of safety, privacy, equity, ethics and trust is lagging behind the incredibly rapid development of AI. At least for now, the resolution of those issues lies in the hands of state and local policymakers.
ChatGPT has its own ideas about regulation. And the bot gets it right with a poem I asked it to write for us:
The states may not always agree
But federalism will always be AI’s key.