Governing in the Age of the Artificially Intelligent City

The public sector needs a discussion about issues of transparency, fairness and the preservation of human values.

Downtown Hong Kong at night. The Hong Kong Immigration Department developed a cognitive computing system to sort visa applications. (Flickr/Ateens Chen)
The last few years have seen cities around the world embarking on the journey to make their infrastructure and operations more intelligent. Not only are cities investing significant resources to upgrade their physical infrastructure with technology, but analytical solutions are being deployed at ever-increasing rates to exploit large and ever-growing data reservoirs. And governments are seriously exploring machine learning and cognitive computing systems that hold the promise of creating artificially intelligent cities.

These technologies are rolling out at every level of government, but it is in cities where their impact on citizens and services will be felt most directly. What is lacking, though, is enough debate and discussion on how to govern in an age of artificially intelligent cities. There is an opportunity for public agencies to lead the conversation about how to build next-generation computing systems that are responsible and constructed in a manner that preserves and advances human values. Public agencies should establish forums for dialogues on these issues, and they need to do so now.

I define the artificially intelligent city as one where algorithms are the dominant decision-makers and arbitrators of governance protocols -- the rules and frameworks that enable humans and organizations to interact, from traffic lights to tax structures -- and where humans have limited say in the choices presented to them for any given interaction. While one might deem artificially intelligent cities to be in the distant future, or even relegate the concept to the realm of science fiction, such thinking is incorrect.

In fact, machine-learning algorithms are already being deployed in large swaths of city operations, including law enforcement, transportation, utilities, waste management, and housing and urban development. For the most part, today's intelligent systems are focused on assisting or augmenting human decision-makers. Consider an example from immigration. The Hong Kong Immigration Department processes millions of visa applications annually. To streamline the process, the department developed a cognitive computing system that sorts applications into three broad categories: approved, denied and a gray area. Final decisions are still made by a visa officer, but in the near future such systems will move into the business of autonomous governance, with humans relegated to the roles of observers and auditors.

Many companies are working to build systems using artificial intelligence that seem destined to be deployed in some form by governments. Soul Machines, for instance, is developing customer-service chatbots that, employing machine vision, can understand customers' facial expressions and respond with empathy. IBM and Local Motors are experimenting with a driverless electric shuttle bus that can assist people with disabilities -- for instance, using machine vision and voice cues to help a visually impaired passenger find an empty seat. Two Connecticut professors, Susan and Michael Anderson, are working on building robots that care for seniors and that have been taught ethical behavior.

Designing these systems from a value-sensitive perspective will require ensuring that the voices of various stakeholder groups are adequately represented in the technologies being deployed. Doing so requires the public sector to be more open and engaged with citizens than it is today, since next-generation computing systems will shape how citizens interact with public services and public agencies.

Real challenges also exist in integrating these systems with the public workforce. Are we hiring people today on the assumption that they will be augmented with technology in the future? How are we assigning our current experts so that their knowledge can be captured within these systems? Think of a world of government in which, as in today's financial markets, most of the activity (the equivalent of trades) takes place between machines with little human intervention.

Because governments must uphold values of transparency and fairness, they will also need to be wary of some of the practices we're seeing in the private sector. Staples and Home Depot, for example, were found to be charging differential prices for products based on customers' ZIP codes, with customers from lower-income areas paying more. Credit lenders use algorithms, rather than human judgment, to determine which loan applications will be approved. Insurance companies are using algorithms to set differential premiums. While some may argue that these practices promote efficiency and remove fallible human judgments, customers have little or no power to challenge the decisions.

That approach won't work for government. Stakeholders impacted by system outcomes should be entitled to explanations and procedural justice. As these systems are deployed, agencies will need to find ways to open them up for inspection and audit so that the public understands how they operate and will trust the outcomes those systems generate.

Being a technologist, I have a predisposition to thinking that technology can advance social good and enhance public value. But I am also a realist. Unless we increase the level of our discourse on what it really means to govern in the age of the artificially intelligent city, we risk waking up one day to a new reality that was designed by, and for, the efficiency of machines.

A professor at Queensland University of Technology's QUT Business School