
The Nuances of Governing With AI

Generative AI is on track to become ubiquitous in public services, but it will introduce unforeseen challenges. Success will require not only an understanding of coding and statistics but also the knowledge humans draw from lived experience.

Graffiti covering an abandoned public-transit railcar in Boston. The city is using ChatGPT to discover patterns in 311 call center reports, including an in-depth look at graffiti-related issues. (Sasha Fenix/Shutterstock)
During a recent conversation with the Civic Analytics Network, a community of practice for city chief data officers coordinated out of the Bloomberg Center for Cities at Harvard, we caught a glimpse of the not-so-distant, and more complex, developments that will move generative artificial intelligence from a constant topic of conversation to an integral part of every aspect of public services.

Generative AI, which uses existing data and other information to create new content, will eventually become a ubiquitous component of public service, enhancing governments’ and residents’ capacities in critical areas from citizen engagement, policy analysis and operations to improved decision-making.

These CDOs predicted that, sooner rather than later, AI will augment and optimize government’s internal operations. First on their list of applications is using AI’s natural-language tools to query and process information and to draft RFPs, job descriptions, correspondence, emails, meeting notes and similar documents. These activities prioritize organizing objective information from readily available sources, accelerating administrators’ access to best practices and technical expertise.

An interview with Boston CIO Santiago Garces, whose city often leads the country in creative applications of government technology, highlighted for us the truly expansive power of AI in the hands of more-technical staff. Garces walked us through his staff’s use of ChatGPT to discover patterns in 311 call center reports, including an in-depth look at graffiti-related issues. He is essentially looking to force-multiply his team’s capacity by outsourcing some of the coding to a tool, freeing up more time for analysis.

For instance, Garces and his team utilized OpenAI’s recently released Code Interpreter, a plug-in for the GPT-4 version of ChatGPT. The feature enables the web-based chatbot to accept uploaded data and run code (principally Python) for data cleaning, analysis and visualization. In effect, Code Interpreter makes ChatGPT almost a personal junior-level data analyst, one that takes commands in natural language and produces processed reports in seconds.
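
To make that workflow concrete, here is a minimal sketch, in pandas, of the kind of analysis such a tool might generate from an uploaded 311 export. The file and column names ("311_cases.csv", "case_topic", "neighborhood", "open_dt") are illustrative assumptions, not Boston’s actual schema.

```python
import pandas as pd

# Load a 311 export; the file and field names here are hypothetical.
calls = pd.read_csv("311_cases.csv", parse_dates=["open_dt"])

# Filter to graffiti-related cases and count them by neighborhood and month.
graffiti = calls[calls["case_topic"].str.contains("graffiti", case=False, na=False)]
monthly = (
    graffiti
    .groupby(["neighborhood", graffiti["open_dt"].dt.to_period("M")])
    .size()
    .rename("cases")
    .reset_index()
)

# Surface the heaviest graffiti caseloads by neighborhood and month.
print(monthly.sort_values("cases", ascending=False).head(10))
```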

These advanced tools significantly compress the time needed to process basic data or text (such as the distribution of urban greenery), freeing more time for in-depth investigations of specific and related issues (such as correlations among streetscape maintenance spending, 311 calls related to trees and plants, and the quality of urban greenery across the city).
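
As a hedged illustration of that second, deeper step, the sketch below computes a correlation matrix across neighborhoods. The spending figures, call counts and greenery-quality index are invented placeholders, not real city data.

```python
import pandas as pd

# Hypothetical per-neighborhood figures, for illustration only.
neighborhoods = pd.DataFrame({
    "streetscape_spending": [1.2e6, 0.8e6, 2.1e6, 0.5e6],  # annual dollars
    "tree_311_calls":       [340, 510, 190, 620],          # calls per year
    "greenery_quality":     [0.71, 0.55, 0.83, 0.42],      # 0-1 index
}, index=["A", "B", "C", "D"])

# A correlation matrix reveals association, not causation; answering
# the "why" still requires context and tacit knowledge.
print(neighborhoods.corr(method="pearson").round(2))
```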

Of course, every breakthrough produces challenges of its own. The prerequisite for interacting effectively and safely with an “AI data analyst” like Boston’s, for example, is a decent understanding of coding and statistics, mainly because ChatGPT lacks a complete understanding of the intentions behind every request. It therefore becomes critical for human users to frame clear, insightful questions. AI can also make mistakes, often non-obvious ones, when asked to perform more-complicated tasks such as data modeling or sophisticated data visualization. The human capacity to verify AI-generated results and identify irregularities is an indispensable component of ensuring a safe interaction with the technology.
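
A simple example of that human verification step: before trusting an AI-produced summary table, an analyst can check it against the raw data. The file and column names below are assumptions for illustration, not a prescribed workflow.

```python
import pandas as pd

raw = pd.read_csv("311_cases.csv")                  # original export
summary = pd.read_csv("ai_generated_summary.csv")   # table produced by the AI

# Do the summarized counts add back up to the raw case total?
assert summary["cases"].sum() == len(raw), "AI summary drops or double-counts cases"

# Did the AI silently discard rows with missing neighborhoods?
print("Unassigned cases:", raw["neighborhood"].isna().sum())
```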

Another major problem, both for the operation of advanced AI features and for their human oversight, involves context and tacit knowledge — the often-hard-to-quantify skills, ideas and understandings that humans gain through lived experience. Urban issues are particularly complex, as they usually involve many stakeholders, do not have easily traceable origins and do not point toward a definitive solution. While data-driven analysis and research may be good at revealing relationships or describing existing phenomena, they do not necessarily provide direct evidence to answer the “why” question.

Tacit knowledge gained from in-person street-level work could be helpful for clarifying the complicated reasons behind an urban issue. For instance, individuals who live in communities ignored for economic or racial reasons may see and understand events quite differently from the way public officials sorting through the data in a downtown building see them. A city employee on a truck in a neighborhood will see and understand localized circumstances with insights unmatched by those not touching the problems in real time, or by an algorithm for that matter.

Nevertheless, generative AI could be a helpful tool for incorporating tacit knowledge. The critical challenge remains balancing proficiency with AI, the need to capture tacit knowledge and the task of carefully reviewing AI-generated results. To meet it, we expect chief data officers and their data analytics teams to play a larger role in arbitrating and reviewing AI-driven analysis.

In short, institutionalizing AI applications is like aiming at a moving target. The evolving nature of AI will constantly introduce unforeseen changes to government’s existing organizational setup and to its external relationships with residents and private-sector collaborators. Our recent conversations showed us that acute awareness, curiosity and caution are essential when integrating this new technological companion, but so are technical expertise and continuing respect for the lessons of lived experience.



Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.
Stephen Goldsmith is the Derek Bok Professor of the Practice of Urban Policy at Harvard Kennedy School and director of Data-Smart City Solutions at the Bloomberg Center for Cities at Harvard University. He can be reached at stephen_goldsmith@harvard.edu.
Juncheng “Tony” Yang is a doctoral candidate at the Harvard Graduate School of Design and a researcher for Data-Smart City Solutions at the Bloomberg Center for Cities at Harvard University. He can be reached at juncheng_yang@gsd.harvard.edu.