For years, people looking for help online have clicked their way to an official government page, checking source after source until something looked legitimate. That habit, however, is fading.
Today, many answers about unemployment, food assistance, health care or taxes arrive not from official government sources but as confident paragraphs from a generative artificial intelligence assistant. Whether governments planned for it or not, tools like OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude are becoming the front door to public services. Millions of Americans already use these tools daily, often treating AI-powered bots as general-purpose advice engines.
But when official guidance is outdated, fragmented or buried in formats that machines can’t easily read, AI systems default to retrieving whatever information is easiest to find. In an AI-first Internet, the result can be a plausible-sounding answer that nudges someone in the wrong direction. That nudge can be the difference between an application that is accepted and one that is rejected, or between a deadline met and a deadline missed.
Even if government information is accurate, it needs to be published in the way people now look for it.
Over the last two decades, government websites evolved for humans clicking around. That produced familiar patterns: eligibility rules embedded in interactive calculators, policy guidance buried in PDFs and dashboards that require clicking through multiple screens to reveal the next requirement.
For a person, that design is intuitive, if sometimes frustrating. For generative AI tools, on the other hand, this design can be functionally invisible. These tools summarize what they can access — and will tend to favor sources that are easy to extract and interpret.
With widespread use of AI assistants, governments need a new posture. Agencies at every level shouldn’t publish solely for humans using browsers. They should also publish for the AI systems that pull, gather and summarize information the moment a person asks a question.
So what should change?
It starts with treating machine-accessible publishing as civic infrastructure. A few concrete steps will help governments get there: publishing program rules, eligibility guidance, required documents and deadlines on stable web pages, ideally alongside PDFs and interactive tools rather than inside them. It also means adding visible “last updated” lines or “what’s changed” notes when new policies take effect. This requires program teams who control the rules, digital teams who control publishing and leaders who insist that official guidance be treated as mission-critical.
These tasks might feel burdensome. But the alternative is that public-facing policy becomes whatever an AI system can retrieve most easily — leading to missed deadlines, incorrect paperwork and friction for people who can least afford it. If you have ever shown up with the wrong document, or missed a deadline you didn’t know existed, you know how these small errors turn into real delays. That, in turn, results in more calls to understaffed help lines or more back-and-forth with caseworkers — all costs that land on the agency, not just the applicant.
One practical start is underway in Marin County, Calif., where work is beginning to assess how often generative AI tools cite official county sources versus secondary sites. The effort could yield replicable guidance for governments at every level.
In an AI-first Internet, governments cannot choose whether AI mediates public services. That’s already happening. But they can improve the odds that the tools people use surface — and cite — the official, accurate guidance.
This commentary is adapted from the 2025 Impact Report from the Beeck Center for Social Impact + Innovation at Georgetown University. Andrew Merluzzi is the AI innovation and incubation fellow at the Beeck Center. In his previous role at the U.S. Agency for International Development (USAID), he supported grant programs and policy at the intersection of AI and global development. Prior to USAID, he served as a program officer at the National Academies of Sciences, Engineering and Medicine.
Governing's opinion columns reflect the views of their authors and not necessarily those of Governing's editors or management.