
Governing Luminaries: AI for the People

A pioneer in AI governance talks about why policymakers must shape the wise use of this powerful technology.

Kay Firth-Butterfield
Editor's Note: This article appears in Governing's Q1 2026 Magazine.

Kay Firth-Butterfield was a leader in responsible use of artificial intelligence before many of us had heard of AI. In 2014, she became the world’s first chief AI ethics officer, establishing the role at an AI startup in Austin, Texas. Three years later, the World Economic Forum recruited her to be its first head of AI and machine learning. In 2024, she received one of four Impact awards given by TIME magazine for her work on AI governance.

Today Firth-Butterfield leads Good Tech Advisory, which advises governments and businesses on successfully adopting and mitigating risks of AI and other emerging technologies. Her new book, Coexisting With AI: Work, Love, and Play in a Changing World, offers practical advice for people and organizations navigating the potential and complexities of AI.

We talked with Firth-Butterfield as she was finishing up multiple speaking sessions at the 2026 World Economic Forum in Davos, Switzerland.

Here are edited excerpts from our interview.

Artificial intelligence is developing so rapidly, and there is so much hype around what these technologies can do. What ethical principles should government leaders prioritize?


Let’s not talk about ethics. That can mean different things to different people. We’re talking about doing the right thing, behaving responsibly. I like the concept of using AI wisely. It’s a tool like no other. It’s not like a hammer. It’s a mimic with a good memory. It can mimic being caring. It can mimic understanding life and death. It can mimic being able to think when all it is doing is working out the next best word or picture — and that can manipulate us poor humans. Let’s use AI wisely and not allow its developers and owners to manipulate us.

A recent study showed that over 25 percent of American [adults] are in a sexual relationship with AI. I think this is because they don’t understand it’s a machine that can’t think, can’t feel, can’t do anything apart from predicting the next word. One of the most important things we can do is increase AI literacy in the general public. It feels to me that governments are obliged to do that as part of their social contract.

I’ll mention one other thing that is a common fallacy among politicians — that regulation will stifle innovation. I drive a sports car, and I bought this particular model because it has a great reputation for safety. Because of regulation, the manufacturer must build it to recognized standards so I can verify its performance and compare it to other models. With AI, we have a tool that can fundamentally harm human beings. Putting some regulation around it won’t stop people from building AI. It will just provide a safety net that we humans need. I’m on the advisory board for a nonprofit called Fathom and one of the things they’re trying to do is create a checkmark that shows an AI tool has been validated by an outside expert as not likely to cause harm.

What does wise AI use look like in city hall or in a state agency? Are there some practical ways to frame it for policymakers?


First of all, you need good procurement — a process that can help you make good decisions from the beginning. There are a lot of people out there selling snake oil — in Davos, everybody on the promenade was selling AI for something. Get help from an independent expert if you’re unsure of how to evaluate AI tools. If you’re considering a tool for hiring, for example, you need to avoid a bad one that could get you sued.

Also, don’t leave AI to your chief technology officer or your chief AI officer. Their job is to build out tech across your organization. Your job as a policymaker is to think about how you serve humans with this tech. That’s completely different. You need the entire C-suite involved; otherwise, you’ll have the CTO doing one thing and maybe the chief human resources officer doing another, with no conversation going on. Everybody’s siloed. The best AI deployments are guided by a good internal committee that meets frequently — and they don’t leave out risk and compliance and the general counsel. Their job is to deliver services together.

You also need good AI literacy among your employees. The disasters that I am seeing in companies are where everybody’s been told to use an AI tool, but nobody gave them any basic training. That’s when you get workslop — people using AI to create poor-quality work that someone else has to clean up. It’s also where you get hallucinations that show up in your reports and other proprietary data.

How could AI fundamentally change the way government operates — things like how services are delivered, how decisions are made or how policies are designed?


The AI we have today is not fit to run our government services. Governments exist to serve the citizens who elect them. It’s different from business, where you can take a bet on something that might make you more profitable. If you start using AI throughout services to your citizens and it makes mistakes, you are fundamentally in breach of the requirement to look after your citizens. We saw an example in Australia where they tried it out and a whole bunch of people had their benefits disrupted because of it.

I think it behooves governments to think very carefully and have small use cases. I say this to both governments and businesses: Find something that is not going to injure anybody and try it and then build on that success. Don’t try to go from zero to 100. Don’t bow to the pressure that you are feeling from the hype of AI.

What new skills or mindsets do public officials and agency managers need to lead effectively now?


One question is how we train managers today. They are managing individuals; they’re managing individuals who are using AI; and they’re managing AI bots or AI agents. In that ecosystem, how do you make sure AI is a benefit? If we aren’t careful, AI won’t elevate us to the next stage; it will weigh us down with having to constantly check its work.

Another question is, what do we want in our leaders? It’s said that AI can move you from a D grade to a B grade if you use it well, or it can bring you down from an A grade to a B grade if you don’t. Are we looking for somebody who can use AI really well? Or are we still looking for that certain something that sets CEOs apart from others? Is leadership a uniquely human thing, or is it something that can be added to by AI?

What broader societal changes will AI bring, and what will those changes mean for policymakers?


There are some things we all need to think about — that’s why I wrote my book. For example, governments should require labeling on toys that are AI enabled. Parents may think it sounds like a good idea, but we don’t know what the toy teaches. Most of them are made in China; is the data going to China? A lot of these toys have voice and facial recognition — and parents are going blindly into this.

We also hear how everyone needs to learn how to use AI — but not so much about how everyone must learn to use it sensibly, so it doesn’t harm them. There’s a study that shows undergraduate students graduating last year probably didn’t learn enough about their degree subject because they outsourced the work to ChatGPT. That’s a problem for students because they paid their money and they’re less educated. It’s a problem for employers because they can’t assume the degree means someone is employable and it’s a problem for society because people are not learning how to think critically.

Another issue is elder care. We’re an aging population. Should we care for elderly people using AI-enabled robots and AI? Do elderly people need to consent to being cared for by a robot?

So, you’re saying it’s incumbent on state and local officials to determine wise uses for AI in all these different contexts?


Absolutely. AI will be present in every part of our lives. We may need to rethink education curricula because an AI tool can teach children at their own pace, and they might only need a few hours of instruction. The rest of the day could be spent learning social skills, which they won’t get if they’re only interacting with AI. It’ll also reshape issues like health care and elder care — even our intimate lives. It seems like there’s no place AI is not manifesting.

So, we have an opportunity now to shape how AI impacts us. When future leaders look back at this moment, what will distinguish governments that did a good job of using AI wisely versus those that didn’t?


Governments that will have done it right are those that looked carefully at what they were buying, made sure they bought the right tools for the right jobs, trained their workers to use those tools and put the needs of their citizens very firmly at the front. They also really pushed companies to verify that their AI models worked or enacted regulations requiring it.

And more than that, the governments seen as succeeding will be those that really looked into the future and considered what reforms were needed — in health care or service delivery or education — to ensure that humans get the best out of this technology.

Steve Towns is the former editor of Government Technology and former executive editor for e.Republic LLC, publisher of GOVERNING, Government Technology, Public CIO and Emergency Management magazines. He has more than 20 years of writing and editing experience at newspapers and magazines, including more than 15 years of covering technology in the state and local government market. Steve now serves as the Deputy Chief Content Officer for e.Republic.