
Artificial Intelligence

After more than 20,000 voters in New Hampshire received a deepfake phone call asking them to skip the state’s January primary, at least 39 states are considering measures to ensure transparency of artificial intelligence used in political ads.
About a quarter of businesses across the nation have adopted AI, and many are beginning to use the technology in hiring. Only three states currently require employers to obtain consent before using AI in the hiring process.
The District of Columbia’s approach isn’t perfect, but overall it’s a balanced and well-thought-out effort that protects individuals and doesn't overly burden businesses. It could serve as a model for other governments.
Six of the state agency’s regional units, including the North Bay area, are testing new video technology that will utilize AI to speed response to fires and other natural disasters as they happen.
Artificial intelligence allows teachers to create virtual reality spaces to help further their students’ education in a protected environment. Many expect to see the region’s businesses soon adopt the tech as well.
We’re already seeing the potential for what tools like ChatGPT can do to improve public services. It’s time for governments at all levels to invest in training their people in the technology.
Does your local government need a stance on generative AI? Boston encourages staff’s “responsible experimentation,” Seattle’s interim policy outlines cautions, and King County considers what responsible generative AI use might be.
Two former Republican governors are already running, and a handful more could still announce their candidacies. Meanwhile, artificial intelligence will make political ads even worse. And does the Supreme Court even care about corruption?
Artificial intelligence has potential, but it can’t replace simple, reliable technology solutions and the human touch. And there’s a risk that it will automate existing inequities instead of alleviating them.
Colorado has a draft rule that would impose oversight and transparency requirements on insurance companies that use big data about consumers or feed such data into predictive models and algorithms.
Proposed legislation would require the police department to get approval before acquiring any new surveillance technology and would establish an oversight board to monitor the city’s use.
The nation’s first biometric smart gun will use both fingerprint and facial recognition technology to ensure that only authorized users can fire the weapon. The creator hopes it will help reduce accidental deaths and gun suicides.
The ability of the new generation of generative artificial intelligence systems to create convincing but fictional text and images is setting off alarms about fraud and misinformation.
Local-government officials are sometimes overwhelmed by new and improved digital tools. But they need to be open to technology that can help residents and public employees deliver critical services.
Research has found that computer models can predict the likely fate of proposed legislative amendments and identify the most effective pathways for lobbyists. Combined with micro-legislation, this technology could muddle transparency.
A proposed bill would establish an Office of Artificial Intelligence, create a task force to study the emerging technology, and establish an AI bill of rights. If passed, the legislation would be the first of its kind.