AI Regulation Done Right

The District of Columbia’s approach isn’t perfect, but overall it’s a balanced and well-thought-out effort that protects individuals and doesn’t overly burden businesses. It could serve as a model for other governments.

While politicians at the federal level have raised concerns about numerous facets of artificial intelligence, the District of Columbia Council has done what they have not: proposed AI anti-discrimination regulations that are reasonable and limited in scope. A far cry from the anti-AI regulations enacted in New York City, which are generating significant concern among businesses, the D.C. bill is a model for other local governments and states — and perhaps even federal policymakers — to follow.

What makes the proposed D.C. regulations such a model? First, they apply only to larger firms (those with at least $15 million a year in revenue), companies that derive more than half of their revenue from providing data to others, and providers of AI services to others. Small businesses are thus spared a potentially significant compliance burden.

Second, while the use of AI to make biased decisions is rightly proscribed, New York City and the district differ significantly in their requirements for “bias audits.” New York’s law requires that the audits be conducted by a third party and posted publicly — an invitation to lawsuits over even slight differences in selection rates between groups. The D.C. proposal does require an audit annually and before any new system is used, but the business entity itself conducts and retains the audit; a report must be sent to the district’s attorney general, and public access to that report appears limited.

Overall, what D.C. is considering is a balanced and well-thought-out law. It requires that individuals be told how their data is collected, how it will be used and stored, and for how long it will be retained. It requires that individuals be notified after an adverse decision and gives them the right to submit corrections that a human reviews. The bill also doesn’t delay businesses’ hiring processes the way New York’s two-week notice period does.

Notably, the D.C. attorney general’s office is given the power to act on violations, promoting compliance and punishing noncompliance, rather than relying on the general deterrent to AI use that New York’s law imposes through mandatory public disclosure.

The D.C. Council, and others seeking to use its bill as a model, would be wise to make a few changes. Problematically, the proposed regulation uses an overly broad definition of personal information, one that ensnares data such as email addresses, online personal identifiers, phone numbers and network hardware addresses. Its general prohibition language may trap legitimate decision-making that merely correlates with a covered characteristic. And it requires businesses to provide notice in any language used by 500 or more D.C. residents, which could be burdensome — particularly because it doesn’t indicate how organizations should determine which languages meet this criterion.

The D.C. proposal is also a bit lax on consumer notification. Allowing businesses to provide notice up to 30 days after a change reduces the burden on firms, but it also prevents individuals from making timely, informed decisions about sharing their information. Firms should be required to provide notice of current policies at the point of collection, and they should not use data in a way contrary to the policy under which it was collected without giving prior notice and the opportunity to opt out. A grace period could allow for a good-faith update process: requiring an immediate change at a central web-based location, for example, while providing a short period to update physical signage and other materials not directly involved in data collection.

Nevertheless, with regulatory efforts also ongoing in Maine, Massachusetts, New Hampshire, New Jersey, Oregon, Pennsylvania, Rhode Island and Vermont, and stalled efforts in a handful of other states, the district’s proposed regulation would be an excellent starting point for others. More broadly, it is an example of a regulation that responds to a specific concern about the use of AI without infringing on free speech rights or damaging AI technology development. The latter point is significant, given the interest of America’s strategic competitors in AI and the dangers posed by overregulation, as at least one of those competitors has demonstrated.

While a few changes are needed, the D.C. Council should be commended for its sensible regulatory approach to this emerging and complex issue. Hopefully, with those changes, its approach will be embraced by others.

Jeremy Straub is the director of North Dakota State University’s (NDSU) Institute for Cyber Security Education and Research, an NDSU Challey Institute Faculty Fellow and an assistant professor in the NDSU Computer Science Department. The author’s opinions are his own.



Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.