Maybe Alphabet CEO Wants AI Regulations for the Wrong Reasons

Sundar Pichai recently wrote that he wants stricter regulations for artificial intelligence. But he may simply be trying to shape federal rules to look more like the ones his company already has.

(TNS) — Google and Alphabet CEO Sundar Pichai has called the advent of artificial intelligence “more profound than ... electricity or fire.”

So his call for government regulations of the technology this week, in an opinion piece in the Financial Times, raised more than a few eyebrows.

“There is no question in my mind that artificial intelligence needs to be regulated,” Pichai wrote. “It is too important not to. The only question is how to approach it.”

Whenever a powerful technology CEO encourages government intervention, everyone who depends on that technology should pay attention.

In this case, Pichai is the head of what may be the world’s most prominent artificial intelligence company. From self-driving cars to the “Smart Compose” tool in Gmail, Google has been at the forefront of this technology — which will, indeed, create tremendous paradigm shifts in everything from employment to emergency preparedness.

As Google seeks to improve its technology and fight off stiff competition in this space, it makes little sense that it would welcome government regulation.

So why would Pichai suggest the opposite?

One big reason is to head off the kind of regulation he doesn’t want. Both the U.S. and the European Union are moving closer to instituting rules for artificial intelligence, and their approaches are already diverging.

This month, the Trump administration released a draft set of guidelines for federal agencies to consider when making rules for artificial intelligence in the private sector, emphasizing that they must “avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”

Meanwhile, the European Union is contemplating sweeping rules, like a five-year ban on facial recognition technology in public spaces.

Google might be just fine with the latter regulation — Google doesn’t sell facial recognition technology, though its rivals Microsoft and Amazon do — but Pichai is clearly concerned about the possibility of conflicting global regulatory standards, and the expense and headaches they will create for the industry.

Moreover, Google has its own set of principles for dealing with artificial intelligence, and it’s been touting them as a potential framework for governments since 2018.

While the company’s framework sounds nice enough — it urges privacy features in private companies’ AI work and asks that their systems not reflect unfair human biases — it’s a far cry from the type of firm public policy that would protect marginalized groups, the democratic process, and other important considerations of global life in 2020.

That’s not going to be good enough for a technology as important and far-reaching as artificial intelligence. Google can, and should, be a critical partner as governments around the world seek to regulate this industry. But no company should be responsible for creating its own rules — especially with the public’s safety and privacy at stake.

©2020 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.
