(TNS) — Google and Alphabet CEO Sundar Pichai has called the advent of artificial intelligence “more profound than ... electricity or fire.”

So his call for government regulations of the technology this week, in an opinion piece in the Financial Times, raised more than a few eyebrows.

“There is no question in my mind that artificial intelligence needs to be regulated,” Pichai wrote. “It is too important not to. The only question is how to approach it.”

Whenever a powerful technology CEO encourages government intervention, everyone who depends on that technology should pay attention.

In this case, Pichai is the head of what may be the world’s most prominent artificial intelligence company. From self-driving cars to the “Smart Compose” tool in Gmail, Google has been at the forefront of this technology — which will, indeed, create tremendous paradigm shifts in everything from employment to emergency preparedness.

As Google seeks to improve its technology and fight off stiff competition in this space, it makes little sense that it would welcome government regulation.

So why would Pichai suggest the opposite?

One big reason is to head off the kind of regulation he doesn’t want. Both the U.S. and the European Union are moving closer to instituting rules for artificial intelligence, and their approaches are already diverging.

This month, the Trump administration released a draft set of guidelines for federal agencies to consider when making rules for artificial intelligence in the private sector, emphasizing that they must “avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”

Meanwhile, the European Union is contemplating sweeping rules, like a five-year ban on facial recognition technology in public spaces.

Google might be just fine with the latter regulation — Google doesn’t sell facial recognition technology, though its rivals Microsoft and Amazon do — but Pichai is clearly concerned about the possibility of conflicting global regulatory standards, and the expense and headaches they will create for the industry.

Moreover, Google has its own set of principles for dealing with artificial intelligence, and it’s been touting them as a potential framework for governments since 2018.

While the company’s framework sounds nice enough — it urges privacy features in private companies’ AI work and asks that it not reflect unfair human biases — it’s a far cry from the type of firm public policy that would protect marginalized groups, the democratic process, and other important considerations of global life in 2020.

That’s not going to be good enough for a technology as important and far-reaching as artificial intelligence. Google can, and should, be a critical partner as governments around the world seek to regulate this industry. But no company should be responsible for creating its own rules — especially with the public’s safety and privacy at stake.

©2020 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.