Alphabet’s AI critics are asking the wrong questions
Bloomberg Opinion

The NLPC called on Google to report on whether it was stealing people’s data to train its AI systems.

Credit: Reuters Photo

By Parmy Olson


A small group of Alphabet Inc. shareholders made strange bedfellows recently when they demanded the company pay more attention to artificial intelligence risks.

The National Legal & Policy Center (NLPC), for instance, was worried about AI’s impact on privacy rights. Inspire Investing — a shareholder that backs “biblically responsible investing” and sometimes targets so-called woke corporate policies — complained it could censor religious and political speech. And the Shareholder Association for Research & Education (SHARE) said Google’s AI could inadvertently erode human rights and fuel discrimination.

Technology’s potential harms to humans have long been a unifying force across the political spectrum, bringing Republican and Democratic lawmakers together to berate social media companies for fueling a mental health crisis. Yet in much the same way those efforts have largely manifested as political theater, so too do these latest critiques of AI.

That Alphabet’s wider shareholders voted down the proposals at the company’s urging doesn’t particularly matter, since they would have done little to help prevent AI hazards. The reason: The trio wanted Alphabet to keep grading its own homework.

The NLPC called on Google to report on whether it was stealing people’s data to train its AI systems. Inspire wanted an assessment of AI bias against religious and political views, and SHARE wanted the company to write a human rights impact study of AI-driven advertising, according to Alphabet’s 2025 proxy statement.

Alphabet stated none of that was necessary since it was conducting adequate research into risks. “We regularly publish AI Responsibility reports, which provide detailed insights into our policies, practices, and processes for developing advanced AI systems,” the company said in a statement.

It would certainly be a good thing to shine a spotlight on whether Google is misusing personal data, and the other proposals raise worthwhile questions, but all of them lack substance because they call for disclosures commissioned by Alphabet itself, and crucially not by independent regulators or researchers. That makes these shareholder proposals look more like performative activism than an effort to create meaningful change, not least because some groups like the NLPC have filed similar proposals at several other tech companies this proxy season.

Silicon Valley has long mastered the art of what you might call transparency washing, releasing glossy reports — like Meta Platforms Inc.’s reports on hateful conduct on Facebook, or Uber Technologies Inc.’s reports on safety statistics — that aren’t audited by a third party. The lack of laws requiring disclosures means the companies keep decisions around content moderation, algorithm design and now AI model design entirely opaque, pointing to their detailed reports whenever lawmakers and civil society groups press them with questions.

When OpenAI Chief Executive Officer Sam Altman was asked about AI safety risks during his May 2023 Senate testimony, he similarly talked up the research the company conducted, as well as the “independent audits by independent experts of the models’ performance on various metrics.” But a key ingredient of AI models is their training data, something that OpenAI has for years kept secret. If regulators or researchers could access that data, they could better scrutinize OpenAI’s technology for security flaws, bias or copyright violations. The company has pointed to trade secrets as its reason for keeping that under wraps, but liability is just as likely the reason.

If Alphabet’s shareholders wanted to go down the challenging road of pushing for real change, they would demand external oversight: independent, technical auditors — say from one of the Big Four accounting firms or academic institutions — to evaluate the company’s systems before they get deployed. Meta made a start in this direction a few years ago by hiring Ernst & Young LLP to audit part of its transparency reports for Facebook, but it could go much further.

Alphabet’s trio of activist shareholders are relatively small and so perhaps don’t have the kind of leverage to influence the financial setting of an annual general meeting. But their voices — and the NLPC’s in particular, given its conservative leanings — might carry more weight if they pushed for some of these regulatory ideas through other political channels. They could, for instance, lobby for lawmakers to set up something like the Food and Drug Administration but for AI, forcing companies to meet certain standards before releasing their tech to the public. Until then, all that bipartisan consensus is being drowned out in a system that will always favor the status quo.

(Published 11 June 2025, 15:56 IST)