AI: Which rules do the top tech moguls want?
The leaders of tech giants such as X, OpenAI, and Meta are shaping the debate over how to regulate artificial intelligence. Their growing dominance is raising concerns among researchers and activists.
As artificial intelligence reshapes industries and the very fabric of society, policymakers are grappling with how to regulate it.
In the debate, the CEOs of the world's leading tech companies have emerged as prominent voices, offering their perspectives on the potential benefits and risks of AI.
However, researchers and activists express concerns over the growing influence of Big Tech on the conversation.
They point to the overwhelming dominance of American companies, raising questions about a lack of representation from other world regions, particularly the Global South.
Moreover, they warn that the growing influence of corporate leaders could overshadow critical issues such as privacy infringement and worker protection.
"We've seen these companies very skillfully manage to set the terms of what the debate should be," Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, told DW.
Here are the most influential voices and what they've been advocating for:
Elon Musk: The prophet of doom
No corporate leader has been as outspoken about the potential existential risks posed by artificial intelligence as Elon Musk, the billionaire entrepreneur who heads several tech corporations, including a new AI venture called xAI.
For years, Musk has been sounding the alarm about AI's potentially catastrophic impact on civilization. As early as 2018, he declared artificial intelligence "far more dangerous than nukes." During a conversation with British Prime Minister Rishi Sunak earlier this month, he reiterated warnings that AI could be "the most disruptive force in history" and called for regulators to act as a "referee."
At the same time, Musk has also cautioned against excessive oversight, telling Sunak that governments should avoid "charging in with regulations that inhibit the positive side of AI."
By emphasizing such existential risks, Musk keeps deflecting attention from pressing technological concerns such as how to safeguard user data or ensure the fairness of AI systems, says Daniel Leufer, a senior policy analyst at digital rights group Access Now in Brussels.
"He is diverting attention from the technology we're dealing with at the moment to things that are quite speculative and often in the realm of science fiction," Leufer told DW.
Sam Altman: The regulators' whisperer
In November 2022, San Francisco-based OpenAI released ChatGPT, becoming the first company to make a large-scale generative AI system available to the public online. Since then, the company's CEO, Sam Altman, has embarked on a global tour to meet with lawmakers from Washington, D.C., to Brussels and discuss how to regulate AI.
This has catapulted him to the forefront of the debate. During his meetings, Altman warned that high-risk AI applications could cause "significant harm to the world" and needed to be regulated. In the same breath, he has offered OpenAI's expertise to guide policymakers through the complexities of cutting-edge AI systems.
"That's brilliant corporate communications," observed Cambridge University professor Gina Neff, "Essentially, he's saying, 'Don't trust our competitors, don't trust yourselves, trust us to do this work.'"
And yet, Neff warned that OpenAI's approach, while effective at advancing its own interests, may not adequately represent the diverse voices of society: "We call for more democratic accountability and participation in these decisions, and that's not what we're hearing from Altman," she said.
Mark Zuckerberg: The silent giant
When it comes to Meta, another leading company in AI development, CEO Mark Zuckerberg has remained notably quiet in the debate. In a September address to US lawmakers, Zuckerberg advocated for collaboration among policymakers, academics, civil society, and industry "to minimize the potential risks of this new technology, but also to maximize the potential benefits."
Apart from that, he appears to have largely delegated the regulatory discussion to his deputies, such as Meta's President for Global Affairs, Nick Clegg, a former British politician.
On the sidelines of the recent AI summit in the UK, Clegg downplayed fears of existential AI risks and instead emphasized more immediate threats, such as AI being used to interfere in the elections due in the UK and the US next year. He also advocated for short-term solutions to issues such as detecting AI-generated content online.
Dario Amodei: The new kid on the block
And then there's Anthropic. Founded in 2021 by former members of OpenAI, the safety-focused AI company has swiftly attracted substantial investments, including a potential $4 billion (€3.8 billion) from tech behemoth Amazon. Despite the firm's youth, its CEO, Dario Amodei, has already carved out a niche in the AI regulation debate.
During a recent address to lawmakers at the AI Safety Summit at Bletchley Park, Amodei cautioned that while the dangers posed by current AI systems may be relatively limited, they are "likely to become very serious at some unknown point in the near future," according to a company-issued readout.
To address these looming threats, Amodei presented lawmakers with a methodology developed by his firm: an approach that categorizes AI systems based on the potential risks they pose to the safety of users. A similar methodology, he added, could serve as a "prototype" for how to draft AI regulation.
However, digital rights activist Daniel Leufer of NGO Access Now cautioned against relying too heavily on corporate entities like Anthropic to draft policy.
While their contributions to the debate were needed and helpful, policymakers should maintain their independence, he stressed. "They should absolutely not be the ones who are dictating policy," Leufer said. "We should be very careful about letting them set the agenda."