A fierce debate over how much to focus on the supposed existential risks of artificial intelligence defined the kickoff of the UK's AI Safety Summit on Wednesday, highlighting broader tensions in the tech community as lawmakers propose regulations and safeguards.
Tech leaders and academics attending the summit at Bletchley Park, the former home of secret World War II code-breakers, disagreed over whether to prioritize immediate risks from AI — such as fueling discrimination and misinformation — versus concerns that it could lead to the end of human civilization.
Some attendees openly worried so-called AI doomers would dominate the proceedings — a fear compounded by news that Elon Musk would appear alongside British Prime Minister Rishi Sunak shortly after the billionaire raised the specter of AI leading to "the extinction of humanity" on a podcast. On Wednesday, the UK government also unveiled the Bletchley Declaration, a communique signed by 28 countries warning of the potential for AI to cause "catastrophic harm."
"I hope that it doesn't get dominated by the doomer, X-risk, 'Terminator'-scenario discourse, and I'll certainly push the conversation towards practical, near-term harms," said Aidan Gomez, co-founder and chief executive officer of AI company Cohere Inc., ahead of the summit.
Top tech executives spent the week trading rhetorical blows over the subject. Meta Platforms Inc.'s chief AI scientist Yann LeCun accused rivals, including DeepMind co-founder Demis Hassabis, of playing up existential risks of the technology in an attempt "to perform a regulatory capture" of the industry. Hassabis then hit back in an interview with Bloomberg on Wednesday, calling the criticisms preposterous.
On the summit's fringes, Ciaran Martin, the former head of the UK's National Cyber Security Center, said there's "genuine debate between those who take a potentially catastrophic view of AI and those who take the view that it's a series of individual, sometimes-serious problems, that need to be managed."
"While the undertones of that debate are running through all of the discussions," Martin said, "I think there's an acceptance from virtually everybody that the international, public and private communities need to do both. It's a question of degree."
In closed-door sessions at the summit, there were discussions about whether to pause the development of next-generation "frontier" AI models and the "existential threat" this technology may pose "to democracy, human rights, civil rights, fairness, and equality," according to summaries published by the British government late Wednesday.
Between seminars, Musk was "mobbed" and "held court" with delegates from tech companies and civil society, according to a diplomat. But during a session about the risks of losing control of AI, he quietly listened, according to another attendee, who said the seminar was nicknamed the "Group of Death."
Matt Clifford, a representative of the UK Prime Minister who helped organize the summit, tried to square the circle, suggesting the disagreement over AI risks wasn't such a dichotomy.
"This summit's not focused on long-term risk; this summit's focused on next year's models," he told reporters on Wednesday. "How do we address potentially catastrophic risks — as it says in the Bletchley Declaration — from those models?" he said. "The 'short term, long term' distinction is very often overblown."
By the end of the summit's first day, there were some signs of a rapprochement between the two camps. Max Tegmark, a professor at the Massachusetts Institute of Technology who previously called for a pause on the development of powerful AI systems, said "this debate is starting to melt away."
"Those who are concerned about existential risks, loss of control, things like that, realize that to do something about it, they have to support those who are warning about immediate harms," he said, "to get them as allies to start putting safety standards in place."