US gathers allies to talk AI safety as Trump’s vow to undo Biden’s AI policy overshadows their work
SAN FRANCISCO (AP) — President-elect Donald Trump has vowed to repeal President Joe Biden’s signature artificial intelligence policy when he returns to the White House for a second term.
What that actually means for the future of AI technology remains to be seen. Among those who could use some clarity are the government scientists and AI experts from multiple countries gathering in San Francisco this week to deliberate on AI safety measures.
Hosted by the Biden administration, officials from a number of U.S. allies — among them Australia, Canada, Japan, Kenya, Singapore, the United Kingdom and the 27-nation European Union — began meeting Wednesday in the California city that’s a commercial hub for AI development.
Their agenda addresses topics such as how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse.
It’s the first such meeting since world leaders agreed at an AI summit in South Korea in May to build a network of publicly backed safety institutes to advance research and testing of the technology.
“We have a choice,” said U.S. Commerce Secretary Gina Raimondo to the crowd of officials, academics and private-sector attendees on Wednesday. “We are the ones developing this technology. You are the ones developing this technology. We can decide what it looks like.”
Like other speakers, Raimondo addressed both the opportunities and risks of AI, including what she called "the possibility of human extinction," and asked why anyone would allow that.
“Why would we choose to allow AI to replace us? Why would we choose to allow the deployment of AI that will cause widespread unemployment and societal disruption that goes along with it? Why would we compromise our global security?” she said. “We shouldn’t. In fact, I would argue we have an obligation to keep our eyes at every step wide open to those risks and prevent them from happening. And let’s not let our ambition blind us and allow us to sleepwalk into our own undoing.”
Hong Yuen Poon, deputy secretary of Singapore’s Ministry of Digital Development and Information, said that a “helping-one-another mindset is important” between countries when it comes to AI safety, including with “developing countries which may not have the full resources” to study it.
Biden signed a sweeping AI executive order last year and this year formed the new AI Safety Institute at the National Institute of Standards and Technology, which is part of the Commerce Department.
Trump promised in his presidential campaign platform to “repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology.”
But he hasn’t made clear what about the order he dislikes or what he’d do about the AI Safety Institute. Trump’s transition team didn’t respond to emails this week seeking comment.
Addressing concerns about slowing down innovation, Raimondo said she wanted to make it clear that the U.S. AI Safety Institute is not a regulator and also “not in the business of stifling innovation.”
“But here’s the thing. Safety is good for innovation. Safety breeds trust. Trust speeds adoption. Adoption leads to more innovation,” she said.
Tech industry groups — backed by companies including Amazon, Google, Meta and Microsoft — are mostly pleased with the AI safety approach of Biden’s Commerce Department, which has focused on setting voluntary standards. They have pushed for Congress to preserve the new agency and codify its work into law.
Some experts expect the kind of technical work happening this week at an old military officers' club at San Francisco's Presidio, a national park site, to proceed regardless of who's in charge.
“There’s no reason to believe that we’ll be doing a 180 when it comes to the work of the AI Safety Institute,” said Heather West, a senior fellow at the Center for European Policy Analysis. Behind the rhetoric, she said, there’s already been overlap between the two administrations' approaches.
Trump didn’t spend much time talking about AI during his four years as president, though in 2019 he became the first to sign an executive order about AI. It directed federal agencies to prioritize research and development in the field.
Before that, tech experts were pushing the Trump-era White House for a stronger AI strategy to match what other countries were pursuing. Trump in the waning weeks of his administration signed an executive order promoting the use of “trustworthy” AI in the federal government. Those policies carried over into the Biden administration.
All of that was before the 2022 debut of ChatGPT, which brought public fascination and worry about the possibilities of generative AI and helped spark a boom in AI-affiliated businesses. What’s also different this time is that tech mogul and Trump adviser Elon Musk has been picked to lead a government cost-cutting commission. Musk holds strong opinions about AI’s risks and grudges against some AI industry leaders, particularly ChatGPT maker OpenAI, which he has sued.
Raimondo and other officials sought to press home the idea that AI safety is not a partisan issue.
“And by the way, this room is bigger than politics. Politics is on everybody’s mind. I don’t want to talk about politics. I don’t care what political party you’re in, this is not in Republican interest or Democratic interest,” she said. “It’s frankly in no one’s interest anywhere in the world, in any political party, for AI to be dangerous, or for AI to get in the hands of malicious non-state actors that want to cause destruction and sow chaos.”
——
O’Brien reported from Providence, Rhode Island.