
You can find the current article at its original source at https://www.theguardian.com/technology/2025/jun/22/trump-ban-us-states-ai-regulation-microsoft-eric-horvitz


Regulation ‘done properly’ can help with AI progress, says Microsoft chief scientist
Eric Horvitz’s comments come as Donald Trump plans to ban US states from AI regulation for 10 years
Microsoft’s chief scientist has said that regulation, if “done properly”, could actually accelerate advances in artificial intelligence rather than hinder them.
Dr Eric Horvitz, a former technology adviser to Joe Biden, said it was up to scientists to communicate to governments that guidance and controls could potentially speed up progress.
His comments follow the Trump administration’s proposed 10-year ban on US states creating “any law or regulation limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems”.
It is driven in part by White House fears that China could otherwise win the race to human-level AI, but also by pressure from tech investors such as Andreessen Horowitz, an early investor in Facebook, which argues that consumer uses should be regulated rather than research efforts. Its co-founder, the Trump donor Marc Andreessen, said earlier this month that the US was in a two-horse race for AI supremacy with China. The US vice-president, JD Vance, recently said: “If we take a pause, does [China] not take a pause? Then we find ourselves … enslaved to [China]-mediated AI.”
Speaking at a meeting of the Association for the Advancement of Artificial Intelligence last week, Horvitz said: “It is up to us as scientists to communicate to government agencies, especially those right now who might be making statements about ‘no regulation, this is going to hold us back’. Guidance, regulation … reliability, controls, are part of advancing the field, making the field go faster, in many ways.
“We need to be very cautious about jargon and terms like regulation or bumper stickers that say no regulation because it’s going to slow us down. It can speed us up, done properly. We should be cautious and care and communicate to governments about that.”
Horvitz said he was already concerned about “AI being leveraged for misinformation and inappropriate persuasion” and about its use “for malevolent activities, for example, in the biology biological hazard space”.
Horvitz’s comments came despite reports that Microsoft is part of a Silicon Valley lobbying push, alongside Google, Meta and Amazon, to support the ban on individual US states regulating AI for the next decade, which is included in Trump’s budget bill currently passing through Congress.
Microsoft is part of a lobbying drive to urge the US Senate to enact a decade-long moratorium on individual states introducing their own efforts to legislate, the Financial Times reported last week. The ban has been written into Trump’s “big beautiful bill” that he wants passed by Independence Day on 4 July.
Speaking at the same seminar as Horvitz, Stuart Russell, the professor of computer science at the University of California, Berkeley, said: “Why would we deliberately allow the release of a technology which even its creators say has a 10% to 30% chance of causing human extinction? We would never accept anything close to that level of risk for any other technology.”
The apparent contradiction between Microsoft’s chief scientist and reports of the company’s lobbying effort comes amid rising fears that unregulated AI development could pose catastrophic risks to humanity and is being driven by companies prioritising short-term profit.
Microsoft has invested $14bn (£10bn) in OpenAI, the developer of ChatGPT, whose chief executive, Sam Altman, this week predicted: “In five or 10 years we will have great human robots and they will just walk down the street doing stuff … I think that would be one of the moments that … will feel the strangest.”
Predictions of when human-level artificial general intelligence (AGI) will be reached vary from a couple of years to decades. The Meta chief scientist, Yann LeCun, has said AGI could be decades away, while last week his boss, Mark Zuckerberg, announced a $15bn investment in a bid to achieve “superintelligence”.
Fred Humphries, corporate vice president of US government affairs for Microsoft in Washington, D.C., said: “We cannot afford to wake up to a future where 50 different states have enacted 50 conflicting approaches to AI safety and security. That’s why we support federal preemption on frontier models and their security and safety – while still carving out space for states to act in areas where they have traditionally exercised authority.”
The headline and text of this article were amended on 24 June 2025 because an earlier version incorrectly summarised remarks of Eric Horvitz as saying that Trump’s plan to ban US states from AI regulation “will hold us back”. To clarify: he said that AI regulation, done properly, can “speed us up”. For context, more of Eric Horvitz’s remarks have been included and a comment from Microsoft has been added.