
You can find the current article at its original source at https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt


Letter signed by Elon Musk demanding AI research pause sparks controversy
The statement has been revealed to have false signatures and researchers have condemned its use of their work
A letter co-signed by Elon Musk and thousands of others demanding a pause in artificial intelligence research has created a firestorm, after the researchers cited in the letter condemned its use of their work, some signatories were revealed to be fake, and others backed out of their support.
On 22 March more than 1,800 signatories – including Musk, the cognitive scientist Gary Marcus and Apple co-founder Steve Wozniak – called for a six-month pause on the development of systems “more powerful” than GPT-4. Engineers from Amazon, DeepMind, Google, Meta and Microsoft also lent their support.
Developed by OpenAI, a company co-founded by Musk and now backed by Microsoft, GPT-4 has developed the ability to hold human-like conversation, compose songs and summarise lengthy documents. Such AI systems with “human-competitive intelligence” pose profound risks to humanity, the letter claimed.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter said.
The Future of Life Institute, the thinktank that coordinated the effort, cited 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind. But four experts cited in the letter have expressed concern that their research was used to make such claims.
When initially launched, the letter lacked verification protocols for signing and racked up signatures from people who did not actually sign it, including Xi Jinping and Meta’s chief AI scientist Yann LeCun, who clarified on Twitter he did not support it.
Critics have accused the Future of Life Institute (FLI), which is primarily funded by the Musk Foundation, of prioritising imagined apocalyptic scenarios over more immediate concerns about AI – such as racist or sexist biases being programmed into the machines.
Among the research cited was “On the Dangers of Stochastic Parrots”, a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as “more powerful than GPT4”.
“By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”
Her co-authors Timnit Gebru and Emily M Bender criticised the letter on Twitter, with the latter branding some of its claims as “unhinged”. Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with her work being mentioned in the letter. She last year co-authored a research paper arguing the widespread use of AI already posed serious risks.
Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.
She told Reuters: “AI does not need to reach human-level intelligence to exacerbate those risks.”
“There are non-existential risks that are really, really important, but don’t receive the same kind of Hollywood-level attention.”
Asked to comment on the criticism, FLI’s president, Max Tegmark, said both short-term and long-term risks of AI should be taken seriously. “If we cite someone, it just means we claim they’re endorsing that sentence. It doesn’t mean they’re endorsing the letter, or we endorse everything they think,” he told Reuters.
Reuters contributed to this report.