You can find the current article at its original source at https://www.theguardian.com/technology/2025/jun/08/campainers-urge-uk-watchdog-to-limit-use-of-ai-after-report-of-meta-plan-to-automate-checks
UK campaigners raise alarm over report of Meta plan to use automation for risk checks
Ofcom ‘considering the concerns’ raised after claim that up to 90% of risk assessments will be carried out by AI

Internet safety campaigners have urged the UK’s communications watchdog to limit the use of artificial intelligence in crucial risk assessments after a report that Mark Zuckerberg’s Meta was planning to automate checks.

Ofcom said it was “considering the concerns” raised by the campaigners’ letter, after a report last month that up to 90% of all risk assessments at the owner of Facebook, Instagram and WhatsApp would soon be carried out by AI.

Social media platforms are required under the UK’s Online Safety Act to gauge how harm could take place on their services and how they plan to mitigate those potential harms – with a particular focus on protecting child users and preventing illegal content from appearing. The risk assessment process is viewed as a key aspect of the act.

In a letter to Ofcom’s chief executive, Melanie Dawes, organisations including the Molly Rose Foundation, the NSPCC and the Internet Watch Foundation described the prospect of AI-driven risk assessments as a “retrograde and highly alarming step”.

They said: “We urge you to publicly assert that risk assessments will not normally be considered as ‘suitable and sufficient’, the standard required by … the act, where these have been wholly or predominantly produced through automation.”

The letter also urged the watchdog to “challenge any assumption that platforms can choose to water down their risk assessment processes”.

A spokesperson for Ofcom said: “We’ve been clear that services should tell us who completed, reviewed and approved their risk assessment. We are considering the concerns raised in this letter and will respond in due course.”
Meta said the letter deliberately misstated the company’s approach to safety, and that it was committed to high standards and to complying with regulations.

“We are not using AI to make decisions about risk,” said a Meta spokesperson. “Rather, our experts built a tool that helps teams identify when legal and policy requirements apply to specific products. We use technology, overseen by humans, to improve our ability to manage harmful content, and our technological advancements have significantly improved safety outcomes.”

The Molly Rose Foundation organised the letter after the US broadcaster NPR reported last month that updates to Meta’s algorithms and new safety features would mostly be approved by an AI system and no longer scrutinised by staffers.

According to one former Meta executive who spoke to NPR anonymously, the change will allow the company to launch app updates and features on Facebook, Instagram and WhatsApp more quickly, but will create “higher risks” for users, because potential problems are less likely to be prevented before a new product is released to the public.

NPR also reported that Meta was considering automating reviews for sensitive areas including youth risk and monitoring the spread of falsehoods.