
You can find the current article at its original source at https://www.bbc.com/news/articles/cr78pd7p42ro


Ban AI apps creating naked images of children, says children's commissioner
The children's commissioner said the apps were being allowed to go unchecked
The children's commissioner for England is calling on the government to ban apps which use artificial intelligence (AI) to create sexually explicit images of children.
Dame Rachel de Souza said a total ban was needed on apps which allow "nudification" - where photos of real people are edited by AI to make them appear naked - or can be used to create sexually explicit deepfake images of children.
She said the government was allowing such apps to "go unchecked with extreme real-world consequences".
A government spokesperson said child sexual abuse material was illegal and that there were plans for further offences for creating, possessing or distributing AI tools designed to create such content.
Deepfakes are videos, pictures or audio clips made with AI to look or sound real.
In a report published on Monday, Dame Rachel said the technology was disproportionately targeting girls and young women, with many bespoke apps appearing to work only on female bodies.
Girls are actively avoiding posting images or engaging online to reduce the risk of being targeted, according to the report, "in the same way that girls follow other rules to keep themselves safe in the offline world - like not walking home alone at night".
Children feared "a stranger, a classmate, or even a friend" could target them using technologies which could be found on popular search and social media platforms.
Dame Rachel said: "The evolution of these tools is happening at such scale and speed that it can be overwhelming to try and get a grip on the danger they present.
"We cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children's lives."
Dame Rachel also called for the government to:
impose legal obligations on developers of generative AI tools to identify and address the risks their products pose to children, and take action to mitigate those risks
set up a systemic process to remove sexually explicit deepfake images of children from the internet
recognise deepfake sexual abuse as a form of violence against women and girls
Paul Whiteman, general secretary of school leaders' union NAHT, said members shared the commissioner's concerns.
He said: "This is an area that urgently needs to be reviewed as the technology risks outpacing the law and education around it."
It is illegal in England and Wales under the Online Safety Act to share or threaten to share explicit deepfake images.
The government announced in February laws to tackle the threat of child sexual abuse images being generated by AI, which include making it illegal to possess, create, or distribute AI tools designed to create such material.
It said at the time that the Internet Watch Foundation - a UK-based charity partly funded by tech firms - had confirmed 245 reports of AI-generated child sexual abuse in 2024 compared with 51 in 2023, a 380% increase.
Media regulator Ofcom published the final version of its Children's Code on Friday, which puts legal requirements on platforms hosting pornography, or content encouraging self-harm, suicide or eating disorders, to take more action to prevent access by children.
Websites must introduce beefed-up age checks or face big fines, the regulator said.
Dame Rachel has criticised the code, saying it prioritises the "business interests of technology companies over children's safety".
A government spokesperson said creating, possessing or distributing child sexual abuse material, including AI-generated images, is "abhorrent and illegal".
"Under the Online Safety Act platforms of all sizes now have to remove this kind of content, or they could face significant fines," they added.
"The UK is the first country in the world to introduce further AI child sexual abuse offences - making it illegal to possess, create or distribute AI tools designed to generate heinous child sex abuse material."