Teachers given new guidance in dealing with AI-generated child sexual abuse material



Guidelines on how to deal with AI-generated child sexual abuse material (CSAM) have been issued to 38,000 teachers and staff across the UK. 

The guidelines are an attempt to help people working with children tackle the "highly disturbing" rise in AI-generated CSAM.

They have been issued by the National Crime Agency (NCA) and the Internet Watch Foundation (IWF).

The AI-generated content is illegal in the UK and is treated the same as any other sexual abuse imagery of children, even if the imagery isn't photorealistic.

"The rise in AI-generated child sexual abuse imagery is highly disturbing and it is vital that every arm of society keeps up with the latest online threats," said safeguarding minister Jess Phillips.

"AI-generated child sexual abuse is illegal and we know that sick predators' activities online often lead to them carrying out the most horrific abuse in person.

"We will not allow technology to be weaponised against children and we will not hesitate to go further to protect our children online," she said.

The guidelines suggest that young people who use AI to create nude images from each other's pictures - known as nudifying - or to create AI-generated CSAM may not be aware that what they're doing is illegal.

Nudifying is when a non-explicit picture of someone is edited to make them appear nude and is increasingly common in "sextortion" cases - when someone is blackmailed with explicit pictures.

"Where an under-18 is creating AI-CSAM, they may think it is 'just a joke' or 'banter' or do so with the intention of blackmailing or harming another child," suggests the guidance.

"They may or may not recognise the illegality or the serious, lasting impact their actions can have on the victim."

Last year, the NCA surveyed teachers and found that over a quarter weren't aware AI-generated CSAM was illegal, and most weren't sure their students were aware either.

More than half of the respondents said guidance was their most urgently needed resource.


Image: An IWF analyst at work. Pic: IWF

The IWF has seen an increasing amount of AI-generated CSAM as it scours the internet, processing 380% more reports of the abuse in 2024 than in 2023.

"The creation and distribution of AI-manipulated and fake sexual imagery of a child can have a devastating impact on the victim," said Derek Ray-Hill, interim chief executive at the IWF.


"It can be used to blackmail and extort young people. There can be no doubt that real harm is inflicted and the capacity to create this type of imagery quickly and easily, even via an app on a phone, is a real cause for concern."

Multiple paedophiles have been sent to jail for using artificial intelligence to create child sexual abuse images in recent years.

Last year, Hugh Nelson was sentenced to 18 years in jail for creating AI-generated CSAM that police officers were able to link back to real children.

"Tackling child sexual abuse is a priority for the NCA and our policing partners, and we will continue to investigate and prosecute individuals who produce, possess, share or search for CSAM, including AI-generated CSAM," said Alex Murray, the NCA's director of threat leadership and policing lead for artificial intelligence.

In February, the government announced that AI tools designed to generate child sex abuse material would be made illegal under "world-leading" legislation.

In the meantime, however, campaigners called for guidance to be issued to teachers.

Laura Bates, the author of a book on the spread of online misogyny, told MPs earlier this month that deepfake pornography "would be the next big sexual violence epidemic facing schools, and people don't even know it is going on."

"It shouldn't be the case that a 12-year-old boy can easily and freely access tools to create these forms of content in the first place," she said.
