The United Nations Children’s Fund (Unicef) on Thursday said it was increasingly “alarmed” by reports of AI-generated sexualised images involving children, calling on governments and the AI industry to prevent the creation and dissemination of such content.
In a statement, the UN agency said, “The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up.”
“Deepfakes — images, videos, or audio generated or manipulated with Artificial Intelligence designed to look real — are increasingly being used to produce sexualised content involving children through ‘nudification,’ where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualised images,” Unicef said in its statement.
It added that the “unprecedented” situation presented new challenges for “prevention, education, legal frameworks, and response and support services for children”. But current prevention efforts were “insufficient when sexual content can be artificially generated”, the statement said.
Unicef’s statement said that the growing prevalence of AI-powered image or video generation tools that produce sexualised material marked a “significant escalation” in risks to children through digital technologies.
The UN body referred to recent large-scale research that the agency conducted with the child rights group ECPAT and Interpol under the ‘Disrupting Harm’ project.
The research showed that across 11 countries, at least 1.2 million children reported having had their images manipulated into sexually explicit deepfakes through AI tools in the past year.
In some of the countries, this amounted to 1 in 25 children, or the equivalent of one child in a typical classroom.
“Children themselves are deeply aware of this risk. In some of the countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos,” the statement read.
It added that the “levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures”.
“We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM),” the statement said.
The UN agency held that “deepfake abuse is abuse, and there is nothing fake about the harm it causes”.
The statement highlighted that “when a child’s image or identity is used, that child is directly victimised”.
“Even without an identifiable victim, AI-generated CSAM normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children who need help.”
The UN agency said it welcomed the “efforts of those AI developers who are implementing safety-by-design approaches and robust guardrails to prevent misuse of their systems”.
However, Unicef warned that the landscape still “remains uneven, and many AI models are not being developed with adequate safeguards”.
It stressed that the “risks can be compounded when generative AI tools are embedded directly into social media platforms, where manipulated images spread rapidly”.
“Unicef has urgently called for actions to confront the escalating threat of AI-generated child sexual abuse material,” the UN body’s statement said.
It added that all governments should expand definitions of CSAM to include AI-generated content, and criminalise its creation, procurement, possession and distribution.
Unicef further pressed AI developers to implement safety-by-design approaches and robust guardrails to prevent misuse of AI models.
It further urged digital companies to “prevent the circulation of AI-generated child sexual abuse material”, not merely remove it after the abuse has occurred, and to “strengthen content moderation with investment in detection technologies” so that such material can be removed immediately, not days after a report by a victim or their representative.
The issue of AI-generated sexualised images most recently gained attention when Elon Musk’s platform X came under fire after users were found exploiting the platform’s artificial intelligence tool, Grok, to edit and manipulate photographs, including digitally undressing people, among them children, or portraying them in minimal clothing.
The European Commission has also said that it would investigate whether social media platform X protected consumers by properly assessing and mitigating risks related to Grok’s functionalities.
