Over 12 Million Children Worldwide Targeted by Deepfake Abuse

The misuse of artificial intelligence has led to a sharp rise in deepfake-related crimes, with children emerging as the most affected group, according to a recent report, which estimates that more than 12 million children worldwide fall victim to deepfake abuse every year.
The report, released by UNICEF after examining data from 11 countries, highlights the alarming spread of AI-generated fake images, videos, and audio involving minors. The research, conducted between 2023 and early 2025, found that cases were most prevalent in Asia, Africa, and Latin America.
Children in the 12–17 age group were identified as the most vulnerable. Surveys conducted across several countries showed that nearly one in every 25 children has been affected in some way. The data also indicates that girls face a higher risk, accounting for around 64 percent of victims overall compared with 36 percent for boys, and girls were the targets in nearly 90 percent of reported cases.
Understanding Deepfakes
Deepfakes are created using artificial intelligence to manipulate a person’s face, voice, expressions, or movements, making fake content appear real. Experts warn that distinguishing between genuine and fabricated material has become increasingly difficult.
Common Forms of Deepfake Exploitation
Face swapping: Placing a child’s face onto another person’s body.
AI nudification: Digitally removing clothing from images to create explicit content.
Voice cloning: Replicating voices to spread false messages or threats.
Legal Challenges and Response
Reports suggest that the authenticity of more than half of online content is now difficult to verify. Many countries are still struggling to establish effective laws to address deepfake crimes.
Situation in India
India's existing legal framework already covers several of these offenses:
Creating, sharing, or storing child sexual abuse material is a criminal offense.
Platforms are required to remove deepfake content within 24 hours of detection.
Publishing obscene AI-generated content on social media is punishable by law.