Drawing the line: When child safety laws lose sight of real children

Child safety, more than any other policy issue, has come to define the outer limits of internet regulation. No other policy frame carries such moral weight or such potential for misuse. More often than not, it sets the stage for debates over surveillance, strong encryption, AI regulation, age verification, and censorship. In these debates, the increasingly exceptional measures demanded by governments are curtailed only, and inconsistently, by international human-rights frameworks. The stakes for children, whose safety depends on precise and proportionate interventions, are high; so too are the stakes for everyone else when child-safety concerns are used to justify a broad extension of state power. It is therefore essential that the reach of child-safety regulation be clearly and narrowly defined. Otherwise, tools built for the gravest crimes (surveillance networks, content-scanning mandates, criminal penalties) become scaffolding for a broader regime of control far removed from the protection of children.

Elastic definitions in online child-safety policy

Yet in these debates, both “child” and “safety” have become elastic concepts. Most attention focuses on the renegotiation of “safety”, with the bar expanding to encompass broader notions of harm, including exposure to challenging ideas or mature themes rather than tangible risks of abuse or exploitation. Examples include social-media age restrictions, limits on AI chatbots, and the filtering of LGBTQ+ or sexual-health content. This debate raises significant human-rights issues, but it is not the focus of this article.

Far less examined is the expansion of the concept of the “child” itself: the extension of child-protection frameworks to fictional and artistic representations of children, and a growing willingness to criminalise imaginary depictions as if they were evidence of real crimes.

A central harm of this conflation is the diversion of child-safety resources away from real abuse cases and toward the policing of imagination. In the United Kingdom, prosecutions for images of real child sexual abuse have fallen by nearly 60% since their peak in 2016–17. Over the same period, prosecutions involving fictional content have risen by about a third and now make up roughly 40% of all image offences, according to statistics cited in the Drawing the Line Watchlist 2025, released on 10 December 2025 by the Center for Online Safety and Liberty. The report documents examples from ten countries, revealing the global consequences of this trend.

Just as children are not the beneficiaries of this conflation, its targets are not sex offenders: more often they are artists, authors, LGBTQ+ communities, and even children themselves. The Watchlist describes a 17-year-old Costa Rican girl arrested over artwork she posted to her blog, with foreign entities backing both the law and its enforcement. In Australia, which does not even record the distinction between real and virtual sex crimes, the author of a fetish novel featuring adult characters faces charges identical to those applied to people who create and distribute recordings of the rape of a real child.

What the evidence does – and doesn’t – show

Besides being harmful, treating fictional sexual content as equivalent to lived abuse cannot be justified on any legitimate child-safety grounds.
Proponents often claim that such material normalises or encourages abuse, yet there is no empirical support for this claim, and emerging research cited in the Watchlist finds no association between exposure to fictional sexual material and the risk of offending. This aligns with broader criminological research, in which risk assessments draw on interpersonal, developmental, and situational factors rather than on patterns of media consumption.

Sensitive expressive content, especially pornographic or taboo material, may raise broader social concerns. But where such concerns exist, the appropriate responses are more nuanced and fall to trust-and-safety practitioners, educators, and public-health professionals, not to police.

Free-expression law points the same way: as the U.S. Supreme Court held in Ashcroft v. Free Speech Coalition, “the mere tendency of speech to encourage unlawful acts” is not a sufficient reason for banning it. European and international human-rights standards likewise require necessity and proportionality, limiting criminal penalties to the prevention or punishment of conduct that causes real and identifiable harm.

Despite this, the proper limits of criminalising expressive content that depicts children receive too little scrutiny, and it is not hard to see why. Politicians and interest groups often brand those who question overreach as apologists for abuse. Even less sympathy exists for the creators or users of media perceived as sexualising children, as illustrated by British MP Jess Phillips’s claim that such people are “just as disgusting” as hands-on offenders. Predictably, almost no mainstream voice supports drawing a principled line between fictional works and real abuse material.

A categorical reset for child-safety law

Yet technology policy should not be driven by disgust, an emotion that tends to unmoor child-safety laws from legality and from their ethical foundation: protecting real children from lived abuse. That experience should not be minimised by conflating it with personal expression. The Watchlist seeks to distinguish expression from abuse and offers recommendations for enforcing a clear separation in law, language, and policy.

First, it recommends reserving the term child sexual abuse material (CSAM) exclusively for content documenting an actual crime involving an identifiable victim. Hybrid terms such as CSAEM (child sexual abuse and exploitation material) or AIG-CSAM (AI-generated CSAM) should be abandoned. This distinction should apply throughout the criminal-justice system: in the drafting of criminal and censorship laws, the work of enforcement agencies, the penalties imposed for violations, and the statistics maintained for transparency, each respecting the line between expression and abuse.

The Watchlist shows how the creep of child-safety regulation (and the infrastructure that accompanies it, from criminal-justice systems to content-scanning mandates) lands very differently when the material is fictional. Blurring the line legitimises the overstepping of human-rights boundaries, over-criminalises marginalised communities, and diverts resources from genuine child protection into expanded law-enforcement powers that exceed, and sometimes even undermine, that very aim.

Conclusion

Child safety is an imperative. But conflating personal expression with abuse does not shield children; it shields policymakers from accountability. It substitutes moral panic for evidence and politics for public health. The human-rights framework exists to guard against such drift:…
