The European Parliament has decided not to extend a law that allowed major technology companies to scan their platforms for child sexual exploitation, prompting warnings from child safety experts that more of these crimes will now go undetected.
The legislation, a temporary derogation from the EU's ePrivacy rules enacted in 2021, permitted the use of automated detection tools to identify harmful content, including child sexual abuse material (CSAM), grooming, and sextortion. The law lapsed on April 3, and Parliament did not hold a vote to renew it, largely because of privacy concerns raised by some members.
The lapse leaves tech companies in legal limbo: scanning for such content is no longer lawful, yet under the Digital Services Act they remain obligated to remove any unlawful content they host. In response, Google, Meta, Snap, and Microsoft issued a joint statement on a Google blog affirming that they will voluntarily continue scanning for CSAM.
“We are disheartened by this irresponsible failure to reach an agreement that would sustain ongoing efforts to protect children online,” the statement read.
The European Parliament said its focus is on legislation to prevent and combat online child sexual abuse, and that negotiations on a permanent legal framework are still in progress, though it gave no timeline for agreement or implementation.
Child protection advocates warned that the law's expiration could cause a sharp drop in reports of child sexual abuse. They pointed to a similar legal gap in 2021, when reports of such material from EU accounts to the National Center for Missing and Exploited Children (NCMEC) fell 58% over an 18-week period.
John Shehan, vice president at NCMEC, a U.S.-based organization that compiles child abuse reports for law enforcement agencies worldwide, stated, “When detection tools are disrupted, we lose visibility that directly impacts our ability to find and protect victims of child sexual abuse. The abuse does not cease when detection capabilities are diminished.”
In 2025, NCMEC reported receiving 21.3 million reports containing over 61.8 million images, videos, and other files suspected of involving child abuse from across the globe, with approximately 90% of these reports originating from countries outside the United States.
An EU Parliament spokesperson declined to comment on whether any assessment had been made of the impact of the law's expiration.
Child safety experts noted that the EU's halt to scanning could have consequences well beyond Europe. Many online crimes cross borders, with offenders sharing illegal images or targeting children in other countries. Shehan said "sextortionists" posing as romantic partners could exploit the legal ambiguity.
“The offender can be situated anywhere globally, yet they could easily access minors in Europe now that there is uncertainty regarding the safeguards and protections needed to identify when a child is being groomed,” Shehan added.
Negotiations on proposed legislation to combat child sexual abuse have dragged on for four years, stalled by disagreement over whether companies should be required to take measures that reduce risks on their platforms, according to Hannah Swirsky, head of policy and public affairs at the Internet Watch Foundation, a UK-based child safety non-profit.
Privacy advocates argue that allowing tech companies to scan messages for child abuse infringes on fundamental privacy rights and data security for EU citizens, likening these measures to “chat control” that could result in widespread surveillance and erroneous accusations.
“There are allegations of surveillance or privacy infringements,” Swirsky noted. “Blocking CSAM is not a violation of privacy. Free speech does not encompass the sexual abuse of children.”
The scanning technology uses machine learning to recognize patterns that identify known abusive images and videos, as well as language associated with child exploitation, and does not store any data, explained Emily Slifer, director of policy at Thorn, a non-profit whose detection technology is widely used by companies and law enforcement.
The system functions by having trained analysts evaluate confirmed CSAM sourced from law enforcement reports, public submissions, or investigations of websites known to host such material. Once analysts verify that content is illegal child sexual abuse, they produce a unique digital fingerprint—referred to as a hash value—that identifies that specific image. Lists of these hash values are then provided to platforms, which use automated systems to scan uploads and promptly block any matching content without requiring human intervention.
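In practical terms, that matching step boils down to comparing an upload's fingerprint against a distributed blocklist. The sketch below is purely illustrative, not any vendor's actual implementation: it uses an exact SHA-256 hash and a hypothetical KNOWN_CSAM_HASHES set for simplicity, whereas deployed systems rely on perceptual hashing (such as Microsoft's PhotoDNA) so that re-encoded or slightly altered copies of a verified image still match.

```python
import hashlib

# A minimal, illustrative sketch of hash-list matching.
# Real deployments use perceptual hashes (e.g. Microsoft's PhotoDNA)
# so that re-encoded or cropped copies still match; exact SHA-256
# is used here only to keep the example simple.

# Hypothetical hash list distributed to a platform. Each entry is the
# fingerprint of an image that trained analysts have verified as CSAM.
KNOWN_CSAM_HASHES: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Compute the digital fingerprint (hash value) of an upload."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Automatically block the upload if its fingerprint matches a
    verified entry on the hash list -- no human review required."""
    return fingerprint(upload) in KNOWN_CSAM_HASHES
```

Because matching happens against fingerprints rather than the images themselves, platforms can block verified material without retaining or inspecting the underlying content, which underpins Slifer's point that no data is stored.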
“The technology does not identify innocuous images like babies in bathtubs. The characteristics of abusive imagery are distinctly different from consensual content, allowing technology to differentiate between them,” Slifer stated.
While scanning for child abuse material is now prohibited, the EU still permits companies to voluntarily monitor messages for terrorist content under separate legislation enacted in 2021, she added.
Swirsky said "the EU is effectively creating opportunities for predators," and urged that if the EU is genuinely committed to safeguarding children online, it must finalize a permanent legal framework that protects children and enables detection.