The Hague – A global crackdown has led to at least 25 arrests over artificial intelligence-generated child sexual abuse content distributed online, Europol said Friday.
"Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material, making it exceptionally challenging for investigators, especially due to the lack of national legislation addressing these crimes," the Hague-based European police agency said in a statement.
The majority of the arrests took place Wednesday during a global operation led by Danish police, with law enforcement agencies from the EU, Australia, the UK, Canada and New Zealand also involved. According to Europol, U.S. law enforcement agencies were not involved in the operation.
The arrests followed the detention of the main suspect in November last year, a Danish national who ran an online platform distributing the AI-generated material he produced.
"After making a symbolic online payment, users from around the world were able to access the platform and watch children being abused," Europol said.
Online child sexual exploitation remains one of the most threatening manifestations of cybercrime in the European Union, the agency warned.
It said the area "continues to be one of the top priorities for law enforcement agencies, which are dealing with an ever-growing volume of illegal content," adding that more arrests are expected as the investigation continues.
While Europol said Operation Cumberland targeted the platform and people sharing content created entirely with AI, there has also been concern over the spread online of "deepfake" images manipulated with AI.
A December report by CBS News' Jim Axelrod focused on one girl who was targeted with such abuse by her classmates, and noted that more than 21,000 deepfake pornographic photos or videos were found online in 2023, an increase of more than 460% over the previous year. The manipulated content has proliferated on the internet as lawmakers in the U.S. and elsewhere race to catch up with new laws to address the issue.
A few weeks ago, the Senate passed the "Take It Down Act," which, if signed into law, would criminalize the publication of non-consensual intimate images (NCII), including AI-generated NCII, or "deepfake revenge porn," and would require social media and similar websites to implement procedures to remove such content, according to a description on the U.S. Senate website.
As it stands, some social media platforms appear either unable or unwilling to stem the spread of sexualized deepfake content, including fake images of celebrities. In mid-February, Facebook and Instagram owner Meta said it had removed more than a dozen fraudulent sexualized images of famous female actors and athletes after a CBS News investigation found a high prevalence of AI-manipulated deepfake images on Facebook.
"This is an industry-wide challenge, and we're continually working to improve our detection and enforcement technology," Meta spokesperson Erin Logan told CBS News in a statement sent by email at the time.