Monday, July 4, 2022

Big Tech will need to do more to tackle child sexual abuse under a proposed EU law, but a key question remains: How?

The European Commission recently proposed rules to protect children by requiring tech companies to scan the content in their systems for child sexual abuse material. It is an extraordinarily broad and ambitious effort that will have wide-ranging implications beyond the borders of the European Union, including the US.

Unfortunately, the proposed rules are, for the most part, technically infeasible. To the extent that they could work, they would require breaking end-to-end encryption, which would make it possible for technology companies – and potentially governments and hackers – to see private communications.

The rules, proposed on May 11, 2022, would impose a number of obligations on tech companies that host content and provide communication services – including social media platforms, texting services and direct messaging apps – requiring them to detect certain categories of images and text.

Under the proposal, these companies would have to detect previously identified child sexual abuse material, new child sexual abuse material and the solicitation of children for sexual purposes. Companies would be required to report detected material to the EU Centre, a centralized coordinating body that the proposed rules would establish.

Each of these categories presents its own challenges, which combine to make the proposed rules impossible to implement as a package. The trade-off between protecting children and protecting user privacy underscores how combating child sexual abuse online is a “wicked problem”. This puts technology companies in a difficult position: they are required to comply with regulations that serve a laudable goal, but lack the means to do so.

Digital fingerprints

Researchers have known how to detect previously identified child sexual abuse material for more than a decade. The method, first developed by Microsoft, assigns a “hash value” – a type of digital fingerprint – to an image, which can then be compared with a database of previously identified and hashed child sexual abuse material. In the US, the National Center for Missing and Exploited Children manages several databases of hash values, and some technology companies maintain their own hash sets.

The hash values of images uploaded or shared using a company’s services are compared with these databases to flag previously identified child sexual abuse material. This method has proven to be extremely accurate, reliable and fast, which is vital for making any technical solution scalable.
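As a rough illustration of how hash matching works, here is a minimal sketch in Python. It uses the cryptographic SHA-256 hash as a stand-in; real systems such as Microsoft’s PhotoDNA use perceptual hashes, which also match re-encoded or slightly altered copies of an image, whereas SHA-256 matches only byte-for-byte duplicates. The database entry below is hypothetical.

```python
import hashlib

# Hypothetical database of hash values for previously identified material.
# (This entry is the SHA-256 of the placeholder bytes b"test".)
known_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_image(image_bytes: bytes) -> str:
    """Compute a digital fingerprint for an uploaded image."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_material(image_bytes: bytes) -> bool:
    """Compare an upload's fingerprint against the database of known hashes."""
    return hash_image(image_bytes) in known_hashes

print(is_known_material(b"test"))   # True  - fingerprint is in the database
print(is_known_material(b"other"))  # False - no match
```

Set membership makes each lookup constant-time regardless of database size, which is what makes this approach fast enough to run at platform scale.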

The problem is that many privacy advocates consider hash matching incompatible with end-to-end encryption, which – strictly interpreted – means that only the sender and the intended recipient can view the content. Because the proposed EU rules would require tech companies to report any detected child sexual abuse material to the EU Centre, compliance would violate end-to-end encryption, forcing a trade-off between effective detection and user confidentiality.
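To see why scanning and end-to-end encryption conflict, consider this toy sketch, in which a one-time-pad XOR stands in for a real protocol such as Signal’s (it is an illustration only, not secure code). The relaying server only ever sees ciphertext, so any hash it computes cannot match a database of known material:

```python
import hashlib
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy one-time pad: XOR each byte with a key byte."""
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

message = b"private photo bytes"
key = secrets.token_bytes(len(message))  # known only to sender and recipient

ciphertext = encrypt(key, message)

# The server relaying the message sees only the ciphertext, so the hash it
# can compute does not match the hash of the underlying content ...
assert hashlib.sha256(ciphertext).digest() != hashlib.sha256(message).digest()

# ... while the recipient, holding the key, recovers the original content.
assert decrypt(key, ciphertext) == message
```

Any design that lets the provider hash or inspect the plaintext necessarily gives someone other than the two endpoints access to it, which is exactly what end-to-end encryption is meant to rule out.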


Identifying new harmful material

In the case of new content – that is, images and videos not included in a hash database – there is no such tried-and-true technical solution. Top engineers have been working on the issue, building and training AI tools that can handle vast amounts of data. Both Google and the child protection NGO Thorn have had some success using machine-learning classifiers to help companies identify potential new child sexual abuse material.

However, without independently verified data on the accuracy of these tools, it is not possible to assess their usefulness. Even if their accuracy and speed were comparable to hash-matching technology, mandatory reporting would again break end-to-end encryption.

The new content also includes livestreams, but the proposed rules ignore the unique challenges of this technology. Livestreaming technology became ubiquitous during the pandemic, and the production of child sexual abuse material from livestreamed content has increased dramatically.

More and more children are being enticed or coerced into livestreaming sexually explicit acts, which a viewer can record or screen-capture. Child protection organizations have noted that the production of “perceived first-person” child sexual abuse material – that is, explicit selfies – has grown exponentially over the past few years. In addition, traffickers can livestream child sexual abuse for offenders who pay to watch.

The circumstances in which child sexual abuse material is recorded and livestreamed vary widely, but the technology involved is similar. And there is currently no technical solution that can detect the production of child sexual abuse material as it occurs. Tech safety company SafeToNet is developing a real-time detection tool, but it is not ready to launch.

Detecting solicitation

The third category, detecting solicitation language, is also fraught. The tech industry has made dedicated efforts to pinpoint the indicators of grooming and solicitation, but with mixed results. Microsoft led Project Artemis, which resulted in the development of an anti-grooming tool designed to detect the solicitation of a child for sexual purposes.

As the proposal itself notes, however, this tool has an accuracy of 88%. In 2020, the popular messaging app WhatsApp delivered roughly 100 billion messages daily. If the tool flagged even 0.01% of those messages as “positive” for solicitation language, human reviewers would be tasked with reading 10 million messages every day, 12% of them false positives, making the tool simply impractical.
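The arithmetic behind that estimate can be made explicit – a back-of-the-envelope sketch using the figures cited above:

```python
# Back-of-the-envelope numbers from the paragraph above (integer arithmetic
# keeps the figures exact).
daily_messages = 100_000_000_000       # ~100 billion WhatsApp messages/day (2020)

flagged = daily_messages // 10_000     # 0.01% of messages flagged as "positive"
false_positives = flagged * 12 // 100  # 88% accuracy => 12% of flags are wrong

print(f"{flagged:,} messages queued for human review each day")   # 10,000,000
print(f"{false_positives:,} of those would be false positives")   # 1,200,000
```

Even a vanishingly small flag rate, applied at messaging-platform scale, produces a review queue no human workforce could clear.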

Like all of the detection methods discussed above, this one would break end-to-end encryption. But while the others may be limited to reviewing the hash value of an image, this tool requires access to all exchanged text.

No way to comply

It is possible that the European Commission is taking such an ambitious approach in the hope of spurring technical innovation that leads to more accurate and reliable detection methods. However, without existing tools that can fulfill these mandates, the regulations are unworkable.

When there is a mandate to act but no way to comply, I believe the disconnect will leave the industry without the clear guidance and direction these regulations are intended to provide.
