Apple’s plan to limit the spread of child sexual abuse material has drawn praise from some child protection advocacy groups, but it has also provoked an uproar among privacy and security experts over the intrusion into users’ privacy.
These concerns obscure an even more vexing problem that has received little attention: Apple’s new feature uses design elements shown by research to backfire.
One of these new features adds a parental control option to Messages that blocks the viewing of sexually explicit images. The hope is that parental monitoring of a child’s behavior will reduce the viewing or sending of explicit sexual images, but the effectiveness of such monitoring is highly debated.
We are two psychologists and a computer scientist, and we have done extensive research on why people share risky photos online. Our recent research shows that warnings about privacy on social media neither reduce photo sharing nor increase concern about privacy. In fact, such warnings, including those in Apple’s new child safety features, may increase rather than reduce risky photo sharing.
Apple’s child safety features
Apple announced on August 5, 2021 that it plans to introduce new child safety features in three areas. The first, and relatively uncontroversial, feature is that Apple’s search app and virtual assistant Siri will provide parents and children with resources and help for confronting potentially harmful content.
The second feature will scan images stored in iCloud Photos on people’s devices to look for matches in a database of child sexual abuse images provided by the National Center for Missing and Exploited Children and other child protection organizations. After a threshold number of matches is reached, Apple manually reviews each machine-flagged match to confirm the contents of the photo, then disables the user’s account and sends a report to the center. This feature has generated considerable controversy.
The last feature adds a parental control option to Messages, Apple’s texting app, that blurs sexually explicit images when kids try to view them. It also warns kids about the content, offers helpful resources, and reassures them that it’s okay if they don’t want to see the photo. If the child is 12 years old or younger, parents will get a message if the child views or shares a risky photo.
There has been little public discussion of this feature, perhaps because the conventional wisdom holds that parental control is both necessary and effective. However, this is not always the case, and such warnings can backfire.
When warnings backfire
Most people avoid sharing risky images, but minimizing the sharing that does occur is important. An analysis of 39 studies found that 12% of youth had forwarded a sext, or sexually explicit image or video, without consent, and that 8.4% had had a sext of themselves forwarded without consent. Warnings may seem like an appropriate way to curb this behavior, but contrary to expectation, we have found that warnings about privacy violations often backfire.
In a series of experiments, we attempted to reduce the likelihood of people sharing embarrassing or abusive photos on social media by reminding participants to consider the privacy and safety of others. Across several studies, we tried different reminders about the consequences of sharing photos, similar to the warnings in Apple’s new child safety tool.
Remarkably, our research often revealed the opposite effect. Participants who received warnings, even ones as simple as a prompt to consider the privacy of others, were more likely to share the photos than participants who did not receive the warnings. When we started this research, we were sure these privacy nudges would reduce risky photo sharing, but they didn’t.
The results have been consistent: our first two studies showed that the warnings backfired. We have since replicated this effect several times and found that a number of factors, such as a person’s humor style or their experience sharing photos on social media, can affect both their willingness to share photos and how they respond to warnings.
While it is not clear why the warnings backfire, one possibility is that people’s concerns about privacy are dampened when they underestimate the risks of sharing. Another possibility is reactance: the tendency for seemingly unnecessary rules or prompts to produce the opposite of their intended effect. Just as forbidden fruit tastes sweeter, frequent reminders about privacy concerns may make risky photo sharing more appealing.
Will Apple’s Alerts Work?
It is possible that some children will be more inclined to send or receive explicit photos after receiving a warning from Apple. There can be many reasons for this behavior, ranging from curiosity (teens often learn about sex from peers) to challenging parental authority to reputational concerns, such as appearing cool by sharing risky photos. During a phase of life when risk-taking peaks, it is not hard to see how teens might treat a warning from Apple as a badge of honor rather than a real cause for concern.
Apple announced on September 3, 2021 that it was delaying the rollout of these new child safety tools because of concerns expressed by the privacy and security community. The company plans to take additional time in the coming months to gather input and make improvements before releasing the features.
That plan falls short, however, without knowing whether Apple’s new features will have the desired effect on kids’ behavior. We encourage Apple to engage with researchers to ensure that its new tools reduce, rather than encourage, problematic photo sharing.