Facebook’s Proactive Approach to Addressing Nonconsensual Distribution of Intimate Images

It’s well-known that technology has made sharing sexually intimate content easier. While many people share intimate images without any problems, there’s a growing issue with nonconsensual distribution of intimate images (NCII[1]), often referred to as “revenge porn.” Perpetrators often share, or threaten to share, intimate images in an effort to control, intimidate, coerce, shame, or humiliate others. A survivor who has been threatened or already victimized by someone sharing their intimate images not only deserves the opportunity to hold the perpetrator accountable, but should also have better options for removing content or keeping it from being posted in the first place.

Recently, Facebook announced a new pilot project aimed at stopping NCII before it can be uploaded to their platforms. The process gives people who wish to participate the option to submit intimate images or videos they’re concerned someone will share without their permission to a small, select group of specially trained professionals within Facebook. Once submitted, the images are given what’s called a “hash value.” “Hashing” means that an image is turned into a digital code that uniquely identifies it, similar to a fingerprint. Once an image has been hashed, Facebook deletes it, and all that’s left is the code. Facebook then uses that code to detect when someone attempts to upload a matching image and to prevent it from being posted on Facebook, Messenger, and Instagram.

Facebook’s new pilot project may not be something everyone feels comfortable using, but for some it may bring much peace of mind. For those who believe it may help in their situation, we’ve outlined detailed information about how the process works:

  1. Victims work with a trusted partner. Individuals who believe they’re at risk of NCII and wish to have their images hashed should first contact one of Facebook’s trusted partners: the Cyber Civil Rights Initiative, YWCA Canada, the UK Revenge Porn Hotline, or the eSafety Commissioner in Australia. These partners will help them through the process and identify other assistance that may be useful to them.
  2. Partner organizations help ensure appropriate use. The partner organization will carefully discuss the individual’s situation with them before helping them start the hashing process. This helps ensure that individuals are seeking to protect their own image and not trying to misuse the feature against another person. It’s important to note that the feature is meant for adults and not for images of people under 18. If the images are of someone under 18, they will be reported to the National Center for Missing and Exploited Children. Partner organizations will help to explain the reporting process so that individuals can make appropriate decisions for their own case.
  3. The image will be reviewed by trained staff at Facebook. If the images meet Facebook’s definitions of NCII, a one-time link is sent to the individual’s email. The link will take the individual to a portal where they can directly upload the images. All submissions are then added to a secure review queue where they will be reviewed by a small team specifically trained in reviewing content related to NCII abuse.
  4. NCII will be hashed and deleted. All images that are reviewed and found to meet Facebook’s definition of NCII will be translated into a set of numerical values to create a code called a “hash.” The actual image will then be deleted. If Facebook determines an image does not meet its definition of NCII, the individual will receive an email letting them know (so it’s critical to use an email account that cannot be accessed by anyone else). Even then, the individual may still have other options; for example, they may be able to report an image for a violation of Facebook’s Community Standards.
  5. Hashed images will be blocked. If someone tries to upload a copy of an image that was hashed, Facebook will block the upload and display a pop-up message notifying the person that the attempted upload violates Facebook’s policies.
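For readers curious how the hash-and-block steps above work in software terms, here is a minimal sketch in Python. It is an illustrative simplification, not Facebook’s actual code: it uses an exact cryptographic hash (SHA-256), which only matches identical copies, whereas a production system would use a perceptual hash that can also match slightly altered copies; all names here are assumptions.

```python
import hashlib

# In-memory stand-in for the hash database (assumption: real storage
# and matching are far more sophisticated).
blocked_hashes = set()

def hash_image(image_bytes: bytes) -> str:
    """Turn an image into a fixed-length fingerprint (the 'hash')."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_ncii(image_bytes: bytes) -> None:
    """Hash a submitted image and keep only the code; the image itself
    is discarded, so only the fingerprint remains."""
    blocked_hashes.add(hash_image(image_bytes))

def upload_allowed(image_bytes: bytes) -> bool:
    """Block any upload whose hash matches a registered image."""
    return hash_image(image_bytes) not in blocked_hashes

register_ncii(b"<submitted intimate image bytes>")
print(upload_allowed(b"<submitted intimate image bytes>"))  # False: blocked
print(upload_allowed(b"<some unrelated image bytes>"))      # True: allowed
```

The key property is the one the article describes: the platform never needs to keep the image itself, because the fingerprint alone is enough to recognize a re-upload.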

This proactive approach has been requested by many victims, and may be appropriate on a case-by-case basis. People who believe they’re at risk of exposure and are considering this process as an option should carefully discuss their situation with one of Facebook’s partner organizations. This will help them make sure they’re fully informed about the process so that they can feel empowered to decide if this is something that’s appropriate for their unique circumstances.  

For more information about how survivors can increase their privacy and safety on Facebook, check out our Facebook Privacy & Safety Guide for Survivors of Abuse.


 

[1] NCII refers to private, sexual content that a perpetrator shares publicly or sends to other individuals without the consent of the victim. How we discuss an issue is essential to resolving it. The term “revenge porn” is misleading, because it suggests that a person shared the intimate images as a reaction to a victim’s behavior.

Privacy Risks and Strategies with Online Dating & Gaming

Both online dating and online gaming are fast-growing industries that are increasingly becoming a regular part of life. Online dating has rapidly gained popularity as a common way to connect with potential dates or find a partner. And, contrary to popular perception, online gaming is not just a pastime for teenage boys. Many people have concerns about the safety of online dating, often due to widely publicized stories of assault and abuse, and unfortunately, online harassment is an all-too-common experience in gaming as well, one that can also cross into real life.

Everyone should be able to be online safely, free from harassment and abuse, and that includes dating and gaming. For survivors of domestic violence, sexual assault, and stalking, privacy and safety concerns may be even greater when trying to engage in online spaces. Fortunately, it is possible to increase privacy and safety when dating and gaming online.

Two new resources from Safety Net discuss both risks and strategies for survivors who want to be active in online dating or gaming communities.

Harassment, threats, and abuse that happen “only” online should be taken seriously. Such experiences can be traumatizing, and may include financial crime or identity theft. Victims report efforts to ruin their reputations and drive them from the online community. If enough identifying information is known, the abuse can also quickly become an offline threat.

If you are concerned about online harassment or abuse, see our Survivor Toolkit for more information about Online Privacy & Safety Tips, guides to Facebook and Twitter, and for resources to assist in documenting abuse.

Online harassment and abuse may fall under a number of crimes, depending on what is happening. To learn more about laws in your state on online harassment, visit WomensLaw.org.

Twitter Announces New Safety Features In Latest Effort To Protect Users


Most people who have spent time on Twitter have seen the harassment that can take place on the platform: users taking advantage of the ability to remain anonymous and using it to intimidate, threaten, dox, and otherwise abuse people in a very personal and targeted fashion. In an effort to combat the often rampant abuse on its platform, Twitter announced four new safety features this month. The changes come in large part from guidance received from the Twitter Trust and Safety Council (of which NNEDV is a member) and feedback from victims of harassment and abuse on the platform.

  • In the past, even when someone was permanently banned from Twitter for their abusive behavior, it was relatively easy for them to create a new account and continue their harassment. Twitter is now taking steps to identify those people and stop them from being able to create new accounts.
  • Twitter has created a safe-search mode that will remove Tweets containing harmful content from your search results, along with Tweets created by accounts you have blocked or muted. You can turn safe-search mode off and on so that you can still find the abusive content if you want or need to (to monitor an abuser's behavior, collect evidence, or make a report to Twitter, for example).
  • Abusive, confrontational replies that are created by new accounts without many followers, and that are directed at a person who doesn’t follow the account, will be pushed to the bottom of conversations and housed in a section called “less relevant replies.” The replies will still be viewable by those who want to see them, but won’t interrupt productive, civil conversations.

  • If an account holder has blocked you but is continuing to mention you in abusive or harassing Tweets, you will now be able to report those Tweets.

We’re pleased to see Twitter take these steps to make their platform a safer place for survivors of harassment and abuse, and we look forward to seeing continued advances in promoting civility and safety online.

For more information on how to increase your safety and privacy on Twitter, be sure to check out our guide to Safety & Privacy on Twitter for survivors of harassment and abuse. It provides tips and guidance for increasing privacy on the social network, and for how to respond to others who misuse the platform.

YouTube’s New Tools Attempt to Address Online Harassment

Online harassment and abuse can take many forms. Threatening and hateful comments turn up across online communities, from newspapers to blogs to social media. Anyone posting online can be the target of these comments, which cross the line from honest disagreement to vengeful and violent attacks. This behavior is more than someone saying something you don’t like or something “mean”; it often includes ongoing harassment that is nasty, personal, or threatening in nature. For survivors of abuse, threatening comments can be traumatizing and frightening, and can lead some people to stop participating in online spaces.

YouTube recently created new tools to combat online abuse occurring within comments. These tools let users who post on the site choose words or phrases to “blacklist,” as well as opt into a beta (or test) version of a filter that flags potentially inappropriate comments. With both tools, flagged comments are held for the user’s approval before going public. Users can also select other people to help moderate the comments.

Here’s a summary of the tools, pulled from YouTube:

  • Choose Moderators: This was launched earlier in the year and allows users to give select people they trust the ability to remove public comments.
  • Blacklist Words and Phrases: Users can have comments with select words or phrases held back from being posted until they are approved.
  • Hold Potentially Inappropriate Comments for Review: Currently available in beta, this feature offers an automated system that flags and holds potentially inappropriate comments, as identified by YouTube’s algorithm, for approval before they are published. The algorithm may, of course, flag content that the user thinks is fine, but its detection will improve based on the user’s choices.
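At its core, the blacklist tool works like a simple keyword filter: a comment is held for review when it contains any blocked word or phrase. Here is a minimal sketch of that idea in Python; the matching rule (case-insensitive substring search) and all names are assumptions for illustration, since YouTube’s actual matching logic is not public.

```python
def should_hold(comment: str, blacklist: list[str]) -> bool:
    """Hold a comment for the user's review if it contains any
    blacklisted word or phrase (case-insensitive substring match)."""
    text = comment.lower()
    return any(phrase.lower() in text for phrase in blacklist)

# A user can blacklist fragments of their own personal information
# (parts of a phone number, street name, etc.) to catch doxing attempts.
blacklist = ["555-0142", "elm street"]
print(should_hold("She lives on Elm Street, go find her", blacklist))  # True
print(should_hold("Great video, thanks!", blacklist))                  # False
```

This is why the feature is useful against doxing: the filter catches a comment containing the blacklisted information before it ever appears publicly, rather than requiring the user to report it afterward.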

Survivors who post online know that abusive comments can come in by the hundreds or even thousands. While many sites have offered a way to report or block comments, these steps have only been available after a comment is already public, and each comment may have to be reported one by one. This new approach helps to catch abusive comments before they go live, and takes the pressure off of having to watch the comment feed 24 hours a day.

These tools also offer survivors a means to be proactive in protecting their information and safety. Since much online harassment includes tactics such as doxing (posting someone’s personal information online with the goal of causing them harm), a YouTube user can add their personal information to the list of words and phrases that are not allowed to be posted. This can include all or part of phone numbers, addresses, email addresses, or usernames of other accounts. Being able to proactively block someone from posting your personal information in this space is a powerful tool.

Everyone has the right to express themselves safely online, and survivors should be able to fully participate in online spaces. Connecting with family and friends online helps protect against the isolation that many survivors experience. These new tools can help to protect survivors’ voices online.

Threats on Facebook: LOL, I Didn’t Really Mean It

The Issue

Before the U.S. Supreme Court is a case that will decide how courts should evaluate a person’s online communications: whether they are threatening or protected First Amendment speech. The specific case is Anthony Douglas Elonis v. United States. Elonis was convicted of making threats against a variety of people, including his estranged wife, on Facebook.

Elonis’ threats of harm against his wife included a rap lyric that said, “Fold up your PFA [protection from abuse order]…is it thick enough to stop a bullet,” as well as a detailed post about how it technically wasn’t illegal for him to say: “the best place to fire a mortar launcher at her house would be from the cornfield behind it because of easy access to a getaway road and you’d have a clear line of sight through the sun room,” accompanied by a diagram. His other threats included wanting to blow up the state police and sheriff departments; threats against an FBI agent; and claims to make a name for himself by “[initiating] the most heinous school shooting ever imagined.”

The issue at hand: must Elonis’ intention to carry out those threats be proven for the threats to be considered credible? Elonis and his supporters argue that “subjective intent” should be the standard for proving threats are real. NNEDV and others argue that “objective intent,” which takes into account the content and the context of the statements, is the correct standard for determining the credibility of threats.

Are Online Threats Real?

The perceived anonymity of the internet has allowed many to harass, intimidate, and threaten others, particularly women, in more ways than ever before. In recent years, we have seen a rise in young people committing suicide after online bullying, female bloggers and gamers viciously attacked online, and women being threatened by “anonymous” mobs for daring to speak out on women’s issues.

The reality is that after any kind of threat, victims fear for their safety. They will leave their homes, change their names, change their phone numbers, abandon careers, leave school, and withdraw from online spaces, including major platforms such as Twitter or Facebook. Survivors have gone to great lengths to feel safer. So are online threats real? The consequences to the victims are very real.

Furthermore, in most cases of domestic violence and stalking, online threats aren’t made in isolation. They are often made as part of other abusive behavior, including physical, emotional, or sexual abuse; intimidation; harassment; and attempts to control the other person. Nor are victims' fears unfounded. Each day, an average of three women are killed by a former or current intimate partner in the United States. Context is important. If a victim is so terrified of her abuser that a judge agrees and gives her a protection order forbidding the abuser from even going near her, that context matters. So when he goes home and writes a “lyric” about how a protection order will not stop a bullet, and posts it online for everyone to see, that is terrifying. This isn’t a Taylor Swift breakup song. This is a threat.

Freedom of Speech

Those in defense of Elonis argue that intent matters for a variety of reasons. The central argument is that if threats are assessed by the victim’s perception of the threat rather than the speaker’s intent when making it, it could chill freedom of speech. According to the ACLU’s brief, not taking into account the speaker’s intent could result in “self-censorship [so as] to avoid the potentially serious consequences of misjudging how his words will be received.” The briefs also claim that it is difficult to assess how speech will be perceived when it is made over the internet, because an audience could interpret online statements in many different ways.

The Marion B. Brechner First Amendment Project claims that requiring proof of intent is necessary because Elonis’ statements were artistic rap lyrics. The inflammatory and violent statements, while distasteful, were artistic self-expression. Those who are unfamiliar with the rap genre, the brief asserts, could hold negative stereotypes and “falsely and incorrectly interpret them as a threat of violence or unlawful conduct.”

Online Threats and Domestic Violence, Stalking, and Violence Against Women

These arguments would only work if we lived in a world where outlandish speech is made only in the name of art or politics. In the real world, online threats, particularly against women or intimate partners, are not artistic or political speech. They are violent speech that terrorizes victims. Regardless of whether the abuser or stalker actually intends to blow up the victim’s home with a mortar or cut her up until she’s soaked in blood and dying (another of Elonis’ online posts), he is accomplishing one of his goals: terrifying the victim.

Threats against women cannot be minimized because they happen online. Or because the abuser hasn’t yet carried out the crime. Or because we’re worried that enforcing consequences for these threats will cause people to feel less free to speak their minds and hinder freedom of speech. In the context of domestic violence and stalking, threatening language is threatening. Needing to show subjective intent would make it more difficult to hold abusers and stalkers accountable for terrifying victims and would imply that it’s okay to make such threats, as long as, you know, you didn’t really mean it.

Read our brief for the Elonis v. United States case here.

Read our official press release statement here.