Facebook’s Proactive Approach to Addressing Nonconsensual Distribution of Intimate Images

It’s well-known that technology has made sharing sexually intimate content easier. While many people share intimate images without any problems, there’s a growing issue with the nonconsensual distribution of intimate images (NCII[1]), often referred to as “revenge porn.” Perpetrators often share, or threaten to share, intimate images in an effort to control, intimidate, coerce, shame, or humiliate others. A survivor who has been threatened or victimized by someone sharing their intimate images not only deserves the opportunity to hold the perpetrator accountable, but should also have better options for removing content or keeping it from being posted in the first place.

Recently, Facebook announced a new pilot project aimed at stopping NCII before it can be uploaded to its platforms. The process gives people who wish to participate the option of submitting intimate images or videos they’re concerned someone will share without their permission to a small, specially trained group of professionals within Facebook. Once submitted, each image is given what’s called a “hash value,” and the actual image is deleted. “Hashing” means that the image is turned into a digital code that serves as a unique identifier, similar to a fingerprint. Once the image has been hashed, Facebook deletes it, and all that’s left is the code. Facebook then uses that code to detect when someone attempts to upload the image and to prevent it from being posted on Facebook, Messenger, and Instagram.
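To make the “fingerprint” idea concrete, here is a minimal sketch of file hashing using Python’s standard hashlib library. Facebook hasn’t published the details of its hashing method, so this illustrates the general concept rather than their implementation, and the filename is a made-up example.

```python
# A sketch of turning an image file into a hash "fingerprint" using SHA-256.
# This shows the general concept only; Facebook's actual method is not
# public, and "photo.jpg" is a hypothetical filename.
import hashlib

def hash_image(path: str) -> str:
    """Read a file's bytes and return its SHA-256 hash as a hex string."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The hash is a short, fixed-length code. The original image cannot be
# reconstructed from it, which is why the image itself can be deleted.
print(hash_image("photo.jpg"))
```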

Facebook’s new pilot project may not be something everyone feels comfortable using, but for some it may bring much peace of mind. For those who believe it may help in their situation, we’ve outlined detailed information about how the process works:

  1. Victims work with a trusted partner. Individuals who believe they’re at risk of NCII and wish to have their images hashed should first contact one of Facebook’s trusted partners: the Cyber Civil Rights Initiative, YWCA Canada, UK Revenge Porn Hotline, and the eSafety Commissioner in Australia. These partners will help them through the process and identify other assistance that may be useful to them.
  2. Partner organizations help ensure appropriate use. The partner organization will carefully discuss the individual’s situation with them before helping them start the hashing process. This helps ensure that individuals are seeking to protect their own image and not trying to misuse the feature against another person. It’s important to note that the feature is meant for adults and not for images of people under 18. If the images are of someone under 18, they will be reported to the National Center for Missing and Exploited Children. Partner organizations will help to explain the reporting process so that individuals can make appropriate decisions for their own case.
  3. The image will be reviewed by trained staff at Facebook. If the images meet Facebook’s definitions of NCII, a one-time link is sent to the individual’s email. The link takes the individual to a portal where they can directly upload the images. All submissions are then added to a secure review queue, where they will be reviewed by a small team specifically trained in reviewing content related to NCII abuse.
  4. NCII will be hashed and deleted. All images that are reviewed and found to meet Facebook’s definition of NCII will be translated into a set of numerical values to create a code called a “hash.” The actual image will then be deleted. If Facebook reviews an image and determines that it does not meet their definition of NCII, the individual will receive an email letting them know (so it’s critical to use an email account that no one else can access). Even if the content submitted does not meet Facebook’s definition of NCII, the individual may still have other options; for example, they may be able to report an image for a violation of Facebook’s Community Standards.
  5. Hashed images will be blocked. If someone tries to upload a copy of the original image that was hashed, Facebook will block the upload and show a pop-up message notifying the person that the attempted upload violates Facebook’s policies. (A rough sketch of this matching step follows below.)
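For illustration only, here is how that matching step might look in code, reusing the hash_image() sketch from above. This is a simplification: a cryptographic hash like SHA-256 only matches exact byte-for-byte copies, while production matching systems typically use perceptual hashing so that resized or re-encoded copies are still caught. The filenames and the can_upload() helper are hypothetical.

```python
import hashlib

def hash_image(path: str) -> str:
    """Return the SHA-256 hex digest of a file (as in the sketch above)."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hashes of images already reviewed and confirmed as NCII (step 4).
# Filenames here are hypothetical placeholders.
blocked_hashes = {hash_image("reviewed_submission.jpg")}

def can_upload(path: str) -> bool:
    """Return False (block the upload) if the file's hash is blocklisted."""
    return hash_image(path) not in blocked_hashes

if not can_upload("attempted_upload.jpg"):
    print("This upload violates our policies and has been blocked.")
```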

This proactive approach has been requested by many victims, and may be appropriate on a case-by-case basis. People who believe they’re at risk of exposure and are considering this process as an option should carefully discuss their situation with one of Facebook’s partner organizations. This will help them make sure they’re fully informed about the process so that they can feel empowered to decide if this is something that’s appropriate for their unique circumstances.  

For more information about how survivors can increase their privacy and safety on Facebook, check out our Facebook Privacy & Safety Guide for Survivors of Abuse.



[1] NCII refers to private, sexual content that a perpetrator shares publicly or sends to other individuals without the consent of the victim. How we discuss an issue is essential to resolving it. The term “revenge porn” is misleading because it suggests that the person shared the intimate images as a reaction to the victim’s behavior.

Cambridge Analytica and Why Privacy Matters to Survivors

Recent news that Cambridge Analytica used the personal information of tens of millions of people “to create algorithms aimed at ‘breaking’ American democracy,” as the New Yorker phrases it, has led to a call to #DeleteFacebook. For those unfamiliar with the story, our friends at AccessNow wrote a great summary.

This kind of invasion of privacy is not new, nor is it limited to this case. The old expression “no free lunch” applies to any service we don’t pay for, whether it’s social media, a discount card at the grocery store, or a raffle to win a new car. The true cost is allowing those companies to access our personal information for their own profit.

Safety is the primary concern. For survivors who face threats of harm, who live daily in fear of their abusers, the security of personal information can be a life-and-death issue. For survivors fleeing an abuser, information about their location, work, kids’ schools, and social connections can lead the abuser to their doorstep. For survivors living with abuse, information about friends, thoughts, feelings, opinions, and interests can be misused by an abuser to control, isolate, or humiliate.

For survivors, privacy is not an abstract issue, or a theoretical right to be debated on CSPAN. Privacy is essential to safety, to dignity, to independence. Yet, we live in a time when personal information = profit.

The Cambridge Analytica story surfaces the underlying reality that our personal information is not under our control. It feels like we are seldom asked for consent to share our personal data. When we are, it is in legalese, in tiny letters that we might have to scroll through to be able to check that box, and get on with using whatever website we’re trying to use. Even if we do take the time to read through those privacy terms, we know that data is routinely stolen, or accidentally published on the Internet, or used against us to affect access to loans, insurance, employment, and services.

We are social animals. We crave connection. Research shows that we suffer without it. Isolation is a classic tactic of abuse. But the price we too often pay for connection online is our privacy.

At times like these, we may think about deleting Facebook, going offline, or throwing away our phones. We may think that survivors should give up their tech at the door of our shelters, or that they have to go off the grid in order to be safe.

Digital exile is not the answer. Technology and the Internet are a public space where everyone, including survivors, should have the right to share their voices, to make connections, and to access information without fear of their personal information being collected and used without their consent. April Glaser writes in Slate that “[d]eleting Facebook is a privilege,” pointing to the huge number of people who rely on it to connect with friends, to learn about events, to promote a business, or, in parts of the world with limited Internet access, just to be online at all.

Survivors, just like every other consumer, should be given the opportunity to give truly informed consent. That consent must be based on clear, simple, meaningful, understandable privacy policies and practices – not just a check box that no one pays attention to.

A guide to the process of changing your Facebook settings to control apps’ access to your data is available from the Electronic Frontier Foundation. Also check out our own guides to Online Privacy and Facebook Privacy and Safety.

Privacy Risks and Strategies with Online Dating & Gaming

Both online dating and online gaming are fast-growing industries that are increasingly a regular part of life. Online dating has rapidly gained popularity as a common way to connect with potential dates or find a partner. And, contrary to popular perception, online gaming is not just a pastime for teenage boys. Many people have concerns about the safety of online dating, often due to widely publicized stories of assault and abuse. Unfortunately, online harassment is also an all-too-common experience while playing games online, and it can cross into real life.

Everyone should be able to be online safely, free from harassment and abuse, and that includes dating and gaming. For survivors of domestic violence, sexual assault, and stalking, privacy and safety concerns may be even greater when trying to engage in online spaces. Fortunately, it is possible to increase privacy and safety when dating and gaming online.

Two new resources from Safety Net discuss both risks and strategies for survivors who want to be active in online dating or gaming communities.

Harassment, threats, and abuse that happen “only” online should be taken seriously. Such experiences can be traumatizing, and may include financial crime or identity theft. Victims report efforts to ruin their reputations and drive them from the online community. If enough identifying information is known, the abuse can also quickly become an offline threat.

If you are concerned about online harassment or abuse, see our Survivor Toolkit for more information about Online Privacy & Safety Tips, guides to Facebook and Twitter, and for resources to assist in documenting abuse.

Online harassment and abuse may fall under a number of crimes, depending on what is happening. To learn more about laws in your state on online harassment, visit WomensLaw.org.

Safety Check

If you think your activities (online and offline) are being monitored, you are probably right. People who are abusive often want to know their victim’s every move and interaction. If this is something you’re experiencing, it’s important to think through how they might be tracking your online activity. These tips can help you think through how to access information online more safely:

  • Computers, mobile devices, and online accounts store a lot of private information about what you view online – the websites you visit (like this one), the things you search for, the emails and instant messages you send, the online videos you watch, the things you post on social media, the online phone or IP-TTY calls you make, your online banking and purchasing, and many others. 
  • If your mobile device or computer is easily accessible to the abuser, be careful how you use it. You may want to keep using those devices for activities that won’t trigger violence – like looking up the weather – and find safer devices (like a public computer at the library) to look up information about how to get help.
  • If the person who is abusive has access to your online accounts (social media, email, phone bill, etc.), or has had access to them in the past, it is often helpful to update the usernames and passwords for those accounts from a safer device.
  • You can also set up a new email address that they aren’t aware of, and connect your online accounts to it (rather than the old email address they know). It can be helpful to make the new address something that is more anonymous, instead of using your actual name or a handle you are already known by.
  • Keep in mind that if you think you are being monitored, it might be dangerous to suddenly stop your online activity or to cut off the person’s access to your accounts. You may want to keep using the monitored devices or accounts for activities that won’t trigger violence, and use safer devices and accounts (like a public computer at the library) to look up information about how to get help or to communicate with people privately.
  • Email, instant messaging and text messaging with domestic violence agencies leaves a detailed digital trail of your communication, and can increase the risk that your abuser will know not only that you communicated, but the details of what you communicated. When possible, it’s best to call a hotline. If you use email, instant messaging, or text messaging, try to do so on a device and account that the abuser doesn’t know about or have access to, and remember to erase any messages you don’t want the abusive partner to see.

Check out NNEDV’s Technology Safety & Privacy Toolkit for Survivors for more important information.

This project was supported by Grant No. 2016-TA-AX-K069 awarded by the Office on Violence Against Women, U.S. Department of Justice. The opinions, findings, conclusions, and recommendations expressed in this program are those of the author(s) and do not necessarily reflect the views of the Department of Justice, Office on Violence Against Women.


Data Privacy Day: Honoring A Survivor’s Right To Safely Access Technology


When a survivor reaches out to a domestic violence program for help, it’s often as a last resort and with much trepidation. Social connection, access to financial resources, and a safe home have often been systematically stripped away from them by their abuser. Smartphones, email, and social media accounts are often the last remnants of their connection to support, and can serve as an important lifeline when they’re in danger.

Yet we often hear from survivors that when they’ve reached out for help about the harassment, stalking, and abuse they’ve experienced through technology and social media, the only advice they get is to completely disconnect from technology and delete their accounts. But this places the blame in the wrong place. The technology isn’t the issue; the abuser’s behavior is. And worse yet, this response punishes the victim for the abuse they’ve suffered, forcing them to become more isolated because their only option is to disconnect. It also impacts their safety; if a survivor is in need of help but can no longer access their support systems, the risk of danger can increase dramatically.

This Data Privacy Day, we celebrate a survivor’s right to safely access technology, and we encourage programs to proactively safety plan with survivors to help them feel empowered and safe in their technology use. We need to view safe access to technology, the internet, and social media as a fundamental right of survivors. Technology is a necessity in our everyday lives, and removing it is not a feasible option. Instead, domestic violence programs can help survivors not only find temporary refuge, but also build new skills that will empower them to stay connected, feel less isolated, and have communication tools that can help in emergency situations.

The Safety Net Project develops tools and resources that help both survivors and victim service agencies become more informed about how to safely use technology, and about how abusers might misuse technology to stalk and harass. On Data Privacy Day, we encourage you to explore these tools listed below, and to reach out to us with any questions you may have about the safe use of technology.

  • The TechSafety App - This app was created for anyone who thinks they might be experiencing harassment or abuse through technology or who wants to learn more about how to increase their privacy and security while using technology.
  • Technology Safety & Privacy Toolkit For Survivors - Survivors of domestic violence, sexual assault, stalking, and trafficking often need information on how to be safe while using technology. This toolkit provides safety tips, information, and privacy strategies to help survivors respond to potential technology misuse and to increase their safety and privacy.
  • The App Safety Center - There’s an app for everything, right? An increasing number of apps for smartphones and tablets are attempting to address the issues of domestic violence, sexual assault, and/or stalking. With so many apps, knowing which ones to use can be difficult. The App Safety Center will highlight some of these apps by providing information on what survivors and professionals need to know to use them safely.
  • Agency’s Use of Technology: Best Practices & Policies Toolkit - The way domestic violence, sexual assault, and other victim service agencies use technology can impact the security, privacy, and safety of the survivors who access their services. This toolkit contains recommended best practices, policy suggestions, and handouts on the use of common technologies. 

YouTube’s New Tools Attempt to Address Online Harassment

Online harassment and abuse can take many forms. Threatening and hateful comments turn up across online communities, from newspapers to blogs to social media. Anyone posting online can be the target of these comments, which cross the line from honest disagreement into vengeful and violent attacks. This behavior is more than someone saying something you don’t like or something “mean” – it often includes ongoing harassment that is nasty, personal, or threatening in nature. For survivors of abuse, threatening comments can be traumatizing and frightening, and can lead some people to stop participating in online spaces.

YouTube recently created new tools to combat online abuse within comments. These tools let users who post on the site choose words or phrases to “blacklist,” and offer a beta (or test) version of a filter that flags potentially inappropriate comments. With both tools, flagged comments are held for the user’s approval before going public. Users can also select other people to help moderate comments.

Here’s a summary of the tools, pulled from YouTube:

  • Choose Moderators: This was launched earlier in the year and allows users to give select people they trust the ability to remove public comments.
  • Blacklist Words and Phrases: Users can have comments with select words or phrases held back from being posted until they are approved.
  • Hold Potentially Inappropriate Comments for Review: Currently available in beta, this feature offers an automated system that flags and holds, according to YouTube’s algorithm, potentially inappropriate comments for approval before they are published. The algorithm may, of course, hold content that the user thinks is fine, but its detection will improve based on the user’s choices.

Survivors who post online know that abusive comments can come in by the hundreds or even thousands. While many sites offer a way to report or block comments, those steps are available only after a comment is already public, and each comment may have to be reported one by one. This new approach helps catch abusive comments before they go live, and takes the pressure off having to watch the comment feed 24 hours a day.

These tools also offer survivors a way to be proactive in protecting their information and safety. Since much online harassment includes tactics such as doxing (posting someone’s personal information online with the goal of causing them harm), a YouTube user can add their own personal information to the list of words and phrases that are not allowed to be posted. This can include part or all of phone numbers, addresses, email addresses, or usernames of other accounts. Being able to proactively block someone from posting your personal information in this space is a powerful tool.
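As a rough illustration of how a phrase blacklist might work, here is a minimal sketch. It uses simple substring matching, and every name in it is made up; YouTube’s real filters are more sophisticated, and its review algorithm is machine-learned rather than a fixed rule.

```python
# A minimal sketch of phrase-based comment holding. This is not YouTube's
# implementation; it only shows the basic idea of checking a comment
# against a user-chosen blacklist before the comment goes public.

def should_hold(comment: str, blacklist: list[str]) -> bool:
    """Return True if the comment contains a blacklisted word or phrase
    and should be held for the channel owner's approval."""
    text = comment.lower()
    return any(phrase.lower() in text for phrase in blacklist)

# Example: a survivor adds fragments of their personal information.
blacklist = ["555-0142", "123 main st", "jane.doe@example.com"]

print(should_hold("Call her at 555-0142", blacklist))  # True: held for review
print(should_hold("Great video, thanks!", blacklist))  # False: posted normally
```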

Everyone has the right to express themselves safely online, and survivors should be able to fully participate in online spaces. Connecting with family and friends online helps protect against the isolation that many survivors experience. These new tools can help to protect survivors’ voices online.

10 Steps to a More Secure Password

Today is World Password Day, and a good reminder to change your passwords. Passwords are used for almost everything we do these days because, without a password, anyone can get into all your stuff: your phone, email, bank account, social media, etc.

Here are some tips on how to create a secure password:

  1. Pick a password that will be hard for someone else to guess.
  2. Use different passwords for different accounts.
  3. The best passwords are longer than 8 characters and contain numbers and symbols.
  4. Keep your passwords simple enough that you can remember them.
  5. Share your password with no one. 
  6. Use 2-step verification/authentication (where you use your password as well as a code that's sent to your phone or email). 
  7. Uncheck the “remember me” or “keep me logged in” feature. 
  8. Always remember to log off. 
  9. Change your password often (today, for instance, on World Password Day!).
  10. Be strategic with secret questions and answers.

For more explanation of these tips, check out our handout on Passwords: Simple Ways to Increase Your Security.
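As one concrete way to follow tips 1–3, here is a minimal sketch using Python’s standard secrets module, which is designed for security-sensitive randomness. The 16-character length is an illustrative choice, not a rule from the handout.

```python
# A sketch of generating a strong random password with Python's standard
# "secrets" module. The length and character set are illustrative choices.
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password built from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a distinct password for each account (tip 2).
print(generate_password())
```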

5 Ways to Make the Internet a Safer Place

Tomorrow is Safer Internet Day, an international day aimed at raising public awareness that it takes all of us to make the internet great for everybody.

Here are some things we can all do to make the Internet a safer, happier, nicer place to be.

1. Be nice.

Don’t say mean things about other people online.

Source: giphy.com

2. Get consent first.

Ask permission before you post anything about someone, whether it's a picture of them or you're just mentioning them.

Source: youtube.com

3. Speak up.

If someone’s being mean to another person online, don’t be afraid to say something. There are others out there who are also thinking the same thing and will be more likely to speak up if you do. 

Source: youtube.com

4. Haters gonna hate.

So just shake it off. Be positive - it's infectious, so go ahead and make it viral. 

Source: giphy.com

5. Love the interwebs.

Treat it (and the people who use it) well and it will love you back.

Source: giphy.com

Learn more about Safer Internet Day and how you can get involved here. 

Threats on Facebook: LOL, I Didn’t Really Mean It

The Issue

Before the U.S. Supreme Court is a case that will decide how courts should evaluate someone’s online communications: as true threats or as protected First Amendment speech. The specific case is Anthony Douglas Elonis v. United States. Elonis was convicted of making threats against a variety of people, including his estranged wife, on Facebook.

Elonis’ threats of harm against his wife included a rap lyric that said, “Fold up your PFA [protection from abuse order]…is it thick enough to stop a bullet,” as well as a detailed post about how it technically wasn’t illegal for him to say: “the best place to fire a mortar launcher at her house would be from the cornfield behind it because of easy access to a getaway road and you’d have a clear line of sight through the sun room,” accompanied by a diagram. His other threats included wanting to blow up the state police and sheriff departments, threats against an FBI agent, and a claim that he would make a name for himself by “[initiating] the most heinous school shooting ever imagined.”

The issue at hand: should Elonis’ intention of carrying out those threats have to be proven for those threats to be credible? Elonis and his supporters argue that “subjective intent” is the standard to prove threats are real. NNEDV and others argue that “objective intent,” taking into account the content and the context of the statements, is the correct standard to determine the credibility of threats.

Are Online Threats Real?

The perceived anonymity of the internet has allowed many to harass, intimidate, and threaten others, particularly women, in more ways than ever before. In recent years, we have seen a rise in young people committing suicide after online bullying, female bloggers and gamers viciously attacked online, and women being threatened by “anonymous” mobs for daring to speak out on women’s issues.

The reality is that after any kind of threat, victims fear for their safety. They will leave their homes, change their names, change their phone numbers, abandon careers, leave school, and withdraw from online spaces, including major platforms such as Twitter or Facebook. Survivors have gone to great lengths to feel safer. So are online threats real? The consequences to the victims are very real.

Furthermore, in most cases of domestic violence and stalking, online threats aren’t made in isolation. They are often part of other abusive behavior, including physical, emotional, or sexual abuse; intimidation; harassment; and attempts to control the other person. Nor are victims’ fears unfounded: each day, an average of three women are killed by a former or current intimate partner in the United States. Context is important. If a victim is so terrified of her abuser that a judge agrees and grants a protection order forbidding the abuser from even going near her, that context matters. So when he goes home, writes a “lyric” about how a protection order will not stop a bullet, and posts it online for everyone to see, that is terrifying. This isn’t a Taylor Swift breakup song. This is a threat.

Freedom of Speech

Those defending Elonis argue that intent matters for a variety of reasons. The central argument is that if threats are assessed by the victim’s perception rather than the speaker’s intent, freedom of speech could be chilled. The ACLU’s brief argues that not taking the speaker’s intent into account could result in “self-censorship [so as] to avoid the potentially serious consequences of misjudging how his words will be received.” The briefs also claim that it is difficult to assess how speech is perceived when it is made over the internet, because an online audience could interpret the same statement in many different ways.

The Marion B. Brechner First Amendment Project claims that requiring proof of intent is necessary because Elonis’ statements were artistic rap lyrics. The inflammatory and violent statements, while distasteful, were artistic self-expression. Those who are unfamiliar with the rap genre, the brief asserts, could hold negative stereotypes and “falsely and incorrectly interpret them as a threat of violence or unlawful conduct.”

Online Threats and Domestic Violence, Stalking, and Violence Against Women

These arguments would only work if we lived in a world where outlandish speech is made solely in the name of art or politics. In the real world, online threats, particularly against women or intimate partners, are not artistic or political speech. They are violent speech that terrorizes victims. Regardless of whether the abuser or stalker actually intends to blow up the victim’s home with a mortar or cut her up until she’s soaked in blood and dying (another of Elonis’ online posts), he is accomplishing one of his goals: terrifying the victim.

Threats against women cannot be minimized because they happen online. Or because the abuser hasn’t yet carried out the crime. Or because we’re worried that enforcing consequences for these threats will cause people to feel less free to speak their minds and hinder freedom of speech. In the context of domestic violence and stalking, threatening language is threatening. Needing to show subjective intent would make it more difficult to hold abusers and stalkers accountable for terrifying victims and would imply that it’s okay to make such threats, as long as, you know, you didn’t really mean it.

Read our brief for the Elonis v. United States case here.

Read our official press release statement here.

Cybersecurity & Violence Against Women

In addition to being Domestic Violence Awareness Month, this month is also Cybersecurity Awareness Month. When we think about cybersecurity, we often think of security from identity theft, fraud, phishing, or hackers who steal passwords and information. But cyber – or online – security has a broader meaning for victims of domestic and sexual violence and stalking. Cybersecurity also means personal safety – safety from harm, harassment, and abuse while online.

For many survivors, being online can feel unsafe because the abuser or stalker is accessing their online accounts to monitor their activities; posting harmful and negative things about them, including sexually explicit images and personally identifying information; or using cyberspace to harass and make violent threats under the cover of “anonymity.” Abusers and stalkers often compromise the security of survivors’ technologies by installing monitoring software on cell phones or computers or forcing them to reveal passwords to online accounts.

In a study conducted by the National Network to End Domestic Violence, victim service providers reported that, of the survivors they work with, 75% have abusers who accessed their online accounts, 65% have abusers who monitored their online activities, and 68% have had pictures posted online by the abuser without their consent. In a survey by the Cyber Civil Rights Initiative, when abusers and stalkers distributed sexually explicit images of victims, 59% of the posts included the full name of the victim, 49% included social media information, and 20% included the victim’s phone number. Online harassment, in the context of abuse and stalking, can have serious and dangerous consequences.

So this month, as Domestic Violence Awareness Month and Cybersecurity Awareness Month coincide, let’s think about cybersecurity and safety beyond safely making an online purchase, and consider how we can create an environment where everyone can be personally safe from violence while online. How do we create a safe online space that doesn’t tolerate abuse? How do we support those who are victimized online, whether their ex is making threats via social networks, someone is distributing sexually explicit images of them online, or they’re being threatened by a group of strangers simply because they have an opinion about gender and dare to be in a male-dominated space? And how do we hold accountable those who threaten, abuse, and harass victims online?

This month—and all months—help us figure out the answers to these important questions. Comment below if you have thoughts or ideas.