Privacy Risks and Strategies with Online Dating & Gaming

Both online dating and online gaming are fast-growing industries that are increasingly becoming a regular part of life. Online dating has rapidly gained in popularity as a common way to connect to potential dates or find a partner. And, contrary to popular perception, online gaming is not just a pastime for teenage boys. Many people have concerns about the safety of online dating, often due to widely publicized stories of assault and abuse, and unfortunately, online harassment is an all-too-common experience in online gaming, one that can also cross into real life.

Everyone should be able to be online safely, free from harassment and abuse, and that includes dating and gaming. For survivors of domestic violence, sexual assault, and stalking, privacy and safety concerns may be even greater when trying to engage in online spaces. Fortunately, it is possible to increase privacy and safety when dating and gaming online.

Two new resources from Safety Net discuss both risks and strategies for survivors who want to be active in online dating or gaming communities.

Harassment, threats, and abuse that happen “only” online should be taken seriously. Such experiences can be traumatizing, and may include financial crime or identity theft. Victims report efforts to ruin their reputations and drive them from the online community. If enough identifying information is known, the abuse can also quickly become an offline threat.

If you are concerned about online harassment or abuse, see our Survivor Toolkit for more information about Online Privacy & Safety Tips, guides to Facebook and Twitter, and for resources to assist in documenting abuse.

Online harassment and abuse may fall under a number of crimes, depending on what is happening. To learn more about laws in your state on online harassment, visit

Data Privacy Day: The Gold Standard for Protecting Survivor Privacy

When thinking about domestic violence victims, data privacy isn’t the first thing that comes to mind for most people. But here at Safety Net, it’s always a top priority for us, and we spend a lot of time helping local domestic violence programs and other victim service providers understand the impact that their use of technology can have on the privacy of the survivors they work with.

Understanding what real data privacy looks like can be complicated. As we move ever more rapidly into a technology-driven world, local domestic violence programs are under increasing pressure to join in and adopt new technologies. There are many benefits to this: survivors gain new ways to find help that are often easier (and in some ways safer) than making a phone call or showing up at the front door, and programs' administrative work can become more streamlined, freeing up more time to help those they serve. But as with everything related to domestic violence, there are major risks involved in the use of technology that must be considered and minimized before moving forward.

Let’s start with why data privacy is so important. When survivors seek help, they take huge personal risks. If their abusive partner finds out they’ve asked for help, the abuse often escalates. They also face the possibility of harmful social and economic repercussions, like housing discrimination, job loss, and exclusion from their family or community. The information victims share with domestic violence programs is often incredibly sensitive, and if others gain access to it, it can be used to cause them further harm. This is why the Violence Against Women Act (VAWA) requires such stringent confidentiality practices – well beyond what the more widely known HIPAA practices require. (Learn more about this in our HIPAA/VAWA/VOCA FVPSA Privacy Comparison resource.)

Domestic violence programs often ask us to help them learn and understand best practices related to data privacy and online services. One practice we consistently encourage programs to consider is the use of zero-knowledge encryption services. When we suggest that as the best option for confidentiality, many want to know, “But what does that even mean?!” Well, zero-knowledge encryption is the best way to ensure that the information being sent between the survivor and the program, or the information the program stores in the cloud, is protected against all third-party access (a third party is anyone who is not the victim or the program that is helping them).

When a domestic violence program uses cloud-based services, they are essentially storing the information they are collecting at an outside location. And it is standard practice for most cloud-based companies to have access to the data that is being stored. This means that if they choose, they can go in and read all of the information the domestic violence program has stored about the victims they are working with. But when a software company uses zero-knowledge encryption, even THEY can’t see the data.

Here’s a helpful analogy for understanding how zero-knowledge encryption works: Imagine a physical storage company where you can rent a vault to store your organization's paper files. When you go there to rent a vault, they let you know that you will be the only one who has a key to your vault, and that there is no way to get into the vault without that key. The vault can't be broken into. And the storage company does not have an extra copy of the key. No one but you, or someone you give the key to, can get into the vault. This is what zero-knowledge encryption does for survivors' data. It ensures that only the domestic violence program has the key to unlock and access the data they have entered about survivors.

This is why we consider this the gold standard of data protection, and the one that most clearly aligns with VAWA confidentiality obligations. Software companies are third parties. And they get approached by other third parties - like law enforcement and abusers' attorneys - to share the data stored on their servers. If the software company can't see the data, they can't hand it over to others who might use it to harm the survivor, and the privacy and safety of the survivor is much more secure.
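To make the vault analogy concrete, here is a toy code sketch of the idea that only the key holder can read the data. This is an illustration only, not real cryptography; actual zero-knowledge services rely on vetted encryption algorithms and careful key management.

```python
# Toy illustration of zero-knowledge storage (NOT real cryptography).
# The key point: encryption happens on the program's own device, and the
# cloud provider only ever receives unreadable ciphertext.
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Simple XOR "cipher" for illustration; unreadable without the exact key.
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR-ing twice with the same key restores the original

# 1. The program (the vault renter) generates a key that never leaves it.
record = b"survivor case notes"
key = secrets.token_bytes(len(record))

# 2. Only ciphertext is uploaded; this is all the cloud company ever holds.
cloud_copy = encrypt(record, key)

# 3. Without the key, the provider cannot recover the record; with the key,
#    the program can.
assert decrypt(cloud_copy, key) == record
```

Real services use standardized algorithms (such as AES) rather than this toy scheme, but the trust model is the same: the provider stores data it cannot decrypt.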

If you have questions about this, feel free to reach out to us. To learn more about privacy and confidentiality, check out our Technology & Confidentiality Toolkit.


FTC Revenge Porn

January 2017

FTC’s Complaint is a Win Against Nonconsensual Disclosures of Intimate Images

NNEDV applauds the Federal Trade Commission (FTC) and the state of Nevada for filing a complaint against a notorious website to help protect survivors of nonconsensual disclosures of intimate images, what is commonly referred to as “Revenge Porn.”[1] The site, like many similar websites, is dedicated to the deeply damaging practice of soliciting intimate images and providing a space, and impunity, for individuals to post intimate images without consent. Many of these sites fully recognize the impact of distributing these images and have monetized the suffering of those depicted by charging hundreds or thousands of dollars to remove the images from the website. Websites that employ these tactics enhance the ability of abusive individuals to terrorize their victims: they solicit and widely disseminate nonconsensual images, then blackmail individuals who are desperately seeking to get the images removed.

While many more sites still engage in these deplorable practices, the FTC and Nevada have taken a step to combat an egregious example, and in doing so have also put others on notice about the consequences of running these enterprises. In the current case, one executive of the company that runs the site has already agreed to a fine and to comply with a ban on posting intimate images. The website itself is still online, but the complaint is still pending and could result in large fines for the company and other members of the executive team.

We often hear about the many ways in which individuals are terrorized on the web, but NNEDV is encouraged by the steps taken by the FTC and Nevada and hopes that other states will follow their lead in working to combat the nonconsensual disclosure of intimate images.

For more information about responding to nonconsensual disclosure of intimate images, check out our survivor toolkit and/or advocate toolkit.


[1] NNEDV and many advocates are against the term “revenge porn” because we believe it inaccurately describes the practice of nonconsensual disclosure of intimate images. While some individuals make nonconsensual disclosures for revenge, many perpetrators have a mix of motivations that may or may not include revenge. Furthermore, calling these images “porn” inappropriately suggests that those depicted are part of a pornography industry, when in fact the disclosure of these images is a crime in most states. “Nonconsensual disclosure of intimate images” more accurately captures the range of motivations behind these disclosures and better describes the images themselves.

Addressing Technology Misuse in the Context of Sexual Assault

Two new resources from Safety Net discuss Technology Misuse in Sexual Assault, and offer advocates and others working with survivors a tool for Assessing Technology Misuse and Privacy Concerns.

As technology becomes woven into every aspect of society, offenders also misuse it in the context of sexual assault. Just as the dynamics of sexual assault differ from those of domestic violence, the misuse of technology looks different when sexual assault occurs outside of an intimate partner relationship.

  • A youth group leader might misuse online communities to groom victims.
  • A supervisor might threaten to change an employee’s file in a company database.
  • A caretaker might limit access to help-seeking through technology.
  • A medical provider might threaten to share embarrassing information or images gathered in the course of treatment.
  • A landlord might misuse surveillance cameras and security systems to gain footage of or access to a victim.
  • A law enforcement officer could misuse a database to target potential victims.

More widely recognized examples include the explosion in the production and sharing of child pornography, and the nonconsensual sharing of intimate images or footage of sexual assault of adults over the Internet.

Privacy Concerns

In addition, sexual assault cases in the public eye can generate distressing comments on news stories and social media, and some survivors may become the target of online harassment, doxing or other retaliation.

Technology and Root Causes

Online spaces amplify existing attitudes and beliefs, and so can support rape culture through memes, viral posts, revenge porn sites, etc. At the same time, online advocacy and activism efforts have used online spaces to counter rape culture through awareness, events, bystander intervention and more.

Emergency SOS on Apple iOS 11: Safety Features and Security Concerns

Apple recently unveiled its newly updated operating system - iOS 11 - for iPhone, iPad, and iPod Touch. The operating system offers a variety of new tools that will impact the lives of survivors of domestic violence. This two-part blog series features two of the new tools. Recently we released a blog about the screen recording feature. Today, we are featuring the Emergency SOS calling feature.

The new Emergency SOS feature in iOS 11 allows iPhone users to call emergency services, with the added options to alert trusted contacts via text message that the emergency call was placed, and to send those contacts updates on the user’s current location. You can do all of these things without unlocking the phone. As with all technology, it’s important to look closely at both the benefits and the ways the new feature may impact a safety planning process. If you’re at risk of abuse from somebody known to you, it’s very important to know that the Emergency SOS button may be part of a safety plan, but it is not a substitute for one. In this blog, we’ll take you step-by-step through the process of setting up this feature and offer key safety considerations for survivors who are interested in using it.

The Emergency SOS feature is not a new feature for smartphones, but this is the first time a tool like this has been available as a part of the iPhone's operating system. The Samsung Galaxy S6 and S7, and other Android-based devices have had this feature built into the operating system for several years. Similar tools that allow a user to quickly contact trusted contacts and emergency services have also been available as third-party apps in the Apple App and Google Play Stores. For a review of personal safety apps, please take a look at the Safety Apps: Getting Help During an Emergency page of our App Safety Center.


If you want to activate or test the Emergency SOS feature, you can do so by going into the Settings app of your device. Once there, scroll down until you see the Emergency SOS tab, which currently looks like the image below. Clicking on the Emergency SOS tab will take you to the set-up page. Information will appear that will assist you in setting up the feature.

Activating Emergency SOS mode in iOS 11

Triggering Emergency SOS

The buttons you press to trigger Emergency SOS depend on which phone you have.

  • For iPhones 7, 7+, and older, press the button on the right side of the device rapidly 5 times to activate Emergency SOS.
  • For iPhone 8, 8+, and X, press and hold the button on the right side while at the same time pressing and holding one of the two volume buttons.

Set Up Options

There are two modes for Emergency SOS. For the sake of this blog, we will call them the default mode and the auto-call mode.

Default Mode: If you activate Emergency SOS without making any changes, the feature will be in default mode. After the Emergency SOS feature has been triggered in default mode, a screen will appear with several different buttons that the user can slide to call Emergency SOS, access Medical ID, power off the device, or cancel the triggered feature (in case it was triggered accidentally). This is what you will see on the device:


Key consideration: The default mode is helpful for avoiding accidental calls to emergency services, but it also means you will have to look at the phone to actually place the call, which may raise the suspicions of the abusive person.  

Auto Call Mode: Alternatively, auto-call mode can be turned on so that once the Emergency SOS feature has been triggered, a call will automatically be placed. The call is delayed by 3 seconds, which gives the user an opportunity to cancel it.

Key consideration: It's important to note that by default, auto-call mode will sound a loud, siren-like alarm when Emergency SOS is triggered; however, the sound can be turned off. Once auto-call mode is triggered, the alarm sounds and a short, 3-second countdown appears on the screen, during which the 911 call can be canceled. After the 3 seconds have passed, the call will be placed. The countdown gives the user a chance to cancel an accidental call, and the alarm draws the attention of anyone in the area. While sounding an alert can be helpful for avoiding accidental 911 calls or for drawing public attention during incidents of violence, it will also alert an abusive person that the feature has been triggered, giving them time to grab the phone from the victim and possibly cancel the call. Particularly in cases of domestic violence, alerting the abusive person may escalate violence. Thankfully, Apple offers a way to disable the warning sound: scroll down to the bottom of the Emergency SOS settings page and toggle off the Countdown Sound button (see images below).


Emergency Contact Notification & Location Sharing

Within the Emergency SOS settings, you also have the option of selecting multiple emergency contacts to receive notice of the emergency. These contacts are drawn from the contacts you set up in the Medical ID feature on iOS. Once emergency contacts are selected, if you activate the Emergency SOS feature and a call to 911 is placed, each of these contacts will receive a text message notifying them that you have contacted emergency services and providing your current location. There is approximately a ten-second window in which you can cancel the text message notification before it is sent. Additionally, if your location services are turned off, the iPhone will temporarily turn them on.

If your location changes after the initial text message is sent, your contacts will receive ongoing SOS location updates as you move around. Your phone will display a blue alert at the top of the screen to let you know that your location is being shared, and after 10 minutes you will receive a reminder that your emergency contacts are still receiving location updates. You can stop sharing the updates at any time; otherwise, you will be reminded every 4 hours that updates are still being sent, and the reminders stop after 24 hours have passed.

Your contacts will receive a text message similar to the image below.


Key Considerations: If the abusive partner gets your phone while the call is in progress and hangs it up, they may be able to cancel the emergency contact notification before it engages. Additionally, you can only assign emergency contacts who are part of your contact list. This can be a problem if you want to use a number that you need to keep discreet and separate from your contact list. One possible workaround is to assign the number under a fake name that you aren’t worried about your partner seeing; you would then just need to remember who the number actually reaches. There may be a way to hide contacts through iCloud and the default contacts app; however, hiding is unlikely to help here, because the individual emergency contacts must remain visible within the Medical ID feature in order to use them for Emergency SOS. Finally, if your contact does not have good service at the time of the message, they may not receive the exact location, or may not receive the location data at the same time that it is sent. This means it's important to consider someone's availability and phone reception when deciding whether to include them as an emergency contact.

Overall, as with any technology, there are benefits and risks for safety and privacy. If this feature is something you think may increase your safety, give you options for communicating emergency needs quickly, or simply give you peace of mind, learn as much as you can and test it out so you’re comfortable with relying on it in the case of an emergency.

Have questions about this or how other technologies impact victims of domestic violence? Reach out to us!

This project was supported by Grant No. 2016-TA-AX-K069 awarded by the Office on Violence Against Women, U.S. Department of Justice. The opinions, findings, conclusions, and recommendations expressed in this program are those of the author(s) and do not necessarily reflect the views of the Department of Justice, Office on Violence Against Women.

Screen Recording on Apple iOS 11: Safety Features and Security Concerns


Apple recently unveiled its newly updated operating system - iOS 11 - for iPhone, iPad, and iPod Touch. The operating system offers a variety of new tools that will impact the lives of survivors of domestic violence. This two-part blog series features two of the new tools - a screen recording feature and an Emergency SOS calling feature. In today’s blog, we will focus on the new screen recording tool.

As with most technologies, the iOS 11 updates have potential both to help survivors and to be misused by abusers. The screen record feature in iOS 11 is a perfect example of a technology that has a mix of safety potential and privacy concerns.

While screen recording is new to iOS 11, it isn’t actually a new feature for smartphones. Many devices that use Android operating systems have had the ability to record what’s happening on the screen for some time. Similarly, Apple users were able to record what was happening on the phone with a workaround that included plugging the phone into a computer. But while screen recording isn’t new, Apple has simplified the process, which means that survivors can now more easily record video of abusive behavior, like harassing text messages or threats made over video calls. (For more information on how to document abuse, check out our Documentation Tips resource.) Unfortunately, it also means it’s now easier for abusive people to make recordings that they can use maliciously as a tactic of abuse.

One major concern is that the new screen record button will allow individuals to secretly record Snaps sent using Snapchat. One of the primary selling points of sending a Snap is that it automatically disappears after a person sees it. Previously, the only way for someone who receives a Snap to keep a copy of it was to take a screenshot. To protect against privacy concerns related to screenshots, Snapchat created a feature that informs the sender if a screenshot was taken of their Snap. But the new screen record button is able to record Snaps without alerting the sender.

While this may help survivors of domestic violence document abusive Snaps, it can also be misused by an abusive person, particularly because many people use Snapchat to discuss sexual topics and share intimate images. If these images can be secretly captured, it’s more likely that an abusive person can keep them without the victim’s knowledge and later use the recordings to threaten, blackmail, or otherwise harm the sender.

IMPORTANT: Snapchat is attempting to fix the issue in its latest software update, but the screen record button will still be able to secretly record Snaps if the sender has not installed the latest version of Snapchat.  


  • If you use Snapchat, make sure you have the latest update installed.
  • If you use an Apple device, learn how to use the screen record button after you install iOS 11.
  • Learn more about documenting abusive behavior and talk to a local advocate if you think you’re experiencing abusive behavior (you can find services near you by calling the National Domestic Violence Hotline).
  • If you’re trying to use the screen record button to record a Snap in order to document abusive behavior, be careful: it’s possible the abusive person could be made aware that you have recorded the Snap, which may place you in danger.
  • Recording another person (in person, on the phone, or on a video call) without their permission is illegal in some states. If you do decide to use the recording feature to record another person, it is important that you comply with your state’s recording laws. Check here to learn about your state’s recording laws.

Also – always remember that it’s never ok for someone to take pictures or videos of you without your consent, coerce you to take and send images or videos, or keep images or videos you sent privately when you expected them to be deleted. If you are concerned that somebody has inappropriately taken or retained pictures or videos of you, please contact us or reach out to the Cyber Civil Rights Initiative.



Safety Check

If you think your activities (online and offline) are being monitored, you are probably right. People who are abusive often want to know their victim’s every move and interaction. If this is something you’re experiencing, it’s important to think through how they might be tracking your online activity. These tips can help you think through how to access information online more safely:

  • Computers, mobile devices, and online accounts store a lot of private information about what you view online – the websites you visit (like this one), the things you search for, the emails and instant messages you send, the online videos you watch, the things you post on social media, the online phone or IP-TTY calls you make, your online banking and purchasing, and many others. 
  • If your mobile device or computer is easily accessible to the abuser, be careful how you use it. You may want to keep using those devices for activities that won’t trigger violence – like looking up the weather – and find safer devices (like a public computer at the library) to look up information about how to get help.
  • If the person who is abusive has access to your online accounts (social media, email, phone bill, etc), or has had access to them in the past, it is often helpful to update the usernames and passwords for those accounts from a safer device.
  • You can also set up a new email address that they aren’t aware of, and connect your online accounts to it (rather than the old email address they know). It can be helpful to make the new address something that is more anonymous, instead of using your actual name or a handle you are already known by.
  • Keep in mind, if you think you are being monitored, it might be dangerous to suddenly stop your online activity or to cut off the abuser’s access to your accounts. You may want to keep using the monitored devices or accounts for activities that won’t trigger violence, and use safer devices and accounts to look up information about how to get help or to communicate with people privately.
  • Email, instant messaging, and text messaging with domestic violence agencies leave a detailed digital trail of your communication, and can increase the risk that your abuser will know not only that you communicated, but the details of what you communicated. When possible, it’s best to call a hotline. If you use email, instant messaging, or text messaging, try to do so on a device and account that the abuser doesn’t know about or have access to, and remember to erase any messages you don’t want the abusive partner to see.

Check out NNEDV’s Technology Safety & Privacy Toolkit for Survivors for more important information.



So, You Wanna Build an App? App Security

This post is part of the “So You Wanna Build an App” series. The other posts include: “What to Consider Before Developing an App,” “Know Your Audience,” and “Safety First.” This series is based on lessons we learned when developing the NNEDV Tech Safety App, and in reviewing dozens of apps created for victims of domestic violence, sexual assault, and stalking. Our reviews can be found in the App Safety Center.

In the “Safety First” post, we talked about how to minimize risks for users when you build the app. Another concern that app developers must be aware of is security—both security of the app itself and security of the data that the app collects from users.

Minimize User Data & Secure What You Store

User data can include anything from asking users to create an account with a username and password to asking users to upload and store evidence of abuse. The first step to data security is to only collect the information needed in order to provide the service. Don’t ask for data you don’t need. For example, some apps require users to create an account when there is no obvious need for an account. Other apps require access to information on the device, such as the user’s contact list and calendar, even when that information has no relevance to the functionality of the app.

Also remember that some types of data are more sensitive than others. Sensitive data includes personally identifying information like name, birthdate, location, health/mental health information, and documentation of abuse. The exposure of sensitive data can have dangerous consequences for the survivor if it’s discovered by the abuser. For this reason, securing sensitive data from unintentional disclosure is crucial.

Develop your app in a way that doesn’t require users to share personal information, or that offers users multiple ways to opt into or out of sharing personal information. For example, some safety apps allow users to contact someone through the app. Develop the app so the user can manually type in the contact information, rather than requiring that the app be connected to their contact list. Also remember: if your app is designed to inform 2 or 3 contacts when the survivor needs help, the app does not need access to the entire address book. This is also helpful because some users may want to input a safety contact, such as their domestic violence advocate or private attorney, who isn’t in their contact list.
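As a sketch of that data-minimizing design (all names and the contact limit here are hypothetical, not taken from any real app):

```python
# Hypothetical sketch: store only the few contacts the user types in,
# instead of requesting permission to read the device's address book.
from dataclasses import dataclass, field

MAX_CONTACTS = 3  # notifying 2 or 3 trusted people doesn't require more

@dataclass
class EmergencyContacts:
    entries: list = field(default_factory=list)

    def add(self, label: str, phone: str) -> None:
        """Accept a manually entered contact; no address-book access needed."""
        if len(self.entries) >= MAX_CONTACTS:
            raise ValueError("only a small number of trusted contacts is kept")
        self.entries.append((label, phone))

contacts = EmergencyContacts()
contacts.add("advocate", "555-0100")  # can be someone NOT in the phone's contacts
contacts.add("attorney", "555-0101")
assert len(contacts.entries) == 2
```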

App Security

For apps that collect no or minimal data from their users, the security issues are more about the app itself. Some apps are built to function fully on the device, where all the content is accessible via the downloaded app. Other apps require users to retrieve information online. Depending on how the online content is hosted, if someone were covertly watching the internet traffic, they might be able to find out the names of the websites and other content being accessed. Think about where your online content is hosted and how that information is retrieved. As an example, in order to protect survivors, all of the videos on our Tech Safety App are hosted on a secure server, and the files are named in a way that obscures what they are in case someone is covertly watching the internet traffic.
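One way to obscure hosted file names is to derive an opaque name from a secret plus the descriptive name. The sketch below is purely illustrative (the secret and file names are made up, and this is not necessarily the scheme any particular app uses):

```python
# Hypothetical file-naming sketch: an observer who sees the request sees
# only an opaque name, not what the video is about.
import hashlib

SECRET = b"example-server-secret"  # hypothetical; kept server-side in practice

def opaque_name(descriptive_name: str, ext: str = ".mp4") -> str:
    """Derive a stable but meaningless file name from a descriptive one."""
    digest = hashlib.sha256(SECRET + descriptive_name.encode()).hexdigest()
    return digest[:16] + ext

name = opaque_name("documenting-abuse-video")
assert "abuse" not in name  # the topic is no longer visible in the name
```

Serving content over HTTPS also helps: an observer can typically see the server's hostname, but not the individual file paths being requested.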

Have a Security Framework and Policy

Anytime you ask users to share personal information with you, you need to know (and let them know) how you’ll keep that data secure. The security framework should encompass every level of engagement – from the time they share their information (account creation, uploading/downloading content) to when you store that information (on secure and encrypted servers) to how (and how often) you destroy content. Your security policy should be clear and posted where users can easily review it. It should also be very clear about when and how you might share their information with third parties such as law enforcement or courts.

Educate Users on Security

If your app encourages people to use third-party cloud storage like Dropbox to store personal information gathered via your app, provide tips and education on good security practices. Where appropriate, teach users to use strong passwords and multi-factor authentication. The better they understand the risks, and how to minimize those risks, the better they can navigate them and develop stronger safety strategies.
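For instance, in-app guidance might show users what a strong password looks like by generating one. The sketch below uses Python's standard library and is only an illustration; real user education should also point to password managers and multi-factor authentication.

```python
# Minimal sketch: generate a strong random password using a
# cryptographically secure source of randomness.
import secrets
import string

def strong_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = strong_password()
assert len(pw) == 16  # long, random, and unique to this account
```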

Thanks for reading this blog series! If you’re still curious for more, you can find great information on our website:

  • Technology Safety and Privacy: A Toolkit for Survivors
  • Agency’s Use of Technology: Best Practices & Policies
  • App Safety Center

Speaking of apps – check out NNEDV’s Tech Safety App! DC-based company 3Advance developed the CMS infrastructure and created the multi-platform mobile apps to bring to life the NNEDV Tech Safety App. If you’re an app developer or a victim service provider working with an app developer, be sure to check out our Considerations for App Developers resource!

So, You Wanna Build an App? Safety First

This post is part of the “So You Wanna Build an App” series. The other posts include: “What to Consider Before Developing an App,” “Know Your Audience,” and “App Security.” This series is based on lessons we learned when developing the NNEDV Tech Safety App, and in reviewing dozens of apps created for victims of domestic violence, sexual assault, and stalking. Our reviews can be found in the App Safety Center.

Minimizing safety risks for victims of abuse who use your app is a daunting but crucial process. Remember that survivors may be in crisis, in danger, or have someone monitoring their device when they’re using your app. This post discusses how you can address and minimize some of these safety risks.

Your App Could Be a Safety Risk

Victims of abuse are most at risk when they attempt to leave their abusive partner or try to limit the abuser’s control. Simply having a safety app on their device could indicate that the victim is seeking information or help, and the abuser could escalate his/her control and abuse. While you can’t remove that risk entirely, it’s important to consider ways to address and minimize it.

Inform the User

The first step is to inform the user of possible dangers and risks they might face if they download your app. Some survivors may be aware that their devices are being monitored and know to be careful about what they download, but others may never have thought about that risk before, and may not have considered that the abuser could see the app and discover that they are seeking help.

This reminder should take place before they download the app. It should be noted in the app store description, and in other places that describe the app. For example, the Tech Safety App provides notices about potential monitoring by abusive partners and suggests that users only access the app from a safer device. These notices are available on the app’s informational website, in the app description in both the Apple App Store and Google Play Store, and as part of the onboarding process after someone downloads the app. These reminders both inform potential users of the risks of downloading the app and encourage them to wait until they are on a safer device.

Other Safety Strategies That May or May Not Work

·       Quick Escape – Most websites for survivors of abuse have a “Quick Escape” or “Exit” button so that they can leave the site quickly if they’re worried that someone is monitoring their internet use. However, this can be a challenge for apps, since an exit button takes up valuable screen space. It’s also often unnecessary, because it’s usually very easy to quickly close an app. Since building an “Exit” button throughout an app isn’t practical, the best way to address possible monitoring is to warn users before they download the app.

·       Disguised Apps – Some apps have been designed to look like something else, such as a news app or a calculator, but are actually apps to help domestic violence or sexual assault survivors. While it might be helpful for the icon to be disguised so that it doesn’t raise the suspicions of an abusive partner, there can also be significant challenges with this strategy. The Apple App Store either doesn’t allow these types of apps or requires an explanation of what the app actually is in the app description, which may defeat the purpose of it being disguised. App users also won’t be able to find the app unless they know exactly what it’s called and what the icon looks like. If the icon changes as part of the update process and the survivor doesn’t notice, this may make the app hard to find, or may lead to accidental deletions. Survivors may also forget the fake name if they download the app and don’t use it regularly, making it difficult to find in a time of crisis. Moreover, if someone happens to open the app on the phone, they’ll know that it isn’t whatever the app is pretending to be.

In some cases, app developers may actually build the disguised app and hide domestic violence/sexual assault content within the app. While this might minimize the risk of someone opening the app and immediately seeing the domestic violence/sexual assault content, it might be harder for users to access hidden content easily and quickly.

·       Passwords – Some apps will use a password to protect the app (or parts of the app) so that only someone with the password can access it. This strategy does work to a certain extent, particularly if there’s private or sensitive information the survivor wants to keep protected in case someone goes through the device. Just keep in mind that a password-protected app might raise the suspicions of the abusive person if he or she is used to having full control over the device. This strategy might be best for someone whose abuser generally doesn’t have access to the device, but who wants additional privacy protection for the information she/he is accessing or storing. Having this as a security option rather than a default setting can be helpful for survivors, because it lets them individualize the app based on their unique circumstances.
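If you do build a password or PIN option, store only a salted, slowly-derived hash of the secret, never the secret itself. The sketch below is an illustrative example under that assumption (the function names are ours, not from any particular app):

```python
import hashlib
import hmac
import os

def hash_pin(pin, salt=None):
    """Derive a salted PBKDF2 hash of the PIN, so the PIN itself is
    never written to storage and can't be read off the device."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)
    return salt, digest

def verify_pin(pin, salt, digest):
    """Re-derive the hash and compare in constant time (compare_digest)
    to avoid leaking information through timing differences."""
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

# Store (salt, digest) on first setup; check entered PINs against them later.
salt, digest = hash_pin("4821")
```

The high iteration count deliberately slows down each guess, which matters because short PINs are otherwise easy to brute-force if someone extracts the app’s data.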

Be Aware of Unintentional Access to App Content

There are many ways that app content can be accessed without the knowledge of the survivor, simply by the way the device may be connected to other technologies. For example, some devices are set up to automatically connect to smart TVs, speakers, or cars via Bluetooth. If your app contains multimedia, build the app so that files don’t automatically start playing when the device connects to a speaker or other technology. Also consider naming multimedia files in a way that doesn’t reveal anything if someone happens to see the file name on a media player.

Safety and Privacy When Collecting Sensitive Information

Some safety apps encourage users to store personal information either in the app itself or in the cloud via the app. This might include contact information, a journal logging the abuse, and photo, video, or audio evidence of abuse. It’s critical that users of these apps are notified of the safety risks involved in storing information this way. If the information is stored on the device, users should be warned that anyone with access to the device might be able to see the content.

Additionally, if your app collects and stores any private information connected to its users, you should have a privacy and security policy that clearly explains what information the app is collecting, why it is being collected, and who has access to it. If your app is using a third-party service to store the information, or if it shares the information with another company, it’s vital to let users know how to find that third-party’s privacy and security policies.

In cases where personal information is being stored on the user’s own cloud-based service, such as Dropbox, they should be notified of the related privacy and security risks. Many users don’t know how easily cloud-based services can be accessed. If the abusive person knows the victim’s password or has access to a device the account syncs with, all of the information stored could be easily accessed, manipulated, or deleted. If your app encourages users to use their personal cloud storage service, provide them with information about how they can increase their privacy and security when using these services.


So, You Wanna Build an App? Know Your Audience

This post is part of the “So You Wanna Build an App” series. The other posts include: “What to Consider Before Developing an App,” “Safety First,” and “App Security.” This series is based on lessons we learned when developing the NNEDV Tech Safety App, and in reviewing dozens of apps created for victims of domestic violence, sexual assault, and stalking. Our reviews can be found in the App Safety Center.

If you’re building an app for survivors of abuse, your mantra should always be: first, do no harm. Survivors of abuse may be using your app in the middle of a crisis, or while looking for help to escape a violent situation. Although you can’t predict how someone will use your app, you can minimize harm by building an app that takes the unique needs of your audience into consideration. Below are some tips for doing just that.

Don’t Create a False Sense of Security

Because survivors may rely on your app to help them find safety or to get time-critical information, the app needs to work as intended. Unfortunately, many apps created for survivors are so complex they often don’t work the way the designers intended. We tested dozens of apps whose sole promise was to locate a victim in danger, but many of them didn’t reliably show an exact location. Sometimes it was off by a few houses, and sometimes by a few miles. If your app promises personal security and safety as a key function, you have to make sure it works accurately every time, and in every environment (rural/suburban/urban).

Don’t Overpromise

Carefully market your app, and be sure not to imply that it does more than it actually can. We’ve seen many safety apps created for victims of abuse that are marketed with claims that are blatantly false, and that (unethically) try to appeal to the victim’s need for safety. Some of these marketing ploys include: “#1 Prevention of Sexual Assault!”, “You’ll never be in danger again,” and “It’s like having a police officer in your pocket.” Even though the developers may have had good intentions, not only are these claims unrealistic, they can be dangerous if someone were to accept them as true.

Such claims may keep the user from thinking through other safety measures they could take. If users believe that your app is the only safety strategy they need, you’ve likely created a false sense of security that can result in unintended danger to the victim. Moreover, if someone has the app and does get assaulted, it can contribute to victim blaming – accusations that the victim had a safety app that could have prevented the assault, if only they’d used it properly. Simply put: don’t tell victims the app will keep them safe. There is no app that can stop an abusive partner from trying to harm their victim – the only people who can stop that from happening are abusers themselves.

Be Accurate About Your Information

Because the app is created for someone who might be in danger, make sure that the information in your app is accurate. Resources should link to accurate phone numbers or websites, and be appropriate for your intended audience. For example, if your app is for victims of domestic violence, list local domestic violence programs in the resources section, rather than listing general health services. Remember that even if your app is meant for a specific location (such as your city) or population (such as teens), anyone can download the app, so resources should either be applicable to all users (you can include the National Hotline in addition to the local hotline numbers) or clearly state who can use them. Double- and triple-check the information you’ve listed (websites, phone numbers, and other contact information) to make sure it’s correct, and make this a part of your ongoing maintenance plan.

Be Accurate in Your Language

If you don’t have expertise in the dynamics of domestic violence, sexual assault, or stalking, work with experts in those fields to develop your content and to ensure your language is correct and appropriate. Domestic violence, sexual assault, and stalking are all very nuanced issues, and users could be in a traumatized state of mind when they’re using your app, so your content needs to be sensitive. Victims may not yet have the words or definitions to explain what they’re experiencing, and the way you describe it may have a major impact on their understanding.
