Protecting Our Community from Abuse on Instagram

Today, we’re announcing new features to help protect people from abuse on Instagram:

  • The ability for people to limit comments and DM requests during sudden spikes of attention
  • Stronger warnings when people try to post potentially offensive comments
  • The global rollout of our Hidden Words feature, which allows people to filter abusive DM requests 

We have a responsibility to make sure everyone feels safe on Instagram. We don’t allow hate speech or bullying, and we remove it whenever we find it. We also want to protect people from experiencing this abuse in the first place. That’s why we constantly listen to feedback from experts and our community, and develop new features that give people more control over their experience on Instagram and help protect them from abuse.

Easily Preventing Unwanted Comments and DMs with Limits

Screenshot of Limits

To help protect people when they experience or anticipate a rush of abusive comments and DMs, we’re introducing Limits: a feature that’s easy to turn on and that automatically hides comments and DM requests from people who don’t follow you, or who only recently followed you.

We developed this feature because creators and public figures sometimes experience sudden spikes of comments and DM requests from people they don’t know. In many cases this is an outpouring of support — like if they go viral after winning an Olympic medal. But sometimes it can also mean an influx of unwanted comments or messages. Now, you can turn on Limits to avoid that influx.

Our research shows that a lot of negativity towards public figures comes from people who don’t actually follow them, or who have only recently followed them, and who simply pile on in the moment. We saw this after the recent Euro 2020 final, which resulted in a significant and unacceptable spike in racist abuse towards players. Creators also tell us they don’t want to switch off comments and messages completely; they still want to hear from their community and build those relationships. Limits allows you to hear from your long-standing followers, while limiting contact from people who might only be coming to your account to target you.

Limits will be available to everyone on Instagram globally from today. Go to your privacy settings to turn it on or off, whenever you want. We’re also exploring ways to detect when you may be experiencing a spike in comments and DMs so that we can prompt you to turn on Limits. 

Showing Stronger Warnings to Discourage Harassment

Screenshot of warning when someone tries to post a potentially offensive comment

We already show a warning when someone tries to post a potentially offensive comment. And if they try to post potentially offensive comments multiple times, we show an even stronger warning — reminding them of our Community Guidelines and warning them that we may remove or hide their comment if they proceed. Now, rather than waiting for the second or third comment, we’ll show this stronger message the first time.

Combating Abuse in DMs and Comments

To help protect people from abuse in their DM requests, we recently announced Hidden Words, which automatically filters DM requests containing offensive words, phrases and emojis into a Hidden Folder that you never have to open if you don’t want to. It also filters DM requests that are likely to be spammy or low-quality. We launched this feature in a handful of countries earlier this year, and it will be available to everyone globally by the end of this month. We’ll continue to encourage accounts with large followings to use it, with messages both in their DM inbox and at the front of their Stories tray.

We’ve expanded the list of potentially offensive words, hashtags and emojis that we automatically filter out of comments, and we’ll continue updating it frequently. We also recently added Hide More Comments, an opt-in option that hides comments which may be harmful, even if they don’t break our rules.

Continuing the Fight Against Online Abuse

We hope these new features will better protect people from seeing abusive content, whether it’s racist, sexist, homophobic or any other type of abuse. We know there’s more to do, including improving our systems to find and remove abusive content more quickly, and holding those who post it accountable. We also know that, while we’re committed to doing everything we can to fight hate on our platform, these problems are bigger than us. We will continue to invest in organizations focused on racial justice and equity, and look forward to further partnership with industry, governments and NGOs to educate and help root out hate. This work remains unfinished, and we’ll continue to share updates on our progress.