Virtual Abuse

Facebook announced updates to its policies to better protect those who may be vulnerable to virtual abuse.

Facebook is intensifying efforts to make everyone feel safe to engage and connect with their communities. The social media giant has tightened its norms, improved detection capabilities, and updated tools to prevent bullying and harassment on its platform, and when abuse does happen, the company acts. To combat virtual abuse, Facebook removes content that violates its policies and disables the accounts of people who repeatedly break its rules. Facebook regularly pressure-tests these policies with the company’s safety experts, making changes as needed. This article covers the main steps Facebook has taken to fight virtual abuse.

On National Bullying Prevention and Awareness Day in the US, Facebook announced updates to its global bullying and harassment policies to better protect members of the community, particularly those who may be vulnerable to virtual abuse.

Combating Coordinated Mass Harassment

Facebook has launched a new policy that helps protect people from mass harassment and intimidation carried out by multiple accounts. Facebook will now remove coordinated efforts of mass harassment that target individuals at heightened risk of offline harm, for example, victims of violent tragedies or government dissidents, even if the content on its own would not violate the company’s policies. Facebook will also remove objectionable content that is considered mass harassment toward any individual on personal surfaces, such as direct messages in their inbox or comments on personal profiles or posts. Facebook will require additional information or context to enforce this new policy.

In addition, Facebook will also remove state-linked and adversarial networks of accounts, Pages, and Groups that work together to harass or silence people, for example, a state-sponsored organization using closed private groups to coordinate mass posting on dissident profiles.

Based on feedback from a large number of global stakeholders, Facebook will now also remove the following content when it targets public figures:

  • Severe sexualizing content
  • Profiles, Pages, Groups, or events dedicated to sexualizing the public figure
  • Derogatory, sexualized photoshopped images and drawings
  • Attacks through negative physical descriptions that are tagged to, mentioned, or posted on the public figure’s account
  • Degrading content depicting individuals in the process of bodily functions

In addition, Facebook will remove unwanted sexualized commentary and repeated content that is sexually harassing. Because what is “unwanted” can be subjective, Facebook relies on additional context from the individual experiencing the abuse to take action. Facebook made these changes because attacks like these can weaponize a public figure’s appearance, which is unnecessary and often not related to the work these public figures represent.

Facebook also recognizes that becoming a public figure isn’t always a choice and that this fame can increase the risk of bullying and harassment, particularly if the person comes from an underrepresented community, such as women, people of color, or the LGBTQ community. Consistent with the commitments made in the company’s corporate human rights policy, Facebook now offers more protections for public figures like journalists and human rights defenders who have become famous involuntarily or because of their work. These groups will now have the same protections from harmful content, for example, content that ranks their physical appearance, as other involuntary public figures do. The full list of protections for public figures, including involuntary public figures, can be found in the company’s Community Standards.

Consulting Community and Global Stakeholders on These Changes

In updating the company’s policies, Facebook consulted a diverse set of global stakeholders, including free speech advocates, human rights experts, women’s safety groups, the company’s Women’s Safety Expert Advisors, cartoonists and satirists, female politicians and journalists, representatives of the LGBTIQ+ community, content creators, and public figures. Facebook will continue to work with experts and listen to members of its community to ensure its platforms remain safe.

The company’s policies complement tools Facebook has built into its apps to prevent, stop, and report bullying and harassment online. These tools empower people to manage unwanted or abusive interactions, such as blocking or unfollowing someone on Facebook and Instagram, and Restrict, Hidden Words, and Limits on Instagram. To learn more about this work and the resources Facebook has developed with experts to reduce this type of behavior online, visit the company’s Bullying Prevention Hub in its Safety Center, developed in partnership with the Yale Center for Emotional Intelligence.