Community Standards

The Community Standards outline what is and isn't allowed on Facebook, Instagram, Messenger and Threads.

Introduction

Every day, people use Facebook, Instagram, Messenger and Threads to share their experiences, connect with friends and family, and build communities. Our services enable billions of people to freely express themselves across countries and cultures and in dozens of languages.

Meta recognizes how important it is for Facebook, Instagram, Messenger and Threads to be places where people feel empowered to communicate, and we take our role seriously in keeping abuse off the service. That’s why we developed standards for what is and isn’t allowed on these services.

These standards are based on feedback from people and the advice of experts in fields like technology, public safety and human rights. To ensure everyone’s voice is valued, we take great care to create standards that include different views and beliefs, especially from people and communities that might otherwise be overlooked or marginalized.

Please note that the US English version of the Community Standards reflects the most up-to-date set of policies and should be used as the primary document.

Our commitment to voice

The goal of our Community Standards is to create a place for expression and give people a voice. Meta wants people to be able to talk openly about the issues that matter to them, whether through written comments, photos, music, or other artistic mediums, even if some may disagree or find them objectionable. In some cases, we allow content—which would otherwise go against our standards—if it’s newsworthy and in the public interest. We do this only after weighing the public interest value against the risk of harm, and we look to international human rights standards to make these judgments. In other cases, we may remove content that uses ambiguous or implicit language when additional context allows us to reasonably understand that the content goes against our standards.

Our commitment to expression is paramount, but we recognize the internet creates new and increased opportunities for abuse. For these reasons, when we limit expression, we do it in service of one or more of the following values:

AUTHENTICITY

We want to make sure the content people see is authentic. We believe that authenticity creates a better environment for sharing, and that’s why we don’t want people using our services to misrepresent who they are or what they’re doing.

SAFETY

We’re committed to making Facebook, Instagram, Messenger and Threads safe places. We remove content that could contribute to a risk of harm to the physical security of persons. Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on our services.

PRIVACY

We’re committed to protecting personal privacy and information. Privacy gives people the freedom to be themselves, choose how and when to share on our services and connect more easily.

DIGNITY

We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others.

Community Standards

Our Community Standards apply to everyone, all around the world, and to all types of content, including AI-generated content.

Each section of our Community Standards starts with a “Policy Rationale” that sets out the aims of the policy followed by specific policy lines that outline:

  • Content that's not allowed; and
  • Content that requires additional information or context to enforce on, content that is allowed with a warning screen, or content that is allowed but can only be viewed by adults aged 18 and older.

Violence and Incitement

Policy Rationale

We aim to prevent potential offline violence that may be related to content on our platforms. While we understand that people commonly express disdain or disagreement by threatening or calling for violence in non-serious and casual ways, we remove language that incites or facilitates violence and credible threats to public or personal safety. This includes violent speech targeting a person or group of people on the basis of their protected characteristic(s) or immigration status. We remove content, disable accounts and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety. We also try to consider the language and context in order to distinguish casual or awareness-raising statements from content that constitutes a credible threat to public or personal safety. In determining whether a threat is credible, we may also consider additional information such as a person's public visibility and the risks to their physical safety.

In some cases, we see aspirational or conditional threats of violence, including expressions of hope that violence will be committed, directed at terrorists and other violent actors (e.g., “Terrorists deserve to be killed,” “I hope they kill the terrorists”). We deem those non-credible, absent specific evidence to the contrary.

We Remove:

We remove threats of violence against various targets. Threats of violence are statements or visuals representing an intention, aspiration, or call for violence against a target, and threats can be expressed in various types of statements such as statements of intent, calls for action, advocacy, expressions of hope, aspirational statements and conditional statements.

We do not prohibit threats when they are shared in an awareness-raising or condemning context, when less severe threats are made in the context of contact sports, or when certain threats are made against violent actors, like terrorist groups.

Universal protections for everyone
Everyone is protected from the following threats:

  • Threats of violence that could lead to death (or other forms of high-severity violence)
  • Threats of violence that could lead to serious injury (mid-severity violence). We remove such threats against public figures and groups not based on protected characteristics when credible, and we remove them against any other targets (including groups based on protected characteristics) regardless of credibility
  • Admissions to high-severity or mid-severity violence (in written or verbal form, or visually depicted by the perpetrator or an associate), except when shared in a context of redemption, self-defense, contact sports (mid-severity or less), or when committed by law enforcement, military or state security personnel
  • Threats or depictions of kidnappings or abductions, unless it is clear that the content is being shared by a victim or their family as a plea for help, or shared for informational, condemnation or awareness-raising purposes

Additional protections for Private Adults, All Children, high-risk persons and persons or groups based on their protected characteristics:
In addition to the universal protections for everyone, all private adults (when self-reported), children and persons or groups of people targeted on the basis of their protected characteristic(s), are protected from threats of low-severity violence.

Other Violence
In addition to all of the protections listed above, we remove the following:

  • Content that asks for, offers, or admits to offering services of high-severity violence (for example, hitmen, mercenaries, assassins, female genital mutilation) or advocates for the use of these services
  • Instructions on how to make or use weapons where there is language explicitly stating the goal to seriously injure or kill people, or imagery that shows or simulates the end result, unless with context that the content is for a non-violent purpose such as educational self-defense (for example, combat training, martial arts) and military training
  • Instructions on how to make or use explosives, unless with context that the content is for a non-violent purpose such as recreational uses (for example, fireworks and commercial video games, fishing)
  • Threats to take up weapons or to bring weapons to a location or forcibly enter a location (including but not limited to places of worship, educational facilities, polling places or locations used to count votes or administer an election), or locations where there are temporary signals of a heightened risk of violence.
  • Threats of violence related to voting, voter registration, or the administration or outcome of an election, even if there is no target.
  • Glorification of gender-based violence that is either intimate partner violence or honor-based violence

For the following Community Standards, we require additional information and/or context to enforce:

We Remove:

  • Threats against law enforcement officers or election officials, regardless of their public figure status or credibility of the threat.

  • Coded statements where the method of violence is not clearly articulated, but the threat is veiled or implicit, as shown by the combination of both a threat signal and a contextual signal from the lists below.
    • Threat: a coded statement that is one of the following:
      • Shared in a retaliatory context (e.g., expressions of desire to engage in violence against others in response to a grievance or threat that may be real, perceived or anticipated)
      • References to historical or fictional incidents of violence (e.g., content that threatens others by referring to known historical incidents of violence that have been committed throughout history or in fictional settings)
      • Acts as a threatening call to action (e.g., content inviting or encouraging others to carry out violent acts or to join in carrying out the violent acts)
      • Indicates knowledge of or shares sensitive information that could expose others to violence (e.g., content that either makes note of or implies awareness of personal information that might make a threat of violence more credible. This includes implying knowledge of a person's residential address, their place of employment or education, daily commute routes or current location)
    • Context: one of the following:
      • Local context or expertise confirms that the statement in question could lead to imminent violence.
      • The target of the content or an authorized representative reports the content to us.
      • The target is a child.

  • Implicit threats to bring armaments to locations, including but not limited to places of worship, educational facilities, polling places or locations used to count votes or administer an election (or encouraging others to do the same) or locations where there are temporary signals of a heightened risk of violence.

  • Claims or speculation about election-related corruption, irregularities, or bias when combined with a signal that content is threatening violence (e.g., threats to take up or bring a weapon, visual depictions of a weapon, references to arson, theft, vandalism), including:
    • Targeting individual(s)
    • Targeting a specific location (state or smaller)
    • Where the target is not explicit

  • References to election-related gatherings or events when combined with a signal that content is threatening violence (e.g., threats to take up or bring a weapon, visual depictions of a weapon, references to arson, theft, vandalism).

  • Threats of high- or mid-severity violence in the defense of self or another human when all of the following criteria are met:
    • Against a person (excluding persons identifiable by name or face, people targeted based on their protected characteristics, and children)
    • In the context of home entry or interpersonal violence that is proportional to the violence responded to and is an immediate threat
    • The potential impact on voice outweighs the risk of imminent violence


Dangerous Organizations and Individuals

Policy Rationale

In an effort to prevent and disrupt real-world harm, we do not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on our platforms. We assess these entities based on their behavior both online and offline, most significantly, their ties to violence. Under this policy, we designate individuals, organizations, and networks of people. These designations are divided into two tiers that indicate the level of content enforcement, with Tier 1 resulting in the most extensive enforcement because we believe these entities have the most direct ties to offline harm.

Tier 1 focuses on entities that engage in serious offline harms - including organizing or advocating for violence against civilians, repeatedly dehumanizing or advocating for harm against people based on protected characteristics, or engaging in systematic criminal operations. Tier 1 includes hate organizations; criminal organizations, including those designated by the United States government as Specially Designated Narcotics Trafficking Kingpins (SDNTKs); and terrorist organizations, including entities and individuals designated by the United States government as Foreign Terrorist Organizations (FTOs) or Specially Designated Global Terrorists (SDGTs). We remove Glorification, Support, and Representation of Tier 1 entities, their leaders, founders or prominent members, as well as unclear references to them.

In addition, we do not allow content that glorifies, supports, or represents events that Meta designates as violating violent events - including terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, serial murders, or hate crimes. Nor do we allow (1) Glorification, Support, or Representation of the perpetrator(s) of such attacks; (2) perpetrator-generated content relating to such attacks; or (3) third-party imagery depicting the moment of such attacks on visible victims. We also remove content that Glorifies, Supports or Represents ideologies that promote hate, such as Nazism and white supremacy. We remove unclear references to these designated events or ideologies.

Tier 2 includes Violent Non-State Actors that engage in violence against state or military actors in an armed conflict but do not intentionally target civilians. It also includes Violence Inducing Entities that are engaged in preparing or advocating for future violence but have not necessarily engaged in violence to date. These are also entities that may repeatedly engage in violations of our Hate Speech or Dangerous Organizations and Individuals policies on or off the platform. We remove Glorification, Material Support, and Representation of these entities, their leaders, founders or prominent members.

We recognize that users may share content that includes references to designated dangerous organizations and individuals in the context of social and political discourse. This includes content reporting on, neutrally discussing or condemning dangerous organizations and individuals or their activities.

News reporting includes information that is shared to raise awareness about local and global events in which designated dangerous organizations and individuals are involved.

  • E.g. “Breaking News: Al-Shabab claimed responsibility for the attack in Somalia”
  • E.g. “Timeline and expert analysis: How the shooting at the Buffalo Supermarket unfolded and what did the perpetrator say in court”

Neutral discussion includes factual statements, commentary, questions, and other information that do not express positive judgment around the designated dangerous organization or individual and their behavior.

  • E.g. “Al Qaeda represents less threat than ISIS given the lack of leadership and finance”
  • E.g. “Anders Breivik is one example of how complex the radicalization process can be”

Condemnation includes disapproval, disgust, rejection, criticism, mockery, and other negative expressions about a designated dangerous organization or individual and their behavior.

  • E.g. “I feel disgusted by the crime of Salvador Ramos. The judge’s words resonated so much to me. He should get no mercy by the court”
  • E.g. “Hitler’s crimes shall never be forgotten ever. These were some of the darkest moments in history”

Our policies are designed to allow room for these types of discussions while simultaneously limiting risks of potential offline harm. We thus require people to clearly indicate their intent when creating or sharing such content. If a user's intention is ambiguous or unclear, we default to removing content.

In line with international human rights law, our policies allow discussions about the human rights of designated individuals or members of designated dangerous entities, unless the content includes other glorification, support, or representation of designated entities or other policy violations, such as incitement to violence.

Please see our Corporate Human Rights Policy for more information about our commitment to internationally recognized human rights.

We Remove:

We remove Glorification, Support and Representation of various dangerous organizations and individuals. These concepts apply to the organizations themselves, their activities, and their members. These concepts do not proscribe peaceful advocacy for particular political outcomes.

Glorification, defined as any of the below:

  • Legitimizing or defending the violent or hateful acts of a designated entity by claiming that those acts have a moral, political, logical or other justification that makes them acceptable or reasonable.
    • E.g. "Hitler did nothing wrong."
  • Characterizing or celebrating the violence or hate of a designated entity as an achievement or accomplishment;
    • E.g. “Hizbul Mujahideen is winning the war for a free and independent Kashmir”
  • An aspirational statement of membership or statement that you would like to be a designated entity or the perpetrator of a violating violent event.
    • E.g. “I wish I can join ISIS and be part of the Khilafah”

We remove Glorification of Tier 1 and Tier 2 entities as well as designated events.

For Tier 1 and designated events, we may also remove unclear or contextless references if the user’s intent was not clearly indicated. This includes unclear humor, captionless or positive references that do not glorify the designated entity’s violence or hate.

Support, defined as any of the below:

  • Material Support
    • Any act which improves the financial status of a designated entity – including funneling money towards or away from a designated entity;
      • E.g., “Donate to the KKK!”
    • Any act which provides material aid to a designated entity or event;
      • E.g., “If you want to send care packages to the Sinaloa Cartel, use this address:”
    • Recruiting on behalf of a designated entity or event;
      • E.g., “If you want to fight for the Caliphate, DM me”
  • Other Support
    • Channeling information or resources, including official communications, on behalf of a designated entity or event;
      • E.g., Directly quoting a designated entity without a caption that condemns, neutrally discusses, or is part of news reporting.
    • Putting out a call to action on behalf of a designated entity or event;
      • E.g., “Contact the Atomwaffen Division – (XXX) XXX-XXXX”

We remove all Support of Tier 1 and Material Support of Tier 2.

Representation, defined as any of the below:

  • Stating that you are a member of a designated entity, or are a designated entity;
    • E.g., “I am a grand dragon of the KKK.”
  • Creating a Page, Profile, Event, Group, or other Facebook entity that is or purports to be owned by a Designated Entity or run on their behalf, or is or purports to be a designated event.
    • E.g., A Page named “American Nazi Party.”

We remove Representation of Tier 1 and 2 Designated Organizations and designated events.

Types and Tiers of Dangerous Organizations

Tier 1: Terrorism, organized hate, large-scale criminal activity, attempted multiple-victim violence, multiple victim violence, serial murders, and violating violent events

We do not allow individuals or organizations involved in organized crime, including those designated by the United States government as specially designated narcotics trafficking kingpins (SDNTKs); hate; or terrorism, including entities designated by the United States government as Foreign Terrorist Organizations (FTOs) or Specially Designated Global Terrorists (SDGTs), to have a presence on the platform. We also don't allow other people to represent these entities. We do not allow leaders or prominent members of these organizations to have a presence on the platform, symbols that represent them to be used on the platform, or content that glorifies them or their acts, including unclear references to them. In addition, we remove any support for these individuals and organizations.

We do not allow content that glorifies, supports, or represents events that Meta designates as terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, serial murders, hate crimes or violating violent events. Nor do we allow (1) content that glorifies, supports, or represents the perpetrator(s) of such attacks; (2) perpetrator-generated content relating to such attacks; or (3) third-party imagery depicting the moment of such attacks on visible victims.

We also do not allow Glorification, Support, or Representation of designated hateful ideologies, as well as unclear references to them.

Terrorist organizations and individuals, defined as a non-state actor that:

  • Engages in, advocates, or lends substantial support to purposive and planned acts of violence,
  • Which causes or attempts to cause death, injury or serious harm to civilians, or any other person not taking direct part in the hostilities in a situation of armed conflict, and/or significant damage to property linked to death, serious injury or serious harm to civilians
  • With the intent to coerce, intimidate and/or influence a civilian population, government, or international organization
  • In order to achieve a political, religious, or ideological aim.

Hate Entity, defined as an organization or individual that spreads and encourages hate against others based on their protected characteristics. The entity’s activities are characterized by at least some of the following behaviors:

  • Violence, threatening rhetoric, or dangerous forms of harassment targeting people based on their protected characteristics;
  • Repeated use of hate speech;
  • Representation of Hate Ideologies or other designated Hate Entities, and/or
  • Glorification or Support of other designated Hate Entities or Hate Ideologies.

Criminal Organizations, defined as an association of three or more people that:

  • is united under a name, color(s), hand gesture(s) or recognized indicia; and
  • has engaged in or threatens to engage in criminal activity such as homicide, drug trafficking, or kidnapping.

Multiple-Victim Violence and Serial Murders

  • We consider an event to be multiple-victim violence or attempted multiple-victim violence if it results in three or more casualties in one incident, defined as deaths or serious injuries. Any individual who has committed such an attack is considered to be a perpetrator or an attempted perpetrator of multiple-victim violence.
  • We consider any individual who has committed two or more murders over multiple incidents or locations a serial murderer.

Hateful Ideologies

  • While our designations of organizations and individuals focus on behavior, we also recognize that there are certain ideologies and beliefs that are inherently tied to violence and attempts to organize people around calls for violence or exclusion of others based on their protected characteristics. In these cases, we designate the ideology itself and remove content that supports this ideology from our platform. These ideologies include:
    • Nazism
    • White Supremacy
    • White Nationalism
    • White Separatism
  • We remove explicit Glorification, Support, and Representation of these ideologies, and remove individuals and organizations that ascribe to one or more of these hateful ideologies.

Tier 2: Violent Non-State Actors and Violence Inducing Entities

Organizations and individuals designated by Meta as Violent Non-State Actors or Violence Inducing Entities are not allowed to have a presence on our platforms, or to have a presence maintained by others on their behalf. As these communities are actively engaged in violence against state or military actors in armed conflicts (Violent Non-State Actors), or are preparing for, advocating for, or creating conditions for future violence (Violence Inducing Entities), material support of these entities is not allowed. We will also remove glorification of these entities.

Violent Non-State Actors, defined as any non-state actor that:

  • Engages in a pattern of purposive and planned acts of high-severity violence targeting government, military or other armed groups taking direct part in the hostilities in a situation of armed conflict, and does not intentionally and explicitly target civilians with high-severity violence; AND/OR
  • Deprives communities of access to critical infrastructure or natural resources; AND/OR
  • Engages in a pattern of attacks intended to bring significant damage to infrastructure that is not linked to death, serious injury or serious harm to civilians.

Violence Inducing Entities are defined as follows:

A Violence Inducing Entity (General) is a non-state actor that:

  • Uses weapons as a part of their training, communication, or presence, and is structured or operates as an unofficial military or security force; AND
  • Does one or more of the following:
    • Coordinates in preparation for inter-community violence or civil war; OR
    • Advocates for violence against government officials or violent disruptions of civic events; OR
    • Engages in theft, vandalism, burglary or other damage to property; OR
    • Engages in mid-severity violence at civic events; OR
    • Promotes bringing weapons to a location when the stated intent is to intimidate people amid a protest

A Violence Inducing Conspiracy Network is a non-state actor that:

  • Is identified by a name, mission statement, symbol, or shared lexicon; AND
  • Promotes unfounded theories that attempt to explain the ultimate causes of significant social and political problems, events and circumstances with claims of secret plots by two or more powerful actors; AND
  • Has explicitly advocated for or has been directly linked to a pattern of offline physical harm by adherents motivated by the desire to draw attention to or redress the supposed harms identified in the unfounded theories promoted by the network.

A Hate Banned Entity is a non-state actor that:

  • Engages in repeated hateful conduct or rhetoric, but does not rise to the level of a Tier 1 entity because they have not engaged in or explicitly advocated for violence, or because they lack sufficient connections to previously designated organizations or figures.

For the following Community Standards, we require additional information and/or context to enforce:

  • In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.


Coordinating Harm and Promoting Crime

Policy Rationale

In an effort to prevent and disrupt offline harm and copycat behavior, we prohibit people from facilitating, organizing, promoting or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals. We allow people to debate and advocate for the legality of criminal and harmful activities, as well as draw attention to harmful or criminal activity that they may witness or experience as long as they do not advocate for or coordinate harm.

We Remove:

Harm against people

  • Outing: exposing the identity or locations affiliated with anyone who is alleged to:
    • Be a member of an outing-risk group; and/or
    • Share familial and/or romantic relationships with a member(s) of an outing-risk group; and/or
    • Have performed professional activities in support of an outing-risk group (except for political figures)
  • Outing the undercover status of law enforcement, military, or security personnel if the content contains the agent’s name, their face or badge, and any of the following:
    • The agent’s law enforcement organization
    • The agent’s law enforcement operation
    • Explicit mentions of their undercover status
  • Coordinating, threatening, supporting, or admitting to swatting, except in the context of awareness raising or condemnation, fictional or staged settings, or redemption.
  • Depicting, promoting, advocating for or encouraging participation in a high-risk viral challenge, except in the context of awareness raising or condemnation. Where imagery is depicted in these contexts, we include a label so that people are aware that the content may be sensitive.

Harm against animals

  • Coordinating, threatening, supporting or admitting to acts of physical harm against animals (in written, visual or verbal form), except in cases of:
    • Awareness-raising or condemnation
    • Redemption
    • Survival or defense of self, another human or another animal
    • Fictional or staged settings, EXCEPT where the content depicts staged animal fights or fake animal rescues
    • Hunting or fishing
    • Religious sacrifice
    • Food preparation or processing
    • Pests or vermin
    • Mercy killing
    • Bullfighting
  • Coordinating, threatening, supporting, depicting or admitting to staged animal fights, or depicting video imagery of fake animal rescues, except in the context of awareness raising, condemnation or redemption.

Harm against property

  • Coordinating, threatening, supporting or admitting to vandalism, theft or malicious hacking (in written, visual or verbal form), except in the context of:
    • Awareness raising or condemnation,
    • Redemption,
    • Fictional or staged settings,
    • Admitting in the context of defense of self or another human,
    • Depicting vandalism in a protest context,
    • Depicting graffiti, or
    • Speaking positively about vandalism and theft committed by others.

Voter and/or census fraud

  • Offers to buy or sell votes with cash, gifts, services or other material goods, except if shared in condemning, awareness raising, news reporting, or humorous or satirical contexts.
  • Advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process, except if shared in condemning, awareness raising, news reporting, or humorous or satirical contexts.

For the following content, we include a label so that people are aware the content may be sensitive:

  • Imagery depicting a high-risk viral challenge, if shared to condemn or raise awareness of the associated risks.

For the following Community Standards, we require additional information and/or context to enforce:

We Remove:

  • Outing: exposing the identity of a person and putting them at risk of harm, where the person is one of the following:
    • LGBTQIA+ members
    • Unveiled women
    • Non-convicted individuals identified as predators in the context of a sexual predator sting operation
    • Individuals involved in legal cases, when their involvement is restricted from public disclosure
    • Witnesses, informants, activists, detained persons or hostages
    • Defectors, when reported by a credible government channel
    • Prisoners of war, in the context of an armed conflict

  • Imagery that is likely to deceive the public as to its origin if:
    • The entity depicted or an authorized representative objects to the imagery, and
    • The imagery has the potential to cause harm to members of the public.

  • Statement of intent, call to action, or encouragement to either:

  • Block access to essential services when there is confirmation, including publicly available confirmation, that emergency vehicles are blocked, OR

  • Target an individual or specific group of people by blocking their access to essential services or unobstructed passage in a way that may threaten their safety

  • Voter or census interference, including:

  • Calls for coordinated interference that would affect an individual’s ability to participate in an official election or census.

  • Claims that voting or census participation may or will result in law enforcement consequences (for example, arrest, deportation or imprisonment).

  • Threats to go to an election site to monitor or watch voters or election officials’ activities if combined with a reference to intimidation (e.g., “Let’s show them who’s boss!” or “They want a war? We’ll give them a war.”).

  • Threats to go to a post-election activity site if combined with a reference to intimidation (e.g., “Let’s show them who’s boss!” or “They want a war? We’ll give them a war.”).


Restricted Goods and Services

Policy Rationale

To encourage safety and deter potentially harmful activities, we prohibit attempts by individuals, manufacturers, and retailers to purchase, sell, raffle, gift, transfer or trade certain goods and services on our platform. We do not tolerate the exchange or sale of any drugs that may result in substance abuse covered under our policies below. Brick-and-mortar and online retailers may promote firearms, alcohol, and tobacco items available for sale off of our services; however, we restrict visibility of this content for minors. We allow discussions about the sale of these goods in stores or by online retailers, advocating for changes to regulations of goods and services covered in this policy, and advocating for or concerning the use of pharmaceutical drugs in the context of medical treatment, including discussion of physical or mental side effects.

Restricted Goods and Services consist of the following categories:

  • Drugs and Pharmaceuticals
  • Weapons, Ammunition, or Explosives
  • Tobacco and Related Products
  • Alcohol
  • Health and Wellness
  • Online Gambling and Games
  • Endangered and protected species (wildlife and plants)
  • Historic Artifacts
  • Hazardous Goods and Materials
  • Body Parts and Fluids

Each category is detailed below.

Drugs and Pharmaceuticals

We do not allow:

High-risk drugs (drugs that have a high potential for misuse, addiction, or are associated with serious health risks, including overdose; e.g., cocaine, fentanyl, heroin).

Content that:

  • Attempts to buy, sell, trade, coordinate the trade of, donate, gift or ask for high-risk drugs.
  • Admits to buying, trading or coordinating the trade of high-risk drugs, whether by the poster themselves or through others.
  • Admits to personal use without acknowledgment of or reference to recovery, treatment, or other assistance to combat usage. This content may not speak positively about, encourage use of, coordinate or provide instructions to make or use high-risk drugs.
  • Coordinates or promotes (by which we mean speaks positively about, encourages the use of, or provides instructions to use or make) high-risk drugs.

Non-medical drugs (drugs or substances that are not being used for an intended medical purpose or are used to achieve a high; this includes precursor chemicals or substances used in the production of these drugs).

Content that:

  • Attempts to buy, sell, trade, coordinate the trade of, donate, gift or asks for non-medical drugs.
  • Admits to buying, trading or coordinating the trade of non-medical drugs, whether by the poster themselves or through others.
  • Admits to personal use without acknowledgment of or reference to recovery, treatment, or other assistance to combat usage. This content may not speak positively about, encourage use of, coordinate or provide instructions to make or use non-medical drugs.
  • Coordinates or promotes (by which we mean speaks positively about, encourages the use of, or provides instructions to use or make) non-medical drugs.

Prescription drugs (drugs that require a prescription or medical professionals to administer)

Content that:

  • Attempts to buy, sell or trade prescription drugs except when:
    • Listing the price of vaccines in an explicit education or discussion context.
    • Offering delivery when posted by legitimate healthcare e-commerce businesses.
  • Attempts to donate or gift prescription drugs, except in the event of an economic, health, societal or natural disaster crisis.
  • Asks for prescription drugs, except when content discusses the affordability, accessibility or efficacy of prescription drugs in a medical context.

Entheogens

  • Content that attempts to buy, sell, trade, donate or gift or asks for entheogens.
  • Note: Debating or advocating for the legality or discussing scientific or medical merits of entheogens is allowed.

Cannabis and Cannabis Derived Products

  • Content that attempts to buy, sell, trade, donate or gift or asks for marijuana and products containing THC or related psychoactive components.

For the following content, we restrict visibility to adults 18 years of age and older:

Entheogens

  • Content that shows admission to personal use of, coordinates, or promotes (by which we mean speaks positively about or encourages the use of) entheogens.
    • Except when any of the above occurs in a fictional or documentary context.

Cannabis and Cannabis Derived Products

  • Content that coordinates or promotes (by which we mean speaks positively about, encourages the use of, or provides instructions to use or make) marijuana and products containing THC or related psychoactive components.
  • Content that attempts to buy, sell, trade, donate, gift or ask for ingestible cannabidiol (CBD) or similar cannabinoid products.

Weapons, Ammunition, or Explosives

We do not allow:

Content that:

  • Attempts to buy, sell, or trade firearms, firearm parts, ammunition, explosives, or lethal enhancements except when:

    • Posted by a Page, Group or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, websites, brands or government agencies (e.g. police department, fire department), or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
  • Attempts to donate or gift firearms, firearm parts, ammunition, explosives, or lethal enhancements except when posted in the following contexts:

    • Donating, trading in or buying back firearms and ammunition by a Page, Group or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, websites, brands or government agencies, or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
    • An auction or raffle of firearms by legitimate brick-and-mortar entities, including retail businesses, government-affiliated organizations or non-profits, or private individuals affiliated with or sponsored by legitimate brick-and-mortar entities.
  • Asks for firearms, firearm parts, ammunition, explosives, or lethal enhancements

  • Sells, gifts, exchanges, transfers, coordinates, promotes (by which we mean speaks positively about, encourages the use of) or provides access to 3D printing or computer-aided manufacturing instructions for firearms or firearms parts regardless of context or poster.

  • Attempts to buy, sell, or trade machine gun conversion devices

For the following content, we restrict visibility to adults 21 years of age and older:

Weapons, Ammunition, or Explosives

  • Content posted by or promoting legitimate brick-and-mortar entities, including retail businesses, websites, brands, or government agencies, which attempt to buy, sell, trade, donate or gift (including in the context of an auction or a raffle) firearms, firearm parts, ammunition, explosives, or lethal enhancements.

For the following content, we restrict visibility to adults 18 years of age and older:

Bladed Items:

  • Content that attempts to buy, sell, trade, coordinate, donate, gift or asks for: bladed items and any other weapons (e.g., pepper spray or knuckle rings).

Tobacco and Related Products

We do not allow:

Content that:

  • Attempts to buy, sell or trade tobacco/nicotine related products, or products that simulate smoking, including all kinds of “ENDS” (Electronic Nicotine Delivery Systems) products (e.g., electronic cigarettes, vapes, and nicotine-free vapes).

    • Except when posted by a Page, Group, or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, websites, brands, or a private individual sharing content on behalf of legitimate brick-and-mortar entities, including offering delivery services and brand giveaways.
  • Attempts to donate or gift tobacco/nicotine products, or “ENDS” products.

    • Except when posted by a Page, Group, or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, websites, brands, or a private individual sharing content on behalf of legitimate brick-and-mortar entities, including offering delivery services and brand giveaways.
  • Asks for tobacco/nicotine products, or products that simulate smoking, including all kinds of “ENDS” products (e.g., nicotine-free vapes).

For the following content, we restrict visibility to adults 18 years of age and older:

  • Content posted by or promoting legitimate brick-and-mortar entities, including retail businesses, websites or brands, which attempt to buy, sell, trade, donate or gift alcohol or tobacco products.
  • Content depicting the consumption of tobacco, nicotine products, or “ENDS” products.
  • Content that coordinates or promotes the use of tobacco, nicotine products, “ENDS” products, or tobacco brands.

Alcohol

We do not allow:

Content that:

  • Attempts to buy, sell or trade alcohol except when:

    • Posted by a Page, Group, or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, websites or brands, or a private individual sharing content on behalf of legitimate brick-and-mortar entities, including offering delivery services and brand giveaways.
    • Content refers to alcohol or offers an invitation to an alcohol venue where alcohol will be exchanged or consumed on location at an event, restaurant, bar, or party.
  • Attempts to donate or gift alcohol or tobacco except when posted by a Page, Group, or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, websites or brands, or a private individual sharing content on behalf of legitimate brick-and-mortar entities.

  • Asks for alcohol products and beverages.

For the following content, we restrict visibility to adults 18 years of age and older:

  • Content posted by or promoting legitimate brick-and-mortar entities, including retail businesses, websites or brands, which attempt to buy, sell, trade, donate or gift alcohol products or beverages.
  • Content depicting the consumption of alcohol products or beverages, or sharing recipes for alcoholic beverages.
  • Content referring to alcohol products or offering an invitation to an alcohol venue where alcohol will be exchanged or consumed.

Health and Wellness

For the following content, we restrict visibility to adults 18 years of age and older:

Weight loss products or services

Content that:

  • Attempts to buy, sell, trade, donate, gift, mention, or ask for weight loss products or services.
  • Admits to or depicts using a weight loss product in a favorable context, or discusses its side effects.
  • Shows coordination or promotion (by which we mean speaks positively about, encourages the use of, or provides instructions to use or make) of a diet product.
  • Depicts a before-and-after body-change comparison in the context of weight loss, showcasing weight loss after using a product in a manner that may make people feel bad about their appearance or imply negative self-perception.

Cosmetic Products, Procedures, or Surgeries

Content that:

  • Attempts to buy, sell, trade, donate, gift, mention, or ask for cosmetic products, procedures, or surgeries. This includes:

    • Skin whitening products, such as bleaching creams.
    • Cosmetic procedures with the intention to treat or restore function or structure of people’s faces or bodies.
  • Admits to or depicts using a cosmetic procedure or surgery, highlighting its positive or negative impact or side effects.

  • Shows coordination or promotion (by which we mean speaks positively, encourages the use of or provides instructions to use or perform) of a cosmetic procedure or surgery

  • Depicts the before-and-after transformation of skin conditions after the usage of a cosmetic product, procedure, or surgery in a manner that may make people feel bad about their appearance or imply negative self-perception.

Note: Fitness services such as Pilates, and temporary cosmetics such as makeup are not covered by this policy.

Adult sexual arousal products

Content that:

  • Attempts to buy, sell, promote, trade, donate, gift or ask for adult sexual arousal products that can stimulate a person’s sexual pleasure or increase a person’s sexual arousal. This includes:
    • Sex toys
    • Erotic products
    • Non-surgical genital enhancement products, such as products that stimulate sexual desire or improve sexual performance
    • Products where the primary focus is to stimulate sexual desire or arousal

Online Gambling and Games

For the following content, we restrict visibility to adults 18 years of age and older:

Online Gambling and Games

  • Content that attempts to sell, trade, depict or promote online gaming and gambling services where anything of monetary value (including cash or digital/virtual currencies, e.g., bitcoin) is required to play and anything of monetary value forms part of the prize. This includes but is not limited to:
    • Games of skill, lotteries/raffles, betting, sports betting, casino games, games of chance, or sweepstakes/prize draws
    • Gambling Games offering a limited trial period and requiring payment thereafter

Social Casino Games

  • Content that attempts to sell, trade, depict or promote social casino games that simulate gambling games such as slot machines, where there is no opportunity to win money or money’s worth. This includes content that indicates the opportunity to win “coins” of no monetary value.

Endangered and protected species (wildlife and plants)

We do not allow:

Content that:

  • Attempts to buy, sell, trade, donate, or gift or asks for endangered species or their parts, or protected plant species.
  • Admits to or encourages the poaching, buying or trading of endangered species or their parts committed by the poster of the content, either themselves or through their associates. This does not include depictions of poaching by strangers.
  • Depicts poaching of endangered species or their parts committed by the poster of the content or their associates.
  • Shows coordination or promotion (by which we mean speaks positively about, encourages the poaching of, or provides instructions to use or make products from) endangered species or their parts, or any endangered wildlife or plants.

Live non-endangered animals excluding livestock

  • Content that attempts to buy, sell or trade live non-endangered animals except when:
    • Posted by a Page, Group or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, legitimate websites, brands, or rehoming shelters, or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
    • Posted in the context of donating or rehoming live non-endangered animals, including rehoming fees for peer-to-peer adoptions.
    • Selling an animal for a religious offering, or offering a reward for lost pets.

Historic Artifacts

We do not allow:

Content that attempts to buy, sell, trade, donate or gift or asks for historical artifacts.

Hazardous Goods and Materials

We do not allow:

Content that attempts to buy, sell, trade, donate or gift or asks for hazardous goods and materials.

Body Parts and Fluids

We do not allow:

Content that:

  • Attempts to buy, sell or trade human body parts, even beyond the human-trafficking content prohibited under the Human Exploitation policy.
  • Attempts to buy, sell or trade human fluids, except in the context of donation.


Fraud, Scams, and Deceptive Practices

Policy Rationale

We aim to protect users and businesses from being deceived out of their money, property or personal information. We achieve this by removing content and combatting behavior that purposefully employs deceptive means, such as willful misrepresentation, stolen information and exaggerated claims, to either scam or defraud users and businesses, or to drive engagement. This includes content that seeks to coordinate or promote those activities using our services. We allow people to raise awareness of and educate others about these activities, as well as condemn them.

We do not allow:

Content that attempts to scam or defraud users and/or businesses by means of:

Loan Fraud and Scams

Content that:

  • Offers loans requiring the user to pay an advance fee to obtain a loan.
  • Offers loans with guarantee or near-guarantee of approval, either explicitly stated or implicitly understood based on context (such as claims to approve loan without asking for financial information).
  • Note: We also look for other signals to determine if an entity is posting legitimate, non-fraudulent content, such as when it is a verified entity and a bank or financial institution.

Gambling Fraud and Scams

Content that:

  • Offers real money gambling services (“Real money” is real-world currency that can be used to buy goods or services in the real world, including national currencies such as U.S. Dollars and virtual currencies such as Bitcoin):
    • with a guarantee of winning.
    • implying or admitting to have rigged the outcome of a game or match.
    • soliciting people to enable match fixing or looking for help or tips on how to fix a match or game.

Social casino games that simulate gambling with no opportunity to win real money fall under our Community Standard for Restricted Goods and Services.

Investment or Financial Fraud and Scams

  • Investment Opportunities. Content that:

    • Offers investment opportunities where returns on investment are guaranteed or risk-free.
    • Offers investment opportunities where returns on investment or compensation is partly or fully based on recruitment of others to participate in the scheme.
    • Offers investment opportunities where the opportunity is of a “get-rich-quick” nature and/or claims that a small investment can be turned into a large amount.
  • Money/Cash Flip. Content that:

    • Offers to turn a certain sum of money into a larger one through a flip, trick, or strategy, involving explicit mentions of “cash flip,” “money flip,” or similar terminology.

Money Muling and Laundering Fraud and Scams

  • Money Muling. Content that:

    • Offers or asks for money muling (causing victims to be unknowing participants in money laundering by offering money or share of profits in exchange for allowing others to use their bank accounts or transferring money on behalf of others).
    • Offers or asks for money muling by offering employment to accept and transfer money to third parties using the victim’s bank account.
  • Money Laundering. Content that:

    • Requests, solicits, or offers to facilitate money laundering, which is an attempt to make illegally obtained money appear legitimate by disguising the origin of the money through a complex sequence of financial transactions, including through any of the following means:
      • Seeking transfer of funds through SWIFT (Society for Worldwide Interbank Financial Telecommunications) or similar methods,
      • Seeking or offering details on types of bank accounts available to support receipt or transfer of cash.

Inauthentic Identity Fraud and Scams

Content that:

  • Attempts to scam or defraud users by misrepresenting the identity of the poster or nature of a request:
    • Charity Fraud and Scam, which are fraudulent requests for money or donations for charitable causes together with claims that the donation is urgent and includes information, such as bank accounts, where money can be sent.
    • Romance Fraud and Scam, which are fraudulent attempts to establish online romantic relationships by seeking non-sexual companionship or relationship and offering or asking for money or its equivalent in exchange.
    • Established Business/Entity Fraud and Scams, which involve falsely claiming to represent, or speak in the voice of, an established business or entity, in an attempt to scam or defraud.

Product or Reward Fraud and Scams

  • Government Grant Fraud and Scam. Content that:

    • Falsely offers money from government grants or any other governmental source of funding. We consider various signals to determine if an entity is posting legitimate, non-fraudulent content, such as when it comes from a verified entity.
  • Tangible, Spiritual or Illuminati Fraud and Scam. Content that:

    • Offers tangible rewards, such as money, goods, or services that have a monetary value including physical, digital and virtual currencies, and physical or digital goods and services for membership in or joining an association, cult, religious sect (for example, the Illuminati brotherhood).
    • Offers tangible rewards for using black magic or spells or magical items (for example, spells, lucky charms, amulets, tokens, potions, magic wallet, etc.).
  • Insurance Fraud and Scams. Content that:

    • Offers false, heavily discounted insurance with requests for an up-front fee (admin fee, or deposit, or otherwise).
    • Offers false, heavily discounted insurance with promises of large savings on insurance compared to conventional insurance providers (at least 30% less).
    • Note: We also look for other signals to determine if an entity is posting legitimate, non-fraudulent content, such as when it is a verified entity and a bank or financial institution
  • Job Fraud and Scams. Content that:

    • Offers jobs with an unclear or vague job description and get-rich-quick opportunities promising money with little time investment or effort.
    • Offers jobs containing no job information, simply referencing job vacancies.
    • Offers work from home but the job title implies the employee cannot work from home.
    • Offers jobs with advance promises of salary.
    • Offers guaranteed jobs.
    • Offers jobs with a demand for an advance fee before the job is granted.
    • Note: We also look for other signals to determine if an entity is posting legitimate, non-fraudulent content, such as when it is a verified entity
  • Debt Relief and Credit Repair Fraud and Scam. Content that:

    • Promises to delete or eliminate or reduce debt by a particular amount in a set period of time.
    • Promises to stop or delete all debt collections or lawsuits.
    • Promises to forgive or cancel debt through a “new government program,” change in law, or equivalent statement.
    • Promises to delete or remove credit information from credit reports or create a new “credit identity.”
    • Note: We also look for other signals to determine an entity is posting legitimate, non-fraudulent content, such as when it is a verified entity and a bank or financial institution
  • Giveaway Fraud and Scam. Content that:

    • Offers a guaranteed reward of real money in exchange for users needing to:
      • Register at an off-site link.
      • Share Personal Identifiable Information (PII) or Other Personal Information.
      • Contact off-platform or on-platform via private message.
      • Take no action.
  • Advance Fee Fraud and Scam. Content that:

    • Falsely promises money in exchange for an up-front fee/wire transfer/payment.

Fake Documents Fraud and Scams

  • Fake or Forged Documents. Content that:

    • Offers solicitation, creation, sale, purchase or trade of fake or forged documents.
    • Offers sale of visas or green cards.
    • Guarantees visa or green card approval.
    • Enables users to get visa approvals without fulfilling normal requirements.
  • Fake or Counterfeit Currency. Content that:

    • Offers sale, purchase or trade of fake or counterfeit currency, except board-game currency (e.g., Monopoly money) if there is clear context that it is for board-game purposes.
  • Fake or Counterfeit Vouchers. Content that:

    • Offers sharing, sale, purchase or trade of fake or counterfeit vouchers.
    • Admits to, promotes, or solicits the use of physical or digital coupons or vouchers to achieve atypical pricing by either:
      • Using coupons or vouchers to purchase items those coupons or vouchers are not intended for; or
      • Using expired coupons or vouchers.
  • Fake or forged educational and professional certificates. Content that:

    • Offers sale, purchase or trade of fake or forged educational and professional certificates.

Stolen Information, Goods or Services Fraud and Scam

  • Carding Fraud and Scam. Content that:

    • Involves buying, selling or trading of stolen credit cards or other financial instruments that can be used for unauthorized purchases (also known as "carding").
  • PII Fraud and Scam. Content that:

    • Offers buying, selling or trading of Personal Identifiable Information (PII) or Other Personal Information except if it belongs to a fictional character.
  • Fake Review Fraud and Scam. Content that:

    • Calls for buying, selling or trading of product reviews/ratings.
    • Implicitly or explicitly incentivizes users to provide reviews in exchange for discounts, refunds or free items.
  • Subscription Fraud and Scam. Content that:

    • Offers buying, selling or trading of credentials for subscription services (login credentials to online services which require a recurring payment at regular intervals) by making references to a paid online service, either by naming it or by sharing its logo.
    • Note: We also look for other signals to determine if an entity is posting legitimate, non-fraudulent content, such as when it is a verified entity and a bank or financial institution.
  • Cheating Fraud and Scam. Content that:

    • Involves sharing, selling, trading, or buying of:
      • Future exam papers or answer sheets.
      • Products or services that enable cheating in exams.
      • Products or services that enable passing drug tests in an unauthorized manner.

Unauthorized Use of Devices Fraud and Scam

  • Device Manipulation Fraud and Scam. Content that:

    • Calls for buying, selling, trading or sharing of any manipulated, altered, or fake measurement devices.
    • Admits to, promotes or solicits use of physical manipulation of devices to achieve inaccurate pricing.
  • Digital Content Fraud and Scam. Content that:

    • Offers or asks for products that facilitate or encourage access to digital content in an unauthorized manner. These include but are not limited to: augmented set top boxes, fully loaded/KODI installed boxes and KODI services.

Deceptive and Misleading Practices

  • Misleading Health Practices. Content that:
    • Promotes false or misleading health claims or guarantees in a weight loss context by employing click-bait tactics, such as the use of sensational language that makes exaggerated or extreme claims.

For the following content, we limit the ability to view the content to adults, ages 18 and older:

Content that:

  • Promises specific weight-loss results in specific time with no qualifying or disclaimer language.

Notwithstanding the above, we do not prohibit content that condemns, raises awareness of, or educates others about fraud and scams, without either revealing sensitive information or promoting fraud or scams.

For the following Community Standards, we require additional information and/or context to enforce:

  • We may remove content:
    • Involving fraud/scam that has been reported by a trusted entity.
    • Related to bribery or embezzlement.
    • That offers vaccines in an attempt to scam or defraud users.
    • That attempts to establish a fake persona or to pretend to be a famous person in an attempt to scam or defraud.
    • That offers or asks for products or services designed to facilitate the surreptitious viewing or recording of individuals, e.g., spy cams, mobile phone trackers (including those that allow tracing unknown phone numbers), or other hidden surveillance equipment.
    • That offers litigant recruitment opportunities for people to participate in class action lawsuits by impersonating a government entity or a news outlet, by using sensationalist language, or by using exaggerated claims.
    • That offers subscription services that prompt users to enter Personal Information.
  • We do not allow entities to participate in or claim to engage in organized Fraud or Scam behavior, including the use of multiple accounts on our services in concert to perpetrate fraudulent behaviors.

In certain cases, we allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.


Suicide, Self-Injury, and Eating Disorders

Policy Rationale

We care deeply about the safety of the people who use our apps. We regularly consult with experts in suicide, self-injury and eating disorders to help inform our policies and enforcement, and we work with organizations around the world to provide assistance to people in distress.

While we do not allow people to intentionally or unintentionally celebrate or promote suicide, self-injury or eating disorders, we do allow people to discuss these topics because we want our services to be a space where people can share their experiences, raise awareness about these issues, and seek support from one another.

We remove any content that encourages suicide, self-injury or eating disorders, including fictional content such as memes or illustrations, and any self-injury content which is graphic, regardless of context. We also remove content that mocks victims or survivors of suicide, self-injury or eating disorders, as well as real-time depictions of suicide or self-injury. Content about recovery from suicide, self-injury or eating disorders that is allowed but may contain upsetting imagery (such as a healed scar) is placed behind a sensitivity screen.

When people post or search for suicide, self-injury or eating disorders related content, we will direct them to local organizations that can provide support and if our Community Operations team is concerned about immediate harm we will contact local emergency services to get them help. For more information, visit the Meta Safety Center.

With respect to live content, experts have told us that if someone is saying they intend to attempt suicide on a livestream, we should leave the content up for as long as possible, because the longer someone is talking to a camera, the more opportunity there is for a friend or family member to call emergency services. However, to minimize the risk of others being negatively impacted by viewing this content, we will stop the livestream at the point at which the threat turns into an attempt. As mentioned above, in any case, we will contact emergency services if we identify someone is at immediate risk of harming themselves.

Do not post:

Content that promotes, encourages, coordinates, or provides instructions for suicide, self-injury, or eating disorders.

  • Content that depicts graphic suicide, self-injury, and eating disorder imagery
  • Content depicting a person who engaged in a suicide attempt or death by suicide
  • Content that focuses on depiction of ribs, collar bones, thigh gaps, hips, concave stomach, or protruding spine or scapula when shared together with terms associated with eating disorders
  • Content that contains instructions for drastic and unhealthy weight loss when shared together with terms associated with eating disorders
  • Content that mocks victims or survivors of suicide, self-injury or eating disorders who are either publicly known or implied to have experienced suicide or self-injury
  • Imagery depicting body modification (e.g., tattoo, piercing, scarification, self-flagellation, etc.) when shared in a suicide or self-injury context

For the following content, we include a warning screen so that people are aware the content may be sensitive. We also limit the ability to view the content to adults, ages 18 and older:

  • Photos or videos depicting a person who engaged in euthanasia/assisted suicide in a medical setting.

For the following content, we include a label so that people are aware the content may be sensitive:

  • Content that depicts older instances of self-harm such as healed cuts or other non-graphic self-injury imagery in a self-injury, suicide or recovery context.
  • Content that depicts ribs, collar bones, thigh gaps, hips, concave stomach, or protruding spine or scapula in a recovery context.

We provide resources to people who post written or verbal admissions of engagement in self-injury, including:

  • Suicide.
  • Euthanasia/assisted suicide.
  • Self-harm.
  • Eating disorders.
  • Vague, potentially suicidal statements or references (including memes or stock imagery about sad mood or depression) in a suicide or self-injury context.

For the following Community Standards, we require additional information and/or context to enforce:

  • We may remove suicide notes when we have confirmation of a suicide or suicide attempt. We try to identify suicide notes using several factors, including but not limited to:


Child Sexual Exploitation, Abuse, and Nudity

Policy Rationale

We do not allow content or activity that sexually exploits or endangers children. When we become aware of apparent child exploitation, we report it to the National Center for Missing and Exploited Children (NCMEC), in compliance with applicable law. We know that sometimes people share nude images of their own children with good intentions; however, we generally remove these images because of the potential for abuse by others and to help avoid the possibility of other people reusing or misappropriating the images.

We also work with external experts, including the Meta Safety Advisory Board, to discuss and improve our policies and enforcement around online safety issues, especially with regard to children. Learn more about the technology we’re using to fight against child exploitation.

Do not post:

Child sexual exploitation

Content, activity, or interactions that threaten, depict, praise, support, provide instructions for, make statements of intent, admit participation in, or share links of the sexual exploitation of children (including real minors, toddlers, or babies, or non-real depictions with a human likeness, such as in art, AI-generated content, fictional characters, dolls, etc). This includes but is not limited to:

  • Sexual intercourse
    • Explicit sexual intercourse or oral sex, defined as mouth or genitals entering or in contact with another person's genitals or anus, when at least one person's genitals or anus is visible.
    • Implied sexual intercourse or oral sex, including when contact is imminent or not directly visible.
    • Stimulation of genitals or anus, including when activity is imminent or not directly visible.
    • Any of the above involving an animal.
  • Children with sexual elements, including but not limited to:
    • Restraints
    • Signs of arousal
    • Focus on genitals or anus
    • Presence of aroused adult
    • Presence of sex toys or use of any object for sexual stimulation, gratification, or sexual abuse
    • Sexualized costume
    • Stripping
    • Staged environment (for example, on a bed) or professionally shot (quality/focus/angles)
    • Open-mouth kissing
    • Stimulation of human nipples or squeezing of female breast (EXCEPT in the context of breastfeeding)
    • Presence of by-products of sexual activity
  • Content involving children in a sexual fetish context
  • Content that supports, promotes, advocates or encourages participation in pedophilia unless it is discussed neutrally in a health context
  • Content that identifies or mocks alleged victims of child sexual exploitation by name or image

Solicitation

Content that solicits sexual content or activity depicting or involving children, defined as:

  • Child Sexual Abuse Material (CSAM)
  • Nude imagery of real or non-real children
  • Sexualized imagery of real or non-real children

Content that solicits sexual encounters with children

Inappropriate interactions with children

Content that constitutes or facilitates inappropriate interactions with children, such as:

  • Arranging or planning sexual encounters with children
  • Enticing children to engage in sexual activity through sexualized conversations or offering, displaying, obtaining or requesting sexual material to or from children, through purposeful exposure or in private messages
  • Engaging in implicitly sexual conversations in private messages with children
  • Obtaining or requesting sexual material from children in private messages

Exploitative intimate imagery and sextortion

Content that attempts to exploit real children by:

  • Coercing money, favors or intimate imagery with threats to expose real or non-real intimate imagery or information
  • Sharing, threatening, or stating an intent to share private sexual conversations or real or non-real intimate imagery

Sexualization of children

  • Content (including photos, videos, real-world art, digital content, and verbal depictions) that sexualizes real or non-real children
  • Groups, Pages, and profiles dedicated to sexualizing real or non-real children

Child nudity

Content that depicts real or non-real child nudity where nudity is defined as:

  • Close-ups of real or non-real children’s genitalia
  • Real or non-real nude toddlers, showing:
    • Visible genitalia, even when covered or obscured by transparent clothing
    • Visible anus and/or fully nude close-up of buttocks
  • Real or non-real nude minors, showing:
    • Visible genitalia (including genitalia obscured only by pubic hair or transparent clothing)
    • Visible anus and/or fully nude close-up of buttocks
    • Uncovered female nipples
    • No clothes from neck to knee - even if no genitalia or female nipples are showing
  • Unless the non-real imagery is for health purposes or is a non-sexual depiction of child nudity in real-world art

Non-sexual child abuse

Videos or photos that depict real or non-real non-sexual child abuse regardless of sharing intent, unless the imagery is from real-world art, cartoons, movies or video games

Content that praises, supports, promotes, advocates for, provides instructions for or encourages participation in non-sexual child abuse

In addition to removing accounts that violate our Child Sexual Exploitation, Abuse and Nudity (CSEAN) policies, our reviewers and automated systems consider a broad spectrum of signals to help prevent potentially unwanted or unsafe interactions.

  • We may restrict access to products and features (e.g. the ability to follow certain accounts) for adults based on their interactions with other accounts, searches for or interactions with violating content, or membership in communities (e.g. Groups) we have removed for violating our policies.

For the following content, we include a warning screen so that people are aware the content may be disturbing and limit the ability to view the content to adults, ages 18 and older:

  • Videos or photos that depict police officers or military personnel committing non-sexual child abuse
  • Videos or photos of non-sexual child abuse, when law enforcement, child protection agencies, or trusted safety partners request that we leave the content on the platform for the express purpose of bringing a child back to safety

For the following content, we include a sensitivity screen so that people are aware the content may be upsetting to some:

  • Videos or photos of violent immersion of a child in water in the context of religious rituals

For the following Community Standards, we require additional information and/or context to enforce:

For the following content, we include a warning label so that people are aware that the content may be sensitive:

  • Imagery posted by a news agency that depicts child nudity in the context of famine, genocide, war crimes, or crimes against humanity, unless accompanied by a violating caption or shared in a violating context, in which case the content is removed

We may remove imagery depicting the aftermath of non-sexual child abuse when reported by news media partners, NGOs, or other trusted safety partners.

We may remove content that identifies alleged victims of child sexual exploitation through means other than name or image if content includes information that is likely to lead to the identification of the individual.

We may remove content created for the purpose of identifying a private minor if there is a risk to the minor’s safety, when requested by Law Enforcement, Government, Trusted Partner, or the content is self-reported by the minor or the minor’s parent/legal guardian


Adult Sexual Exploitation

Policy Rationale

We recognize the importance of our services as a place to discuss and draw attention to sexual violence and exploitation. We believe this is an important part of building common understanding and community. In an effort to create space for this conversation and promote a safe environment, we allow survivors to share their experiences, but we remove content that depicts, threatens or promotes sexual violence, sexual assault or sexual exploitation. We also remove content that displays, advocates for or coordinates sexual acts with non-consenting parties to avoid facilitating non-consensual sexual acts. Further, if we become aware of any content that threatens or advocates rape, we may disable the posting account and work with law enforcement, in addition to removing the content.

To protect survivors, we remove images that depict incidents of sexual violence and intimate images shared without the consent of the person(s) pictured. As noted in the introduction, we also work with external safety experts to discuss and improve our policies and enforcement around online safety issues, and we may remove content when we receive information that content is linked to harmful activity. We have written about the technology we use to protect against non-consensual intimate images and the research that has informed our work. We’ve also put together a guide to reporting and removing intimate images shared without your consent.

We do not allow:

Content depicting, advocating for, or mocking non-consensual sexual touching, including:

  • Imagery depicting non-consensual sexual touching (except in real-world art depicting non-real people, with a condemning or neutral caption)
  • Statements attempting or threatening to share, offering, or asking for imagery depicting non-consensual sexual touching
  • Descriptions of non-consensual sexual touching, unless shared by or in support of the survivor
  • Advocacy (including aspirational and conditional statements) for, threats to commit, or admission of participation in non-consensual sexual touching
  • Content mocking survivors or the concept of non-consensual sexual touching
  • Content shared by a third party that identifies survivors of sexual assault when reported by the survivor

Content, activity or interactions that attempt to exploit people by:

  • Coercing money, favors or intimate imagery from people with threats to expose their intimate imagery or intimate information (sextortion)

  • Sharing, threatening, stating an intent to share, offering or asking for non-consensual intimate imagery (NCII) that fulfills all three of the following conditions:
    • Imagery is non-commercial and produced in a private setting.
    • Person in the imagery is (near) nude, engaged in sexual activity or in a sexually suggestive pose.
    • Lack of consent to share the imagery is indicated by any of the following signals:
      • Vengeful context (such as a caption, comments or page title).
      • Independent sources, such as law enforcement records, media reports (such as a leak of images confirmed by media) or representatives of a survivor of NCII.
      • Report from a person depicted in the image or who shares the same name as the person depicted in the image.
  • Promoting, threatening to share, or offering to make non-real non-consensual intimate imagery (NCII) either by applications, services, or instructions, even if there is no (near) nude commercial or non-commercial imagery shared in the content

  • Secretly taking non-commercial imagery of a person's commonly sexualized body parts (breasts, groin, buttocks, or thighs) or of a person engaged in sexual activity. This imagery is commonly known as "creepshots" or "upskirts" and includes photos or videos that mock, sexualize or expose the person depicted in the imagery.

  • Sharing, threatening to share or stating an intent to share private sexual conversations, where a lack of consent to share is indicated by any of the following:
    • Vengeful and/or threatening context,
    • Independent sources, such as media coverage or law enforcement records, or
    • Report from a person depicted in the image or who shares the same name as the person depicted in the image.

Content relating to necrophilia or forced stripping, including:

  • Imagery depicting necrophilia or forced stripping (except in real-world art depicting non-real people, with a condemning or neutral caption)
  • Statements attempting to share, offer, ask, or threatening to share the imagery of necrophilia or forced stripping
  • Statements that contain descriptions, advocacy for, aspirational or conditional statements about, statements of intent or calls for action to commit, admission of participation in, or mocking of survivors of necrophilia or forced stripping

For the following content, we include a sensitivity screen so that people are aware the content may be upsetting to some:

Narratives and statements that contain a description of non-consensual sexual touching (written or verbal) that includes details beyond mere naming or mentioning the act if:

  • Shared by the survivor, or
  • Shared by a third party (other than the survivor) in support of the survivor or condemnation of the act or for general awareness to be determined by context/caption.

For the following Community Standards, we require additional information and/or context to enforce:

We may restrict visibility to people over the age of 18 and include a warning label on certain content including:

  • Content depicting non-consensual sexual touching when:
    • Shared to raise awareness (without entertainment or sensational context),
    • The survivor is not identifiable, and
    • The content does not involve nudity.
  • Content depicting fictional non-consensual sexual touching (movie trailers, etc.) when shared by trusted partners to raise awareness and without sensational context.


In addition to our at-scale policy of removing content that threatens or advocates rape or other non-consensual sexual touching, we may also disable the posting account.

We may also enforce on content shared by a third party that identifies survivors of sexual assault when reported by an authorized representative or Trusted Partner.


Bullying and Harassment

Policy Rationale

Bullying and harassment happen in many places and come in many different forms, from making threats and releasing personally identifiable information to sending threatening messages and making unwanted malicious contact. We do not tolerate this kind of behavior because it prevents people from feeling safe and respected on Facebook, Instagram, and Threads.

We distinguish between public figures and private individuals because we want to allow discussion, which often includes critical commentary of people who are featured in the news or who have a large public audience. For public figures, we remove attacks that are severe as well as certain attacks where the public figure is directly tagged in the post or comment. We define public figures as state and national level government officials, political candidates for those offices, people with over one million fans or followers on social media and people who receive substantial news coverage.

For private individuals, our protection goes further: We remove content that's meant to degrade or shame, including, for example, claims about someone's sexual activity. We recognize that bullying and harassment can have more of an emotional impact on minors, which is why our policies provide heightened protection for anyone under the age of 18, regardless of user status.

Context and intent matter, and we allow people to post and share if it is clear that something was shared in order to condemn or draw attention to bullying and harassment. In certain instances, we require self-reporting because it helps us understand that the person targeted feels bullied or harassed. In addition to reporting such behavior and content, we encourage people to use tools available on our platforms to help protect against it.

We also have a Bullying Prevention Hub, which is a resource for teens, parents, and educators seeking support for issues related to bullying and other conflicts. It offers step-by-step guidance, including information on how to start important conversations about bullying. Learn more about what we are doing to protect people from bullying and harassment here.

Note: This policy does not apply to individuals who are part of designated organizations under the Dangerous Organizations and Individuals policy or individuals who died prior to 1900.

Tier 1: Universal protections for everyone:

  • Everyone is protected from:

    • Unwanted contact that is:
      • Repeated, OR
      • Sexually harassing, OR
      • Directed at a large number of individuals with no prior solicitation.
    • Calls for self-injury or suicide of a specific person, or group of individuals.
    • Attacks based on their experience of sexual assault, sexual exploitation, sexual harassment, or domestic abuse.
    • Statements of intent to engage in a sexual activity or advocating to engage in a sexual activity.
    • Severe sexualized commentary.
    • Derogatory sexualized photoshop or drawings
    • Attacks through derogatory terms related to sexual activity (for example: whore, slut).
    • Claims that a violent tragedy did not occur.
    • Claims that individuals are lying about being a victim of a violent tragedy or terrorist attack, including claims that they are:
      • Acting or pretending to be a victim of a specific event, or
      • Paid or employed to mislead people about their role in the event.
    • Threats to release an individual's private phone number, residential address, email address or medical records (as defined in the Privacy Violations policy).
    • Calls for, or statements of intent to engage in, bullying and/or harassment.
    • Content that degrades or expresses disgust toward individuals who are depicted in the process of, or right after, menstruating, urinating, vomiting, or defecating.

  • Everyone is protected from the following, but adult public figures are protected only when they are purposefully exposed to it:

    • Calls for death and statements in favor of contracting or developing a medical condition.
    • Celebration or mocking of death or medical condition.
    • Claims about sexually transmitted infections.
    • Derogatory terms related to female gendered cursing.
    • Statements of inferiority about physical appearance.

Tier 2: Additional protections for all Minors, Private Adults and Limited Scope Public Figures (for example, individuals whose primary fame is limited to their activism, journalism, or those who become famous through involuntary means):

  • In addition to the universal protections for everyone, all minors (private individuals and public figures), private adults and limited scope public figures are protected from:

    • Claims about sexual activity, except in the context of criminal allegations against adults (non-consensual sexual touching).
    • Content sexualizing another adult (sexualization of minors is covered in the Child Sexual Exploitation, Abuse and Nudity policy).
  • All minors (private individuals and public figures), private adults and limited scope public figures are protected from the following, but minor public figures are protected only when they are purposefully exposed to it:

    • Dehumanizing comparisons (in written or visual form) to or about:
      • Animals and insects, including subhuman creatures, that are culturally perceived as inferior.
      • Bacteria, viruses, microbes, and diseases.
      • Inanimate objects, including trash, filth, feces.
    • Content manipulated to highlight, circle, or otherwise negatively draw attention to specific physical characteristics (nose, ear, and so on).
    • Content that ranks them based on physical appearance or character traits.
    • Content that degrades individuals who are depicted being physically bullied (except in fight-sport contexts).

Tier 3: Additional protections for Private Minors, Private Adults, and Minor Involuntary Public Figures:

  • In addition to all the protections listed above, all private minors, private adults (who must self-report), and minor involuntary public figures are protected from:

    • Targeted cursing.
    • Claims about romantic involvement, sexual orientation or gender identity.
    • Calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting exclusion.
    • Negative character or ability claims, except in the context of criminal allegations and business reviews against adults.
    • Expressions of contempt, disgust, or content rejecting the existence of an individual, except in the context of criminal allegations against adults.
  • When self-reported, private minors, private adults, and minor involuntary public figures are protected from the following:

    • First-person voice bullying.
    • Unwanted manipulated imagery.
    • Comparison to other public, fictional or private individuals on the basis of physical appearance.
    • Claims about religious identity or blasphemy
    • Comparisons to animals or insects that are not culturally perceived as intellectually or physically inferior (“tiger," “lion").
    • Neutral or positive physical descriptions.
    • Non-negative character or ability claims.
    • Attacks through derogatory terms related to a lack of sexual activity.

Tier 4: Additional protections for Private Minors only:

  • Minors get the most protection under our policy. In addition to all the protections listed above, private minors are also protected from:
    • Allegations about criminal or illegal behavior.
    • Videos of physical bullying against minors, shared in any context.

Bullying and harassment through pages, groups, events and messages

  • The protections of Tiers 1 through 4 are also enforced on pages, groups, events and messages.

For the following Community Standards, we require additional information and/or context to enforce:

Do not:

  • Post content that targets private individuals through unwanted Pages, Groups and Events. We remove this content when it is reported by the target or an authorized representative of the target.
  • Post content described above that would otherwise require the target to report the content, or where the content indicates that the poster is directly targeting the target (for example: the target is tagged in the post or comment). We will remove this content if we have confirmation from the target or an authorized representative of the target (alive or deceased) that the content is unwanted.
  • Post content calling for or stating an intent to engage in behavior that would qualify as bullying and harassment under our policies. We will remove this content when we have confirmation from the target or an authorized representative of the target that the content is unwanted.
  • Post content sexualizing a public figure. We will remove this content when we have confirmation from the target or an authorized representative of the target that the content is unwanted.
  • Initiate contact that sexually harasses the recipient. We will remove any content shared in an unwanted context when we have confirmation from the recipient, or an authorized representative of the recipient, that the contact is unwanted.

In addition:

  • We remove directed mass harassment, when:
    • Targeting, via any surface, ‘individuals at heightened risk of offline harm’, defined as:
      • Human rights defenders
      • Minors
      • Victims of violent events/tragedies
      • Opposition figures in at-risk countries during election periods
      • Election officials
      • Government dissidents who have been targeted based on their dissident status
      • Ethnic and religious minorities in conflict zones
      • Members of a designated and recognizable at-risk group
    • Targeting any individual via personal surfaces, such as inbox or profiles, with:
      • Content that violates the bullying and harassment policies for private individuals, or
      • Objectionable content that is based on a protected characteristic
  • We disable accounts engaged in mass harassment as part of either:
    • State or state-affiliated networks targeting any individual via any surface, or
    • Adversarial networks targeting any individual via any surface with:
      • Content that violates the bullying and harassment policies for private individuals, or
      • Content that targets them based on a protected characteristic, or
      • Content or behavior otherwise deemed to be objectionable in local context


Human Exploitation

Policy Rationale

In an effort to disrupt and prevent harm, we remove content that facilitates or coordinates the exploitation of humans, including human trafficking. We define human trafficking as the business of depriving someone of liberty for profit. It is the exploitation of humans in order to force them to engage in commercial sex, labor, or other activities against their will. It relies on deception, force, and coercion, and degrades humans by depriving them of their freedom while economically or materially benefiting others.

Human trafficking is multi-faceted and global; it can affect anyone regardless of age, socioeconomic background, ethnicity, gender, or location. It takes many forms, and any given trafficking situation can involve various stages of development. Due to the coercive nature of this abuse, victims cannot consent.

While we need to be careful not to conflate human trafficking and smuggling, they can be related and exhibit overlap. The United Nations defines human smuggling as the procurement or facilitation of illegal entry into a state across international borders. While smuggling does not necessarily involve coercion or force, it may still result in the exploitation of vulnerable individuals who are trying to leave their country of origin, often in pursuit of a better life. Human smuggling is a crime against a state and relies on movement; human trafficking is a crime against a person and relies on exploitation.

In addition to content condemning, raising awareness about, or news reporting on human trafficking or human smuggling issues, we allow content asking for or sharing information about personal safety and border crossing, seeking asylum or how to leave a country.

Do not post:

Content, activity or interactions that recruits people for, facilitates or exploits people through any of the following forms of human trafficking:

  • Sex trafficking (any commercial sexual activity with a minor or any commercial sexual activity with an adult involving force, fraud, or coercion)
  • Sales of children or illegal adoption
  • Orphanage trafficking and orphanage voluntourism
  • Forced marriages
  • Labor exploitation (including bonded labor)
  • Domestic servitude
  • Non-regenerative organ trafficking (not including organ removal, donation, or transplant in a non-exploitative organ donation context)
  • Forced criminal activity (e.g. forced begging, forced drug trafficking)
  • Recruitment of child soldiers

Content where a third party actor recruits for, facilitates or benefits from (financially or otherwise) commercial sexual activity

Content that offers to provide or facilitate human smuggling

Content that asks for human smuggling services

We allow content that is otherwise covered by this policy when posted in condemnation, educational, awareness raising, or news reporting contexts.

For the following Community Standards, we require additional information and/or context to enforce:

We may remove content that offers a job in locations that are high-risk for labor exploitation when confirmed by law enforcement, local non-governmental organizations, or other trusted partners



Privacy Violations

Policy Rationale

Our services aim to protect the privacy and personal information of our users. We work hard to safeguard personal identity and information, and we do not allow people to post certain types of personal or confidential information about themselves or about others. We also provide ways for people to report imagery that they believe violates their privacy rights.

We remove content that shares, offers, or solicits personally identifiable information or other private information that could lead to physical or financial harm, including financial, residential, and medical information, as well as private information obtained from illegal sources. We recognize that private information may become publicly available through news coverage, court filings, press releases, or other sources. When that happens, we may allow the information to be posted.

We have additional restrictions for paid content. Although we allow ads that provide a positive user experience by focusing on the product’s or service’s details, we remove ads that exploit users’ personal hardships, appear to make negative or inaccurate characterizations about them, or imply knowledge of sensitive personal information. For more information on our privacy rules for paid content, see our Advertising Standard on Privacy Violations and Personal Attributes.

We do not allow:

Content that shares or asks for private information, either on our services or through external links, as follows:

Personally identifiable information (PII)

  • Content sharing Personally Identifiable Information (information that uniquely identifies an individual) of the poster or others. This includes:

    • National identification numbers such as social security numbers (SSN), passport numbers or individual taxpayer identification numbers (ITIN).
    • Government IDs of law enforcement, military or security personnel.
    • Records or official documentation of civil registry information such as marriage, birth, death, name change or gender recognition documents.
    • Immigration and work status documents such as green cards, work permits or visas.
  • Content asking for Personally Identifiable Information of others

Personal Contact Information

  • Content sharing personal contact information of others, except when made public by the individual or when shared or solicited to promote charitable causes, facilitate finding missing people, animals, or owners of missing objects, or contact a business or service provider (unless it is established that the personal contact information is shared without the consent of the individual).

Residential information

  • Content sharing full private residential addresses of others, including building name or pins on a map identifying the address (even if the pins are in an off-platform link), except in the following contexts:

    • When shared to promote charitable causes, facilitate finding missing people, animals, or owners of missing objects, or contact a business or service provider
    • When the residence is an official residence or embassy provided to a high-ranking public official
  • Content sharing partial private residential addresses of others (except when the residence is an official residence or embassy provided to a high-ranking public official):

    • When shared in the context of organizing protests or surveillance of the resident and the location of the residence is identified by any one of the following:
      • Street
      • City or neighborhood (only for cities with fewer than 50,000 residents)
      • Postal code
      • GPS pins or pins on a map identifying any of these (even if the pins are in an off-platform link)
  • Imagery that displays the external view of private residences, if all of the following conditions apply:

    • The residence is a single-family home, or the resident's unit number/building name is identified in the image/caption.
    • The location of the residence is identified by any one of the following:
      • Street
      • City or neighborhood (only for cities with fewer than 50,000 residents)
      • Postal code
      • GPS pins or pins on a map identifying any of these (even if the pins are in an off-platform link)
    • The content identifies the resident(s).
    • Either that resident objects to the exposure of their private residence, or there is context of organizing protests against the resident.
    • The imagery of the residence is not being shared because the residence is the focus of a news story (except when shared in the context of organizing protests against the resident).
  • Content asking for private residential information of others (except when the residence is an official residence or embassy provided to a high-ranking public official)

  • Content asking for the location of safe houses, or exposing information about safe houses by sharing any of the below (except when the safe house actively promotes information about its facility):

    • Actual address (Note: "Post Box only" is allowed)
    • Images of the safe house
    • Identifiable city/neighborhood of the safe house
    • Information exposing the identity of the safe house residents

Medical information

  • Content sharing or asking for medical, psychological, biometric, or genetic/hereditary information of others (including when displayed visually or shared through audio or video), when it is clear that the information comes from medical records or other official documents.

Financial information

  • Content sharing or asking for personal financial information about the poster or others, defined as information about a person’s individual finances including:
    • Non-public financial records or statements.
    • Bank account numbers with security or pin codes.
    • Digital payment method information with log in details, security or pin codes.
    • Credit or debit card information with validity dates or security pins or codes.
  • Content sharing or asking for non-public financial information of a business or organization, defined as information about a business’ or organization’s finances, except when originally shared by the organization itself (including subsequent shares with the original context intact) or shared through public reporting requirements (for example as required by stock exchanges or regulatory agencies), including:
    • Non-public financial records or statements
    • Bank account numbers accompanied by security or pin codes.
    • Digital payment method information accompanied by log in details, security or pin codes.

Information obtained from hacked sources

  • Content claimed by the poster or confirmed to come from a hacked source, regardless of whether the affected person is a public figure or a private individual.

The following content also may be removed:

  • A reported photo or video of people where the person depicted in the image is:
    • A minor under 13 years old, and the content was reported by the minor or a parent or legal guardian.
    • A minor between 13 and 18 years old, and the content was reported by the minor.
    • An adult, where the content was reported by the adult from outside the United States and applicable law may provide rights to removal.
    • Any person who is incapacitated and unable to report the content on their own.

For the following Community Standards, we require additional information and/or context to enforce:

  • Depictions of Individuals & Medical Facilities, defined as content that displays an individual in a medical or health facility or a private individual or minor entering or exiting a medical or health facility, and is reported by:

    • The person in the image
    • A representative of the person in the image
    • The medical or health facility with care responsibilities for the person in the image, or
    • The medical or health facility that employs the person in the image
  • Source material that purports to reveal nonpublic information relevant to an election shared as part of a foreign government influence operation.

    • We remove reporting on such a leak by state-controlled media entities from the country behind the leak.


Hate Speech

Policy Rationale

We believe that people use their voice and connect more freely when they don’t feel attacked on the basis of who they are. That is why we don’t allow hate speech on Facebook, Instagram, or Threads. It creates an environment of intimidation and exclusion, and in some cases may promote offline violence.

We define hate speech as direct attacks against people — rather than concepts or institutions — on the basis of what we call protected characteristics (PCs): race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and serious disease. Additionally, we consider age a protected characteristic when referenced along with another protected characteristic. We also protect refugees, migrants, immigrants, and asylum seekers from the most severe attacks, though we do allow commentary on and criticism of immigration policies. Similarly, we provide some protections for non-protected characteristics, such as occupation, when they are referenced along with a protected characteristic. Sometimes, based on local nuance, we consider certain words or phrases as frequently used proxies for PC groups.

We define a hate speech attack as dehumanizing speech; statements of inferiority; expressions of contempt or disgust; cursing; and calls for exclusion or segregation. We also prohibit the use of harmful stereotypes, which we define as dehumanizing comparisons that have historically been used to attack, intimidate, or exclude specific groups, and that are often linked with offline violence. We also prohibit the use of slurs that are used to attack people on the basis of their protected characteristics. Attacks are separated into two tiers of severity, described below.

We have additional restrictions for paid content.

However, we recognize that people sometimes share content that includes slurs or someone else’s hate speech in order to condemn the speech or report on it. In other cases, speech, including slurs, that might otherwise violate our standards is used self-referentially or in an empowering way. People also sometimes express contempt or curse at a gender in the context of a romantic break-up. Other times, they use gender-exclusive language to control membership in a health or positive support group, such as a breastfeeding group for women only. Our policies are designed to allow room for these types of speech but require people to clearly indicate their intent. Where intention is unclear, we may remove content.

Note: Violent speech targeting people on the basis of their protected characteristics is covered in our Violence and Incitement Policy.

Learn more about our approach to hate speech.

Do not post:

Tier 1

Content targeting a person or group of people (including all groups except those who are considered non-protected groups described as having carried out violent crimes or sexual offenses or representing less than half of a group) on the basis of their aforementioned protected characteristic(s) or immigration status in written or visual form with:

  • Dehumanizing speech in the form of comparisons to or generalizations about:

    • Animals and pathogens:
      • Insects (including but not limited to: cockroaches, locusts)
      • Animals in general or specific types of animals that are culturally perceived as inferior (including but not limited to: Black people and apes or ape-like creatures; Jewish people and rats; Muslim people and pigs; Mexican people and worms)
    • Certain Inanimate Objects and Non-Human States:
      • Certain objects (women as household objects or property or objects in general; Black people as farm equipment; transgender or non-binary people as “it”)
      • Feces (including but not limited to: shit, crap)
      • Filth (including but not limited to: dirt, grime, or saying "[protected characteristic or quasi-protected characteristic] has bad hygiene")
      • Bacteria, viruses, or microbes
      • Disease (including but not limited to: cancer, sexually transmitted diseases)
      • Subhumanity (including but not limited to: savages, devils, monsters, primitives)
    • Criminals:
      • Sexual predators (including but not limited to: Muslim people having sex with goats or pigs)
      • Violent criminals (including but not limited to: terrorists, murderers, members of hate or criminal organizations)
      • Other criminals (including but not limited to “thieves,” “bank robbers,” or saying “All [protected characteristic or quasi-protected characteristic] are ‘criminals’”).
  • Statements in the form of calls for action or statements of intent to inflict, aspirational or conditional statements about, or statements advocating or supporting harm in the following ways:

    • In favor of contracting a disease
    • In favor of experiencing a natural disaster
    • Calls for self-injury or suicide
    • Calls for death without a perpetrator or method
    • Calls for accidents and other physical harms caused either by no perpetrator or by a deity

  • Statements denying existence (including but not limited to: "[protected characteristic(s) or quasi-protected characteristic] do not exist", "no such thing as [protected characteristic(s) or quasi-protected characteristic]" or “[protected characteristic(s) or quasi-protected characteristic] shouldn’t exist”)

  • Harmful stereotypes historically linked to intimidation, exclusion, or violence on the basis of a protected characteristic, such as Blackface; Holocaust denial; claims that Jewish people control financial, political, or media institutions; and references to Dalits as menial laborers

  • Mocking the concept, events or victims of hate crimes even if no real person is depicted in an image.

  • Mocking people on the basis of their Protected Characteristics or Quasi-Protected Characteristics for having or experiencing a disease.

  • Content that describes or negatively targets people with slurs, where slurs are defined as words that inherently create an atmosphere of exclusion and intimidation against people on the basis of a protected characteristic, often because these words are tied to historical discrimination, oppression, and violence. They do this even when targeting someone who is not a member of the PC group that the slur inherently targets.

Tier 2

Content targeting a person or group of people on the basis of their protected characteristic(s) (in written or visual form):

  • Generalizations that state inferiority in the following ways:

    • Physical appearance, including but not limited to: ugly, hideous.
    • Mental characteristics are defined as those about:
      • Intellectual capacity, including but not limited to: dumb, stupid, idiots.
      • Education, including but not limited to: illiterate, uneducated.
      • Mental health, including but not limited to: mentally ill, retarded, crazy, insane.
    • Moral characteristics are defined as those about:
      • Character traits culturally perceived as negative, including but not limited to: coward, liar, arrogant, ignorant.
      • Derogatory terms related to sexual activity, including but not limited to: whore, slut, perverts.
  • Other statements of inferiority, which we define as:

    • Expressions about being less than adequate, including but not limited to: worthless, useless.
    • Expressions about being better/worse than another protected characteristic, including but not limited to: "I believe that males are superior to females."
    • Expressions about deviating from the norm, including but not limited to: freaks, abnormal.
  • Expressions of contempt (except in a romantic break-up context) or disgust, which we define as:

    • Self-admission to intolerance on the basis of a protected characteristic, including but not limited to: homophobic, islamophobic, racist.
    • Expressions of hate, including but not limited to: "I despise", "I hate", "I can't stand".
    • Expressions of dismissal, including but not limited to: "I don't respect", "I don't like", "I don't care for".
    • Expressions that suggest the target causes sickness, including but not limited to: vomit, throw up.
    • Expressions of repulsion or distaste, including but not limited to: vile, disgusting, yuck.
  • Targeted cursing, except certain gender-based cursing in a romantic break-up context, defined as:

    • Referring to the target as genitalia or anus, including but not limited to: cunt, dick, asshole.
    • Profane terms or phrases or other curses with the intent to insult, including but not limited to: fuck, bitch, motherfucker.
    • Terms or phrases calling for engagement in sexual activity, or contact with genitalia, anus, feces or urine, including but not limited to: suck my dick, kiss my ass, eat shit.
  • Exclusion or segregation in the form of calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting, defined as:

    • Explicit exclusion, which means things like expelling certain groups, saying they are not allowed, or calling for segregation.
    • Political exclusion, which means denying the right to political participation.
    • Economic exclusion, which means denying access to economic entitlements and limiting participation in the labor market.
    • Social exclusion, which means things like denying access to spaces (physical and online) and social services, except for gender-based exclusion in health and positive support groups.

For the following Community Standards, we require additional information and/or context to enforce:

Do not post:

  • Content explicitly providing or offering to provide products or services that aim to change people’s sexual orientation or gender identity.
  • Content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic. Meta looks at a range of signs to determine whether there is a threat of harm in the content. These include but are not limited to: content that could incite imminent violence or intimidation; whether there is a period of heightened tension such as an election or ongoing conflict; and whether there is a recent history of violence against the targeted protected group. In some cases, we may also consider whether the speaker is a public figure or occupies a position of authority.
  • Content targeting a person or group of people on the basis of their protected characteristic(s) with claims that they have or spread the novel coronavirus, are responsible for the existence of the novel coronavirus, or are deliberately spreading the novel coronavirus.

In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.


Violent and Graphic Content

Policy Rationale

We understand that people have different sensitivities with regard to graphic and violent imagery. To protect users from such content, we remove the most graphic content and add warning labels to other graphic content so that people are aware it may be sensitive or disturbing before they click through. We may also restrict the ability for users under 18 to view such content (or “age-gate” the content).

We recognize that users may share content in order to shed light on or condemn acts such as human rights abuses or armed conflict. Our policies consider when content is shared in this context and allow room for discussion and awareness raising accordingly.

In ads, we provide additional protections. For example, content that has been deemed sensitive or disturbing is not eligible to run in ads. We also prohibit ads from including images and videos that are shocking, gruesome, or otherwise sensational.

Do not post:

Imagery of people

Videos of people, living or deceased, in non-medical contexts, depicting:

  • Dismemberment.
  • Visible innards, such as exposed organs, bones, or muscle tissue on living or deceased persons;
  • Burning or charred persons; or
  • Throat-slitting.

Live-streams of capital punishments.

Sadistic Remarks

Sadistic remarks are commentary – such as captions or comments – expressing joy or pleasure from the suffering or humiliation of people or animals.

We remove:

  • Sadistic remarks made toward imagery (both videos and still images) that otherwise receives a warning screen under this policy, advising people that the content may be disturbing; unless the imagery depicts acts of self-defense (e.g., video of someone defending themselves from armed robbery) or is in a medical context (e.g., an image of medical professionals performing surgery).
  • Sadistic remarks made towards the following imagery that otherwise receives a warning screen under this policy advising people it may be sensitive:
    • Imagery depicting a person’s violent death or life threatening event when the act of violence is committed by uniformed personnel performing a police function;
    • Imagery depicting acts of brutality (e.g., acts of violence or lethal threats on forcibly restrained subjects) by uniformed personnel performing a police function;
    • Imagery depicting fetuses and babies outside of the womb that are deceased;
  • Explicit sadistic remarks made towards the suffering of animals depicted in imagery, and imagery depicting animals going from live to dead.
  • Offering or soliciting imagery that is deleted or receives a warning screen under this policy, when accompanied by sadistic remarks.

For the following content, we include a warning screen so that people are aware the content may be disturbing. We also limit the ability to view the content to adults, ages 18 and older:

Imagery of people

Videos of people, living or deceased, in medical contexts depicting:

  • Dismemberment.
  • Visible innards, such as exposed organs, bones, or muscle tissue on living or deceased persons;
  • Burning or charred persons, including in contexts of cremation; or
  • Throat-slitting.

Still images of people, living or deceased, depicting:

  • Dismemberment.
  • Visible innards, such as exposed organs, bones, or muscle tissue on living or deceased persons;
  • Burning or charred persons; or
  • Throat-slitting.

Imagery (both videos and still images) depicting a person’s violent death (including their moment of death or the aftermath) or a person experiencing a life threatening event (such as being struck by a car, falling from a great height, or experiencing other possibly-fatal physical injury).

Imagery depicting capital punishment of a person (excluding live-streams).

Imagery depicting acts of brutality (e.g., acts of violence or lethal threats on forcibly restrained subjects) committed against a person or group of people.

Imagery depicting non-medical foreign objects (e.g., knives, nails, or other metal objects) piercing a person’s skin.

Imagery depicting a person’s broken, bleeding teeth, removed teeth where blood is present; or the insertion of foreign objects into the teeth or gums.

Imagery of animals

Any imagery of animals, still living or going from live to dead, depicting dismemberment, visible innards, burning or charring, or being boiled alive.

Any imagery of animals, when there are visible innards or dismemberment of non-regenerating body parts, unless in the wild.

For the following content, we include a label so that people are aware the content may be sensitive:

Imagery of people

Imagery (both videos and still images) depicting non-medical foreign objects (e.g., knives, nails, or other metal objects) piercing a person’s skin in a religious or cultural context.

Imagery depicting visible innards in a birthing context.

Imagery depicting a person’s violent death or life threatening event when the act of violence is committed by uniformed personnel performing a police function.

Imagery depicting acts of brutality (e.g., acts of violence or lethal threats on forcibly restrained subjects) by uniformed personnel performing a police function.

Imagery depicting fetuses and babies outside of the womb that are deceased, unless another person is present in the image.

Imagery, in a medical context, depicting a person’s broken, bleeding teeth, removed teeth where blood is present; or the insertion of foreign objects into the teeth or gums.

Imagery of animals

Imagery depicting already-dead animals, if there is dismemberment, visible innards, burning or charring, or where blood is present.

Imagery depicting animals going from live to dead if there is no dismemberment, visible innards, burning or charring, or boiling alive.

Imagery depicting people committing acts of brutality (e.g., acts of violence or lethal threats on forcibly restrained subjects) on living animals.

For the following Community Standards, we require additional information and/or context to enforce:

We remove:

Imagery depicting the violent death of someone when a family member of the deceased requests its removal.

Video which includes audio, but not a visual depiction, of a person’s violent death when the person’s death is confirmed by law enforcement record, death certificate, Trusted Partner report, or media report and a family member of the deceased requests its removal.

Video of charred or burning humans in the context of self-immolation as an act of protest.


Adult Nudity and Sexual Activity

Policy Rationale

We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content, particularly due to cultural background or age.

We understand that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause or for educational or medical reasons. Where appropriate and such intent is clear, we make allowances for the content. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding and photos of post-mastectomy scarring. We also allow real world art that depicts nudity such as photographs of paintings, sculptures, etc. We default to removing sexual imagery to prevent the sharing of non-consensual or underage content.

Under this policy, we remove real photographs and videos of nudity and sexual activity, AI- or computer-generated images of nudity and sexual activity, and digital imagery, regardless of whether it looks “photorealistic” (as in, it looks like a real person). As noted above, we also make careful allowances for real world art and certain medical, educational, and awareness-raising content, and these are detailed in the policy.

Content relating to child nudity is addressed in our Community Standard on Child Sexual Exploitation, Abuse and Nudity.

We do not allow:

  • Imagery, and digital imagery, of adult nudity, if it depicts:
    • Visible genitalia (including when obscured by pubic hair) except when labeled with a sensitive warning screen in a medical or health context (for example, birth giving and after-birth moments, gender confirmation surgery, examination for cancer or other diseases)
    • Visible anuses and/or fully nude close-ups of buttocks except when labeled with a sensitive warning screen in a medical or health context or when edited onto a public figure
    • Uncovered female nipples, except in a breastfeeding, mastectomy, medical, health, or act of protest context
    • Note that we allow all the above in the context of famine, genocide, war crimes, or crimes against humanity
  • Imagery of adult sexual activity, including:
    • Explicit sexual activity or stimulation
      • Explicit sexual intercourse or oral sex, as indicated by a person’s mouth or genitals entering or in contact with another person's genitals or anus, when at least one person's genitals or anus is visible
      • Explicit stimulation of a person’s genitals or anus, as indicated by stimulation, or the insertion of sex toys into the person’s genitals or anus, when the contact with the genitals or anus is directly visible
    • Implicit sexual activity or stimulation, except when labeled with a sensitive warning screen in a medical, health or sexual wellness context; or when limited to adults, ages 18 years or older, in promotional content, recognized fictional images or images with indicators of fiction:
      • Implicit sexual intercourse or oral sex, as indicated by a person’s mouth or genitals entering or in contact with another person's genitals or anus, when the genitals or anus and/or the entry or contact is not directly visible
      • Implicit stimulation of a person’s genitals or anus, as indicated by stimulation, or the placement of sex toys above or insertion of sex toys into the person’s genitals or anus, when the genitals or anus, stimulation, placement, and/or insertion is not directly visible
    • Other sexual activity or stimulation, except when labeled with a sensitive warning screen in a medical or health context, or when limited to adults, ages 18 years or older, in promotional content, recognized fictional images or images with indicators of fiction:
      • Erections
      • Presence of by-products of sexual activity
      • Sex toys placed upon or inserted into mouth
      • Stimulation of visible human nipples
      • Squeezing female breasts, defined as a grabbing motion with curved fingers that shows both marks and clear shape change of the breasts. We allow squeezing in breastfeeding contexts.
    • Imagery depicting fetish that involves:
      • Acts that are likely to lead to the death of a person or animal
      • Dismemberment
      • Cannibalism
      • Feces, urine, spit, snot, menstruation or vomit
      • Bestiality
      • Incest
    • Digital imagery of adult sexual activity, except when posted in the context of medical awareness, scientific discourse or discussion of sexual health, or when it meets one of the criteria below and viewing is limited to adults, ages 18 years or older.
  • Extended audio of sexual activity

For the following content, we include a label so that people are aware the content may be sensitive:

  • Imagery, and digital imagery, of visible genitalia, fully nude close-ups of buttocks, or visible anuses, when shared in a medical or health context. This can include, for example:

    • Birth-giving and after-birth giving moments
    • Gender confirmation surgery
    • Self-examination for cancer or other disease
  • Imagery of implicit/other sexual activity or stimulation when shared in a medical or health context

  • Imagery of implicit sexual activity or stimulation in a sexual wellness context

For the following content, we limit the ability to view the content to adults, ages 16 and older:

  • Imagery depicting near nudity such as nudity covered only by digital overlay or an opaque object and nudity obscured by see-through clothing
  • Imagery depicting persons making sexual poses, defined as poses simulating sexual activity or where groin, buttock or female breast(s) are in focus (including in real world art and digital imagery)
  • Imagery depicting sex-related activity (including in real world art and digital imagery) such as kissing with visible tongue and sexual or erotic dancing
  • Imagery depicting gestures that signify genitalia, masturbation, oral sex, or sexual intercourse (including in real world art and digital imagery)
  • Imagery depicting logos, screenshots or video clips of known pornographic websites
  • Content that contains sexual audio

For the following content, we limit the ability to view the content to adults, ages 18 and older:

  • Real-world art, where:

    • Imagery depicts implicit, explicit, or other sexual activity or stimulation, except when posted in the context of medical awareness, scientific discourse or discussion of sexual health
    • Imagery depicts bestiality, provided it is shared neutrally or in condemnation and the people or animals depicted are not real
  • Implicit/other sexual activity or stimulation in promotional content, recognized fictional images or images with indicators of fiction

  • Digital imagery and real world art of adult sexual activity, where:

    • The content was posted in a satirical or humorous context
    • Only body shapes or contours are visible

For the following Community Standards, we require additional information and/or context to enforce:

  • In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.


Adult Sexual Solicitation and Sexually Explicit Language

Policy Rationale

As noted in the Adult Sexual Exploitation policy, people use our services to discuss and draw attention to sexual violence and exploitation. We recognize the importance of and allow for this discussion. We also allow for the discussion of sex worker rights advocacy and sex work regulation. We draw the line, however, when content facilitates sexual encounters or commercial sexual services between adults. We do this to avoid facilitating transactions that may involve trafficking, coercion and non-consensual sexual acts.

We also restrict sexually-explicit language that may lead to sexual solicitation because some audiences within our global community may be sensitive to this type of content, and it may impede the ability for people to connect with their friends and the broader community.

We do not allow:

Content that offers or asks for prostitution, defined as offering oneself or asking for sexual activities in exchange for money or anything of value such as:

  • Offering or asking for sexual activity (for example, escort services, sexual/erotic massages, sex chats/conversations, fetish/domination services)
  • Slang terms for prostitution combined with an ask or offer of availability, price, hint at price, or compensation, location, or contact information
  • Content that engages in explicit or implicit sexual solicitation combined with a price, hint at price, or compensation
  • Content that recruits or offers other people for third-party commercial sex work is separately considered under the Human Exploitation policy.

Content that engages in explicit sexual solicitation by offering or asking for sexual activities such as:

  • Sex or sexual partners (including partners who share fetish or sexual interests).
  • Sex chat or conversations.
  • Nude photos/videos/imagery/sexual fetish items.
  • Offers or asks that include sexual slang terms.

Content that engages in implicit or indirect sexual solicitation (defined as sharing contact information, or suggesting to be contacted directly) with a sexually suggestive element. Sexually suggestive elements can include content prohibited under the Adult Nudity and Sexual Activity policy or mentions or depictions of regionalized sexual slang, commonly sexualized emojis, sexually suggestive poses, sexual roles, sex positions, fetish scenarios, state of arousal, etc.

Content that offers or asks for pornographic material including, but not limited to, sharing of links to external pornographic websites.

Sexually explicit language that goes into graphic detail about:

  • A state of sexual arousal (e.g., wetness or erection)
  • An act of sexual intercourse (e.g., sexual penetration, self-pleasuring or exercising fetish scenarios)
  • The above does not include content shared in a humorous, satirical or educational context, as a sexual metaphor or as sexual cursing

We allow content that is otherwise covered by this policy when posted in condemnation, educational, awareness-raising, or news-reporting contexts. Nor does this policy prohibit content expressing desire for sexual activity, promoting sex education, discussing sexual practices or experiences, or offering classes or programs that teach about sex.

For the following content, we limit the ability to view the content to adults, ages 18 and older:

  • Content expressing desire for adult sexual activity, or discussing sexual practices or experiences, even without sexual solicitation

  • Sexual metaphors or sexual cursing that goes into graphic detail about:

    • A state of sexual arousal (e.g., wetness or erection)
    • An act of sexual intercourse (e.g., sexual penetration, self-pleasuring or exercising fetish scenarios)

For the following Community Standards, we require additional information and/or context to enforce:

  • In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.


Account Integrity

Policy Rationale

In order to maintain a safe environment and empower free expression, we restrict or remove accounts that are harmful to the community. We have built a combination of automated and manual systems to restrict and remove accounts that are used to egregiously or persistently violate our policies across any of our products.

Because account removal is a serious action, whenever possible, we aim to give our community opportunities to learn our rules and follow our Community Standards. For example, a notification is issued each time we remove content, and in most cases we also provide people with information about the nature of the violation and any restrictions that are applied. Our enforcement actions are designed to be proportional to the severity of the violation, the history of violations on the account, and the risk or harm posed to the community. Continued violations, despite repeated warnings and restrictions, or violations that pose severe safety risks will lead to an account being disabled.

Learn more about how Meta enforces its policies and restricts accounts in the Transparency Center.

We may restrict or disable accounts, other entities (Pages, groups, events) or business assets (Business Managers, ad accounts) that:

  • Violate our Community Standards involving egregious harms, including those we refer to law enforcement due to the risk of imminent harm to individual or public safety
  • Violate our Community Standards involving any harms that warrant referral to law enforcement due to the risk of imminent harm to individual or public safety
  • Violate our Advertising Standards involving deceptive or dangerous business harms
  • Persistently violate our Community Standards by posting violating content and/or managing violating entities or business assets
  • Persistently violate our Advertising Standards
  • Engage in activity or behavior indicative of a clear violating purpose

We may restrict or disable accounts, other entities (Pages, groups, events) or business assets (Business Managers, ad accounts) that are:

  • Owned by the same person or entity as an account that has been disabled
  • Created or repurposed to evade a previous account or entity removal, including those assessed to have common ownership and content as previously removed accounts or entities
  • Created to contact a user that has blocked an account
  • Otherwise used to evade our enforcement actions or review processes

We may restrict or disable accounts, other entities (Pages, groups, events) or business assets (Business Managers, ad accounts) that demonstrate:

  • Close linkage with a network of accounts or other entities that violate or evade our policies
  • Coordination within a network of accounts or other entities that persistently or egregiously violate our policies
  • Activity or behavior indicative of a clear violating purpose through a network of accounts

We will work to restrict or disable accounts or other entities (Pages, groups, events), or business assets (Business Managers, ad accounts) that engage in off-platform activity that can lead to harm on our platform, including those:

  • Owned by a convicted sex offender, convicted of offenses related to the sexual abuse of children or adults
  • Owned by a Designated Entity or run on their behalf
  • Prohibited from receiving our products, services or software under applicable laws

In the following scenarios, we may request additional information about an account to ascertain ownership and/or permissible activity:

  • Compromised accounts
  • Creating or using an account or other entity through automated means, such as scripting (unless the scripting activity occurs through authorized routes and does not otherwise violate our policies)
  • Empty accounts with prolonged dormancy


Authentic Identity Representation

Policy Rationale

Authenticity is the cornerstone of our community. We believe that authenticity helps create a community where people are accountable to each other, and to Meta, in meaningful ways. We want to allow for the many ways that identity is expressed across our global community, while preventing impersonation and identity misrepresentation. To maintain a safe and open environment where people can trust one another and build community, we do not allow for the creation of accounts or profiles that are created or used to deceive others.

On Facebook, we require people to create one account using the name they go by in everyday life that represents their authentic identity. We created Additional Profiles to help people express different parts of their identity, such as their interests or businesses.

We do not allow the use of our services and will restrict or disable Facebook, Instagram, and Threads accounts or other Facebook entities (such as Pages, groups) that:

  • Belong to underage children
  • Impersonate another person or entity by:
    • Using their image(s), name, or likeness with the aim to deceive others
    • Speaking in the voice of another person or entity for whom the user is not authorized to do so (e.g. by creating a Page or Profile)
  • Engage in identity misrepresentation to mislead or deceive others, evade enforcement, or violate our Community Standards. We consider a number of factors when assessing misleading identity misrepresentation, such as:
    • Repeated or significant changes to identity details, such as name or age
    • Misleading profile information, such as bio details and profile location
    • Using stock imagery
  • Use a name containing violations of our Community Standards.

On Facebook, we will seek further information before taking actions ranging from temporarily restricting to permanently disabling profiles or accounts if you:

  • Provide a false date of birth
  • Use a name that is not the authentic name you go by in everyday life
  • Create a single account that represents or is used by more than one person
  • Create or maintain multiple Facebook accounts
  • Create an account that represents a non-human entity, such as a business, pet, or fictional character
  • Maintain empty profiles with prolonged dormancy


Spam

Policy Rationale

We do not allow content that is designed to deceive, mislead, or overwhelm users in order to artificially increase viewership. This content detracts from people's ability to engage authentically on our platforms and can threaten the security, stability and usability of our services. We also seek to prevent abusive tactics, such as spreading deceptive links to draw unsuspecting users in through misleading functionality or code, or impersonating a trusted domain.

Online spam is a lucrative industry. Our policies and detection must constantly evolve to keep up with emerging spam trends and tactics. In taking action to combat spam, we seek to balance raising the costs for its producers and distributors on our platforms, with protecting the vibrant, authentic activity of our community.

We do not allow:

  • Posting, sharing, engaging with content or creating accounts, Groups, Pages, Events or other assets, either manually or automatically, at very high frequencies.

    • We may place restrictions on accounts that are acting at lower frequencies when other indicators of Spam (e.g., posting repetitive content) or signals of inauthenticity are present.
  • Attempting to or successfully selling, buying, or exchanging platform assets, such as accounts, groups, pages, etc.

  • Attempting to or successfully selling, buying, or exchanging site privileges, such as admin or moderator roles, or permission to post in specific spaces.

  • Attempting to or successfully selling, buying, or exchanging content for something of monetary value, except clearly identified Branded Content, as defined by our Branded Content Policy.

  • Attempting to or successfully selling, buying, or exchanging for engagement, such as likes, shares, views, follows, clicks, use of specific hashtags, etc. This includes:

    • Offering giveaways (i.e., offering others a chance to win) of cash or cash equivalents in exchange for engagement. (e.g., “Anyone that likes my page will be entered to win $500”)
    • Offering to provide anything of monetary value in exchange for engagement. (e.g., “If you like my page, I will give you an iPhone!”)
  • Requiring or claiming that users are required to engage with content (e.g., liking, sharing) before they are able to view or interact with promised content.

  • Sharing deceptive or misleading URLs, domains, or applications including:

    • Cloaking: Cloaking is any attempt to circumvent our content policies by intentionally presenting different off-platform content, such as URLs or applications, to our integrity systems versus what is shown to users.
    • Misleading Links: Content containing a link that promises one type of content but delivers something substantially different. This can include content in a promised app or software.
    • Deceptive redirect behavior: Websites that require an action (e.g. captcha, watch ad, click here) in order to view the expected landing page content and the domain name of the URL changes after the required action is complete, or automatically redirects users to a substantially different domain without any user action.
    • Like/share-gating: Requiring users to engage (in the form of likes, shares, follows, or any other public-facing form of engagement) to gain access to specific, exclusive content.
    • Deceptive platform functionality: Mimicking the features or functionality of our services, such as mimicking fundraising, in-line polls, play buttons, or the Like button where that functionality does not exist or does not function as expected, in order to get a user to follow a link.
    • Deceptive landing page functionality: Websites that have a misleading user interface, which results in accidental traffic being generated (e.g. pop-ups/unders, clickjacking, etc.). This includes tactics like trapping, where irrelevant pop-ups appear when a person attempts to leave the landing page.
    • Landing page or domain impersonation: An off-platform landing page, URL, or external website or domain that pretends to be a reputable brand or service by using a name, domain, or content that features typos, misspellings, or other means to impersonate a well-known website, domain, or brand, often via a landing page similar to the trusted site.
    • Other deceptive uses of URLs or links that are substantially similar to the above.
  • Notwithstanding the above, we do not prohibit:

    • Cross promotion that is not triggered by payment to a third party
    • Transferring admin or moderation responsibilities for a page or group to another user based on their interest in the page or group, rather than an exchange of value.
    • Posting or sharing clearly identified Branded Content.


Cybersecurity

Policy Rationale

We recognize that the safety of our users includes the security of their personal information, accounts, profiles and other Meta entities they may manage, as well as our products and services more broadly. Attempts to gather sensitive personal information or engage in unauthorized access by deceptive or invasive methods are harmful to the authentic, open and safe atmosphere that we want to foster.

We do not allow:

Attempts to compromise or access accounts via unauthorized means, including:

  • Accessing accounts, profiles, or other Meta entities other than one’s own through deceptive means or without explicit permission from the account, profile, or entity owner.
  • Obtaining, acquiring or requesting another user’s login information, personal information, or other sensitive user information for the purpose of unauthorized access, including through the following tactics:
    • Phishing, defined as the practice of creating communications or websites that are designed to look like more trusted or reputable communications or websites for the purpose of fraudulently acquiring sensitive user information.
    • Social Engineering, such as repeated or consistent attempts to harvest or acquire the answers to common account or password recovery questions.
    • Malware, Greyware, Spyware or other malicious code, as described below.

Attempts to share, develop, host, or distribute malicious or harmful code, including:

  • Encouraging or deceiving users to download or run files, apps, or programs that will compromise a user’s online or data security, including, but not limited to:

    • Malware, defined as code or software designed to harm or gain unauthorized access to systems. This includes programs designed to harm computer systems, as well as software designed to extract money from victims, like ransomware.
    • Spyware, defined as code or software that collects data on users and sends it to third parties without the informed consent of the user, or that uses the data for illicit purposes (e.g., sextortion, blackmail, illicit access to systems).
    • Greyware, defined as code or software which detracts from the use of hardware or software and may be difficult to remove from a computer system or network.
  • Creating, sharing or hosting malicious software including browser extensions and mobile applications, on or off the platform that put our users or products and services at risk.

  • Sharing or advertising software or products that enable people to circumvent security systems, including software that encourages hacking of software, passwords, or credentials

  • Providing online infrastructure, including web hosting services, domain name system servers and ad networks that enables abusive links such that a majority of those links on our services violate the spam or cybersecurity sections of the Community Standards.


Inauthentic Behavior

Policy Rationale

In line with our commitment to authenticity, we don't allow people to misrepresent themselves on our services, use fake accounts, artificially boost the popularity of content, or engage in behaviors designed to enable other violations under our Community Standards. Inauthentic Behavior refers to a variety of complex forms of deception, performed by a network of inauthentic assets controlled by the same individual or individuals, with the goal of deceiving Meta or our community or to evade enforcement under the Community Standards.

When adversarial threat actors use fake accounts to engage in sophisticated inauthentic tactics in order to influence public debate, they engage in what we’ve defined as Coordinated Inauthentic Behavior: coordinated efforts to manipulate public debate for a strategic goal, in which fake accounts are central to the operation. This violating behavior receives a more severe and often bespoke response, in keeping with the more substantial and sophisticated effort to deceive. Whenever possible, we share our findings about networks of Coordinated Inauthentic Behavior in our Quarterly Adversarial Threat Reports, found here. These reports are not meant to cover the entire universe of enforcements under the Inauthentic Behavior policy, but they help inform our community’s understanding of the evolving nature of the threats we face in this space.

While Inauthentic Behavior is often associated with civic or political content, and we are committed to preventing Inauthentic Behavior in the context of elections, these enforcement actions and standards apply regardless of content, political or otherwise. This policy is intended to protect the authenticity of debate and discussion on our services, and to create a space where people can trust the people and communities they interact with.

We do not allow:

  • The creation, use, or claimed use of Inauthentic Meta Assets (Accounts, Pages, Groups, etc.) in order to:

    • Deceive Meta or our users about:

      • The identity, purpose, or origin of an audience or the entity that they represent;
      • The popularity of content or assets on our services; or
      • A Meta asset’s ownership or control network

    • Evade enforcement under the Community Standards

    • Misuse Meta reporting systems to harass, intimidate or silence others

  • Engaging in complex deception through the use of Meta Assets, including:

    • Inauthentic Distribution: Using a connected network of inauthentic Meta assets to increase the distribution of content, in order to mislead Meta or its users about the popularity of the content in question.
    • Inauthentic Audience Building: Using inauthentic Meta assets to increase the viewership or following of network assets, in order to mislead Meta or its users about the origin, ownership or purpose of an asset or assets.
    • Foreign Inauthentic Behavior: Foreign entities using Inauthentic Meta assets to falsely represent a domestic or local voice, in order to deceive an audience about the identity, purpose or origin of the entity they represent.
    • Inauthentic Engagement: Using a connected network of inauthentic Meta assets to deliver substantial quantities of fake engagement in ways designed to look authentic, in order to deceive Meta and its users about the popularity of content.
    • Substantially Similar Deceptions: Other substantially similar claimed or actual efforts by relatively sophisticated, connected networks of inauthentic Meta assets to deceive Meta or its users about the origin, popularity, or purpose of content.

For the following Community Standards, we require additional information and/or context to enforce:

We do not allow:

  • Entities to engage in, or claim to engage in Coordinated Inauthentic Behavior, defined as particularly sophisticated forms of Inauthentic Behavior where inauthentic accounts are central to the operation and operators:

    • Use adversarial methods to evade detection or appear authentic; and
    • Use a variety of adversarial and inauthentic techniques to achieve overarching strategic objectives; and
    • Primarily seek to manipulate public debate.
  • Entities to engage in, or claim to engage in Foreign Interference, defined as Coordinated Inauthentic Behavior where the network operators are not located in the same country as the audience the operation targets.

  • Entities to engage in, or claim to engage in Government Interference, defined as Coordinated Inauthentic Behavior where the operation is attributable to a government actor.

  • Governments that have instituted sustained blocks of social media to use their official departments, agencies, and embassies to deny the use of force or violent events in the context of an attack against the territorial integrity of another state in violation of Article 2(4) of the UN charter.


Misinformation

Policy Rationale

Misinformation is different from other types of speech addressed in our Community Standards because there is no way to articulate a comprehensive list of what is prohibited. With graphic violence or hate speech, for instance, our policies specify the speech we prohibit, and even people who disagree with those policies can follow them. With misinformation, however, we cannot provide such a line. The world is changing constantly, and what is true one minute may not be true the next minute. People also have different levels of information about the world around them, and may believe something is true when it is not. A policy that simply prohibits “misinformation” would not provide useful notice to the people who use our services and would be unenforceable, as we don’t have perfect access to information.

Instead, our policies articulate different categories of misinformation and try to provide clear guidance about how we treat that speech when we see it. For each category, our approach reflects our attempt to balance our values of expression, safety, dignity, authenticity, and privacy.

We remove misinformation where it is likely to directly contribute to the risk of imminent physical harm. We also remove content that is likely to directly contribute to interference with the functioning of political processes. In determining what constitutes misinformation in these categories, we partner with independent experts who possess knowledge and expertise to assess the truth of the content and whether it is likely to directly contribute to the risk of imminent harm. This includes, for instance, partnering with human rights organizations with a presence on the ground in a country to determine the truth of a rumor about civil conflict.

For all other misinformation, we focus on reducing its prevalence or creating an environment that fosters a productive dialogue. We know that people often use misinformation in harmless ways, such as to exaggerate a point (“This team has the worst record in the history of the sport!”) or in humor or satire (“My husband just won Husband of the Year.”) They also may share their experience through stories that contain inaccuracies. In some cases, people share deeply-held personal opinions that others consider false or share information that they believe to be true but others consider incomplete or misleading.

Recognizing how common such speech is, we focus on slowing the spread of hoaxes and viral misinformation, and directing users to authoritative information. As part of that effort, we partner with third-party fact checking organizations to review and rate the accuracy of the most viral content on our platforms (see here to learn more about how our fact-checking program works). We also provide resources to increase media and digital literacy so people can decide what to read, trust, and share themselves. We require people to disclose, using our AI-disclosure tool, whenever they post organic content with photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. We may also add a label to certain digitally created or altered content that creates a particularly high risk of misleading people on a matter of public importance.

Finally, we prohibit content and behavior in other areas that often overlap with the spread of misinformation. For example, our Community Standards prohibit fake accounts, fraud, and coordinated inauthentic behavior.

As online and offline environments change and evolve, we will continue to evolve these policies. Accounts that repeatedly share the misinformation listed below may, in addition to having action taken on that content in accordance with this policy, receive decreased distribution, limitations on their ability to advertise, or be removed from our platforms. Additional information on what happens when Meta removes content can be found here.

Guidelines

Misinformation we remove:

We remove the following types of misinformation:

I. Physical Harm or Violence

We remove misinformation or unverifiable rumors that expert partners have determined are likely to directly contribute to a risk of imminent violence or physical harm to people. We define misinformation as content with a claim that is determined to be false by an authoritative third party. We define an unverifiable rumor as a claim whose source expert partners confirm is extremely hard or impossible to trace, for which authoritative sources are absent, where there is not enough specificity for the claim to be debunked, or where the claim is too implausible or too irrational to be believed.

We know that sometimes misinformation that might appear benign could, in a specific context, contribute to a risk of offline harm, including threats of violence that could contribute to a heightened risk of death, serious injury, or other physical harm. We work with a global network of non-governmental organizations (NGOs), not-for-profit organizations, humanitarian organizations, and international organizations that have expertise in these local dynamics.

In countries experiencing a heightened risk of societal violence, we work proactively with local partners to understand which false claims may directly contribute to a risk of imminent physical harm. We then work to identify and remove content making those claims on our platform. For example, in consultation with local experts, we may remove out-of-context media falsely claiming to depict acts of violence, victims or perpetrators of violence, weapons, or military hardware.

II. Harmful Health Misinformation

We consult with leading health organizations to identify health misinformation likely to directly contribute to imminent harm to public health and safety. The harmful health misinformation that we remove includes the following:

  • Misinformation about vaccines. We remove misinformation about vaccines when public health authorities conclude that the information is false and likely to directly contribute to imminent vaccine refusals. Such claims include:

  • Vaccines cause autism (Ex: “Increased vaccinations are why so many kids have autism these days.”)

  • Vaccines cause Sudden Infant Death Syndrome (Ex: “Don’t you know vaccines cause SIDS?”)

  • Vaccines cause the disease against which they are meant to protect, or cause the person receiving the vaccine to be more likely to get the disease (Ex: “Taking a vaccine actually makes you more likely to get the disease since there’s a strain of the disease inside. Beware!”)

  • Vaccines or their ingredients are deadly, toxic, poisonous, harmful, or dangerous (Ex: “Sure, you can take vaccines, if you don’t mind putting poison in your body.”)

  • Natural immunity is safer than vaccine-acquired immunity (Ex: “It’s safest to just get the disease rather than the vaccine.”)

  • It is dangerous to get several vaccines in a short period of time, even if that timing is medically recommended (Ex: “Never take more than one vaccine at the same time, that is dangerous - I don’t care what your doctor tells you!”)

  • Vaccines are not effective at preventing the disease against which they purport to protect. However, for the COVID-19, flu, and malaria vaccines, we do not remove claims that those vaccines are not effective in preventing someone from contracting those viruses. (Ex’s: Remove – “The polio vaccine doesn’t do anything to stop you from getting the disease”; Remove – “Vaccines actually don’t do anything to stop you from getting diseases”; Allow – “The vaccine doesn’t stop you from getting COVID-19, that’s why you still need to socially distance and wear a mask when you’re around others.”)

  • Acquiring measles cannot cause death (requires additional information and/or context) (Ex: “Don’t worry about whether you get measles, it can’t be fatal.”)

  • Vitamin C is as effective as vaccines in preventing diseases for which vaccines exist.

  • Misinformation about health during public health emergencies. We remove misinformation during public health emergencies when public health authorities conclude that the information is false and likely to directly contribute to the risk of imminent physical harm, including by contributing to the risk of individuals getting or spreading a harmful disease or refusing an associated vaccine. We identify public health emergencies in partnership with global and local health authorities.

  • Promoting or advocating for harmful miracle cures for health issues. These include treatments where the recommended application, in a health context, is likely to directly contribute to the risk of serious injury or death, and the treatment has no legitimate health use (ex: bleach, disinfectant, black salve, caustic soda).

III. Voter or Census Interference

In an effort to promote election and census integrity, we remove misinformation that is likely to directly contribute to a risk of interference with people’s ability to participate in those processes. This includes the following:

  • Misinformation about the dates, locations, times, and methods for voting, voter registration, or census participation.
  • Misinformation about who can vote, qualifications for voting, whether a vote will be counted, and what information or materials must be provided in order to vote.
  • Misinformation about whether a candidate is running or not.
  • Misinformation about who can participate in the census and what information or materials must be provided in order to participate.
  • Misinformation about government involvement in the census, including, where applicable, that an individual's census information will be shared with another (non-census) government agency.
  • False or unverified claims that the U.S. Immigration and Customs Enforcement (ICE) is at a voting location.
  • Explicit false claims that people will be infected by COVID-19 (or another communicable disease) if they participate in the voting process.
  • False claims about current conditions at a U.S. voting location that would make it impossible to vote, as verified by an election authority.

We have additional policies intended to cover calls for violence, the promotion of illegal participation, and calls for coordinated interference in elections, which are represented in other sections of our Community Standards.

For the following content, we include an informative label:

Manipulated Media

Media can be edited in a variety of ways. In many cases, these changes are benign, such as content being cropped or shortened for artistic reasons or music being added. In other cases, the manipulation is not apparent and could mislead.

  • Content Digitally Created or Altered that May Mislead. For content that does not otherwise violate the Community Standards, we may place an informative label on the face of content – or reject content submitted as an advertisement – when the content is a photorealistic image or video, or realistic sounding audio, that was digitally created or altered and creates a particularly high risk of materially deceiving the public on a matter of public importance.


Memorialization

Policy Rationale

When someone passes away, friends and family can request that we memorialize their accounts. Once memorialized, the word "Remembering" appears above the name on the person's profile to show that the account is now a memorial site. Memorializing accounts helps create a space for remembering loved ones and protects against attempted logins and fraudulent activity. To respect the choices someone made while alive, we aim to preserve their account without changes after they pass away.

On Facebook, we have made it possible for people to identify a legacy contact to look after their account after they pass away. To support the bereaved, in some instances, we may remove or change certain content when the legacy contact or family members request it.

Requests to memorialize Facebook or Instagram accounts that belong to deceased users can be made with the requisite information by:

  • Facebook friends
  • Instagram followers
  • Family members with the correct documentation

For victims of murder and suicide, we will remove the following content on Facebook and Instagram if it appears on the deceased’s profile photo, cover photo, or among recent timeline posts when requested by a family member of the deceased. We may also remove this content on Facebook when requested by the Facebook legacy contact:

  • Content related to the deceased's death
  • Praise or support for the death, disease, or harm of the deceased
  • Visual depiction of the object used in the deceased’s death
  • Imagery of the convicted or alleged murderer of the deceased
  • Relationship status or friend status of the convicted or alleged murderer of the deceased

For the following Community Standards, we require additional information and/or context to make the following changes when requested by an authorized representative of the deceased and, on Facebook only, by the legacy contact:

  • Remove violating comments on a memorialized profile, which would typically require the individual to self-report so that we know they are unwanted
  • Change the deceased individual's privacy settings from public to friends-only when there is harmful content on the profile
  • Change the name on the profile if it violates our Community Standards, in accordance with our Authentic Name policy
  • Add friends or followers to the profile if they were removed following the deceased’s passing


Third-Party Intellectual Property Infringement

Policy Rationale

Meta takes intellectual property rights seriously and is committed to protecting these rights while promoting expression, creativity, and innovation in a space built on community trust.

For this reason, we enforce a policy against posting content that violates someone else’s intellectual property rights, including copyright, trademark, or other legal rights. We publish information about the intellectual property reports we receive in our Intellectual Property Transparency Report.

To report content that you feel may infringe upon your intellectual property rights, please visit our Intellectual Property Help Center, visit our Business Protection page, or consider applying for access to Brand Rights Protection.

Upon receipt of a report from a rights holder or an authorized representative, we will remove or restrict content that engages in:

  • Copyright infringement
  • Trademark infringement
  • The sale or promotion of counterfeit goods
  • False affiliation with brand(s)
  • Any other infringement or violation of intellectual property rights or other proprietary rights

We also remove content that:

  • Contains signs that suggest the content is selling or promoting counterfeits of branded goods
  • Contains off-platform links to websites dedicated to the sale or promotion of suspected counterfeit goods
  • Sells or promotes suspected counterfeit goods that are identical or highly similar to content that has been previously reported as counterfeit by a rightsholder
  • Shares, promotes, or facilitates suspected copyright infringement

We remove accounts that:

  • Engage in repeated violations of this policy.

We allow content that is authorized by the rights holder and follows established fair use principles.


User Requests

Policy Rationale

Meta responds to requests for account removal in accordance with applicable law and our terms of service. We carefully review every request we receive, and we may reject certain requests or require additional clarification before acting on them.

We comply with requests for removal of:

  • Accounts when requested by the account owner
  • Accounts belonging to an incapacitated individual when requested by an authorized representative with Proof of Authority and medical documentation confirming incapacitation


Additional Protection of Minors

We comply with:

  • Requests for removal of an underage account.
  • Government requests for removal of non-sexual child abuse imagery.
  • Legal guardian requests for removal of attacks on unintentionally famous minors.

For the following Community Standards, we require additional information and/or context to enforce:

We may remove content created for the purpose of identifying a private minor if there may be a risk to the minor’s safety, when requested by users, governments, law enforcement, or external child safety experts.
