In December, we will start enforcing updates to the Twitter Rules announced last month to reduce hateful and abusive content on Twitter. Through our policy development process, we’ve taken a collaborative approach to develop and implement these changes, including working in close coordination with experts on our Trust and Safety Council.
New Rules on Violence and Physical Harm
Specific threats of violence, or wishing for serious physical harm, death, or disease to befall an individual or group of people, are in violation of our policies. Our new rules expand this policy to cover additional types of related content, including:
Accounts that affiliate with organizations that use or promote violence against civilians to further their causes. Groups included in this policy will be those that identify as such or engage in activity — both on and off the platform — that promotes violence. This policy does not apply to military or government entities and we will consider exceptions for groups that are currently engaging in (or have engaged in) peaceful resolution.
Content that glorifies violence or the perpetrators of a violent act. This includes celebrating any violent act in a manner that may inspire others to replicate it, or any violence in which people were targeted because of their membership in a protected group. We will require offending Tweets to be removed, and repeated violations will result in permanent suspension.
Expanding our Rules to Include Related Content
Our hateful conduct policy and rules against abusive behavior prohibit promoting violence against, directly attacking, or threatening other people on the basis of their group characteristics, as well as engaging in abusive behavior that harasses, intimidates, or uses fear to silence another person's voice. We are broadening these policies to cover additional types of related content and conduct, including:
Any account that abuses or threatens others through its profile information, including its username, display name, or profile bio. If an account's profile information includes a violent threat or multiple slurs, epithets, racist or sexist tropes, incites fear, or reduces someone to less than human, the account will be permanently suspended. We plan to develop internal tools to help us identify violating accounts and supplement user reports.
Hateful imagery will now be considered sensitive media under our media policy. We consider hateful imagery to be logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, or ethnicity/national origin. If this type of content appears in header or profile images, we will now accept profile-level reports and require account owners to remove any violating media.
Today, we are starting to enforce these policies across Twitter. In our efforts to be more aggressive here, we may make some mistakes and are working on a robust appeals process. We’ll evaluate and iterate on these changes in the coming days and weeks, and will keep you posted on progress along the way.