Policies

Microsoft’s content and conduct policies help promote a positive and safe experience for users by explaining what behaviors and content are not allowed on our services.

Overview

Microsoft’s consumer online products, websites, and services have rules about what types of content and conduct are not allowed. The Microsoft Services Agreement has a Code of Conduct that explains what is not allowed and what to expect when accessing services like Xbox and Teams. Prohibited content and conduct are defined below. These rules apply regardless of whether content is created by users or by generative AI applications. When reviewing content and conduct and enforcing these policies, we carefully consider values such as privacy, freedom of speech, and access to information. We also make exceptions in limited circumstances when content that may otherwise violate our policies is important for newsgathering, education, science, research, art, or other societal values.

Additional guidelines

Additional policies and community standards may apply to some services.

Moderation and enforcement

If you break these rules, we may take a variety of enforcement actions, which may differ by service.

Abuse of our Platform and Services

Do not misuse any of Microsoft's services. Do not use any Microsoft service to harm, degrade, or negatively affect the operations of our or others’ networks, services, or any other infrastructure.

Examples of violative material include:

  • Gaining or attempting to gain unauthorized access to any secure systems such as accounts, computer systems, networks, or any other services or infrastructure.
  • Deploying or attempting to deploy software or code of any kind on unauthorized systems that may negatively affect the operations of our or others’ networks, services, or any other infrastructure.
  • Disrupting or attempting to disrupt Microsoft’s or others’ services or any other systems through any activities, including but not limited to denial-of-service attacks. 
  • Attempting to or successfully bypassing or circumventing access to, usage, or availability of the services (e.g., attempting to "jailbreak" an AI system or unauthorized scraping). This includes attempts to subvert enforcement actions placed on your account.

Bullying and Harassment

Microsoft seeks to create a safe and inclusive environment where you can engage with others and express yourself free from abuse. We do not allow content or conduct that targets a person or group with abusive behavior.

This includes any action that:

  • Harasses, intimidates, or threatens others.
  • Hurts people by insulting or belittling them.
  • Continues contact or interaction that is unwelcome, especially where contact causes others to fear injury.

Child Sexual Exploitation and Abuse

Microsoft is committed to protecting children from online harm. We do not allow the exploitation of children or any harm or threat of harm to children on our services. This includes banning the use of our services to further child sexual exploitation and abuse (CSEA). CSEA is any content or activity that harms or threatens to harm a child through exploitation, trafficking, extortion, endangerment, or sexualization. This includes creating or sharing visual media that contains sexual content that involves or sexualizes a child, or sharing links to applications or tools that do so. 

CSEA also includes grooming, which is the inappropriate interaction with children by contacting, private messaging, or talking with a child to ask for or offer sex or sexual content, sharing content that is sexually suggestive, or planning to meet with a child for sexual encounters. Other forms of exploitation are also prohibited, such as convincing a child to share sexual content and then threatening to share the content with others unless the child provides money or something else of value. A child is anyone under 18 years old.

When we become aware of content or conduct violating these policies, Microsoft reports it to the National Center for Missing and Exploited Children (NCMEC).

Coordination of Harm

Our products and services should never be used to hurt people, including to cause physical harm. Cooperating or making specific plans with others with the shared purpose of harming someone physically is not allowed.

Deceptive Generative AI Election Content

Microsoft seeks to foster a trustworthy information environment where voters are empowered with the information they need to vote for the candidates and issues of their choosing. Microsoft prohibits the creation or dissemination of deceptive generative AI election content. This includes AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates.

Exposure of Personal Information

Do not use Microsoft products and services to share personal or confidential information about a person without authorization.  

Prohibited activities may include sharing: 

  • Personal data, such as location, that may endanger someone else. 
  • Account usernames, passwords, or other information used for account credentialing.  
  • Government-issued information such as Social Security Numbers or passport numbers. 
  • Private financial information including bank account numbers and credit card numbers, or any other information which facilitates fraudulent transactions or identity theft.  
  • Health information including healthcare records. 
  • Confidential employment records.

Graphic Violence and Human Gore

Real-world violent content can be disturbing, offensive, or even traumatic for users. We also understand that some violent or graphic images may be newsworthy or important for educational or research purposes, and we consider these factors when reviewing content and enforcing our policies.

We do not permit any visual content that promotes real-world violence or human gore.

This may include images or videos that show:

  • Real acts of serious physical harm or death against a person or group.
  • Violent domestic abuse against a real person or people.
  • Severe effects of violence or physical trauma, such as internal organs or tissues, burnt remains of a person, severed limbs, or beheading.

Hateful Conduct

Microsoft wants to create online spaces where everyone can participate and feel welcome.

We do not allow hateful conduct or content that attacks, insults, or degrades someone because of a protected trait, such as their race, ethnicity, gender, gender identity, sexual orientation, religion, disability status, or caste. In enforcing this policy, we work to ensure that people can use our services to document, research, or express opposition to hateful conduct.

Hateful conduct includes:

  • Encouraging or supporting violence against someone because of a protected trait.
  • Dehumanizing statements, such as comparing someone to an animal or other non-human, because of a protected trait.
  • Promoting harmful stereotypes about people because of a protected trait.
  • Calling for segregation, exclusion, or intimidation of people because of their protected trait.
  • Symbols, logos, or other images that are widely recognized as communicating hatred or racial superiority.

Intellectual Property Infringement

Microsoft respects the intellectual property rights of others, and we expect you to do the same. To the extent certain Microsoft features allow for the creation or upload of user-generated content, Microsoft does not allow posting, sharing, or sending any content that violates or infringes someone else’s copyrights, trademarks, or other intellectual property rights.

Non-Consensual Intimate Imagery and Intimate Extortion

Microsoft does not allow the sharing or creation of sexually intimate images of someone without their permission—also called non-consensual intimate imagery, or NCII. This includes photorealistic NCII content created or altered using technology, or sharing or promoting applications that use generative AI to create sexually intimate images of people without their consent. We do not allow NCII to be distributed on our services, nor do we allow any content that praises, supports, or requests NCII.

Additionally, Microsoft does not allow any threats to share or publish NCII—also called intimate extortion. This includes demanding money, images, or other things of value from a person in exchange for not making the NCII public.

Sexual Solicitation

Microsoft does not allow people to use its products and services to ask for or offer sex, sexual services, or sexual content in exchange for money or something else of value. 

Spam

Microsoft does not tolerate any form of spam on our platforms or services. Spam is any content that is excessively posted, repetitive, untargeted, unwanted, or unsolicited. 

Examples of prohibited spam practices include:

  • Sending unsolicited messages to users or posting comments that are commercial, repetitive, or deceptive. 
  • Using titles, thumbnails, descriptions, or tags to mislead users into believing the content is about a different topic or category than it is.
  • Sending unwanted or unsolicited bulk email, postings, contact requests, SMS messages, instant messages, or similar electronic communications.
  • Using deceptive or abusive tactics to attempt to deceive or manipulate ranking or other algorithmic systems, including link spamming, social media schemes, cloaking, or keyword stuffing. 

Scams, Fraud, and Phishing

Microsoft does not tolerate any form of scams, fraud, phishing, or deceptive practices, including impersonation, on our platforms or services.

Scams, fraud, and phishing include any intentional act or omission designed to deceive others to generate personal or financial benefit. Additionally, phishing includes sending emails or other electronic communications to fraudulently or unlawfully induce recipients to reveal personal or sensitive information. 

Examples of scams, fraud, and phishing include content that: 

  • Promises viewers a legitimate or relevant offer but instead redirects them to a different, unrelated destination off-site. 
  • Offers cash gifts, “get rich quick” schemes, pyramid schemes, or other fraudulent or illegal activities.  
  • Sells engagement metrics such as views, likes, comments, or any other metric on the platform. 
  • Uses false or misleading header information or deceptive subject lines. 
  • Fails to provide a valid physical postal address of the sender or a clear and conspicuous way to opt out of receiving future emails. 
  • Attempts to deceive users or audiences into visiting websites intended to facilitate the spread of harmful malware or spyware.  
  • Includes fake login screens or alert emails used to trick and steal personal information or account login details. 

Suicide and Self-Injury

We work to remove any content about suicide and self-harm that could be dangerous. We also strive to ensure that people may use our services to talk about mental health, share their stories, or join groups with others who have been affected by suicide or self-injury.

Prohibited content includes:

  • Supporting general ways people can end their lives, such as by firearm, hanging, or drug overdose.
  • Encouraging someone to take their life.
  • Showing images of real or attempted suicide.
  • Praising those who have died by suicide for taking their own life.

Self-injury content demonstrates, praises, or inspires physical harm to oneself, including through cutting, burning, or carving one’s skin. It also includes content that encourages or instructs on eating disorders or systematic over- or under-eating.

Violent Extremism and Terrorism

At Microsoft, we recognize that we have an important role to play in preventing violent extremists, including terrorists and terrorist groups, from abusing online platforms. We do not allow content that promotes or glorifies violent extremists, helps them to recruit, or encourages or enables their activities. We look to the United Nations Security Council’s Consolidated List to identify terrorists or terrorist groups. Violent extremists include people who embrace an ideology of violence or violent hatred towards another group.

In addressing violent extremist content, we also work to ensure that people can use our services to talk about violent extremism or terrorism, share news or research about it, or express opposition to it.

Trafficking

Our services should never be used to exploit people, endanger them, or otherwise threaten their physical safety.

Microsoft does not allow any kind of human trafficking on our services. Trafficking happens when someone exploits someone else for personal gain by depriving them of their human rights.

Trafficking commonly includes three parts:

  1. Someone is moved, relocated, paid for, or abducted.
  2. Through the use or threat of force or coercion, or through lies or trickery.
  3. For money, status, or some other kind of gain.

Trafficking includes forcing people to work, marry, engage in sexual activity, or undergo medical treatments or operations without their consent and is not limited to any age or background.

Violent Threats, Incitement, and Glorification of Violence

Microsoft does not permit content that encourages violence against other people through violent threats or incitement.

  • Threats of violence are words that show a specific intention to cause someone serious physical harm. Slang or obviously exaggerated remarks usually do not count as violent threats.
  • Incitement is material that calls for, provokes, or is likely to result in serious physical harm to a person or group.

We also do not allow the glorification of violence through content that celebrates or supports real acts of violence causing serious physical harm to people or groups, including violence that happened in the past.

Virus, Spyware, or Malware

Do not use any Microsoft products and services to host, run, transmit, or otherwise distribute harmful software such as viruses, spyware, and malware. Do not host, run, transmit, or otherwise distribute harmful software that damages or impairs the operation of Microsoft’s or third parties' networks, infrastructure, servers, or end-user devices.  

Examples of prohibited activities include:

  • Transmitting software to damage Microsoft’s or another party’s device. 
  • Embedding code in software to track and log users' activities. 
  • Downloading software without the consent of an end user or using fraudulent or misleading means to deceive users into clicking links, visiting websites, or downloading software.