HANDOUT
Keeping Businesses and People Safe on Facebook
01 Industry standards for brand safety
Brand safety is a challenge for the entire advertising industry. The world is increasingly connected, yet the rise of dangerous, hateful, disruptive and fake content online threatens our global community.
Facebook is a founding member of the Global Alliance for Responsible Media (GARM) and a member of the Trustworthy Accountability Group (TAG) and the Media Rating Council (MRC), organizations that work to make online platforms safer for businesses and people.
These organizations represent an industry-first effort to protect brands' images and reputations from the negative or damaging influence of questionable or inappropriate content when those brands advertise online.
This effort is essential to create a safer digital media environment that enriches society through
content, communications and commerce.
01 Resources
About Brand Safety on Facebook, Instagram, WhatsApp and Audience Network
02 Community Standards
Our standards cover more than 20 distinct areas, including violence and criminal behavior, safety, integrity and authenticity, and objectionable content.
Keeping people safe and informed
Currently, more than 35,000 people work on safety and security across the company.
Our policy team develops the standards for what is allowed on Facebook. They seek guidance from experts in many fields, particularly government, academia and human rights. Based on that feedback and the changing behavior we see on our platform, our standards evolve over time.
To apply our policies at scale, we rely on our technical teams to build the AI and machine learning classifiers that help proactively find content before anyone reports it, and we've made great progress. In Q1 2020, for example, our systems proactively detected and removed over 99% of the content that depicted terrorism or graphic violence before anyone in our community reported the violation.
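The mechanics of proactive detection can be pictured as a thresholded pipeline. Below is a minimal Python sketch, purely illustrative: the classifier, thresholds and labels are hypothetical stand-ins, not Facebook's actual system. Content is scored before anyone reports it; high-confidence violations are actioned automatically and borderline cases are queued for human review.

```python
from dataclasses import dataclass

# Illustrative thresholds; a real system would tune these per policy area.
AUTO_ACTION_THRESHOLD = 0.98
HUMAN_REVIEW_THRESHOLD = 0.60

@dataclass
class Content:
    content_id: str
    text: str

def violation_probability(content: Content) -> float:
    """Stand-in for a trained ML classifier that scores policy-violation risk.

    Here we just flag a hypothetical keyword so the sketch runs end to end.
    """
    return 0.99 if "graphic-violence" in content.text else 0.05

def proactive_review(content: Content) -> str:
    """Route content before any user report is received."""
    score = violation_probability(content)
    if score >= AUTO_ACTION_THRESHOLD:
        return "removed"       # high-confidence violation: act immediately
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # borderline: queue for a human reviewer
    return "allowed"

print(proactive_review(Content("c1", "clip tagged graphic-violence")))  # removed
print(proactive_review(Content("c2", "holiday photos")))                # allowed
```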
How we review content
Handling hate speech
We don't allow hate speech on Facebook, and our Community Standards clearly outline this.
We define hate speech as an attack against a person or group of people based on protected characteristics: race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.
We're also very clear about the types of attacks we prohibit, which fall into these three categories:
Calls to violence or dehumanizing speech
For example, people aren't permitted to say "Let’s go kill [a protected characteristic] people."
Statements of contempt or inferiority
For example, people aren't permitted to say “This [protected characteristic] group of people are
stupid.”
Statements of exclusion or calls for segregation
For example, people aren’t permitted to say “[protected group of people] don’t belong here.”
Misinformation
To prevent the spread of misinformation, we follow a three-part framework: remove, reduce and inform.
First, we remove content that violates our Community Standards, including fake accounts.
Next, if fact-checking reveals something is false or partly false, we reduce its distribution. We also
reduce the distribution of content from Pages and groups that repeatedly share misinformation.
Lastly, we've introduced more prominent warning labels to inform people.
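As a compact illustration of the remove, reduce and inform framework, here is a hypothetical Python sketch. The verdict names and action flags are our own labels for exposition, not Facebook's internal interfaces.

```python
from enum import Enum

class Verdict(Enum):
    VIOLATES_STANDARDS = "violates_standards"  # e.g. hate speech, fake account
    FALSE = "false"                            # rated false by fact-checkers
    PARTLY_FALSE = "partly_false"
    NO_RATING = "no_rating"

def apply_framework(verdict: Verdict) -> dict:
    """Map a review outcome to remove / reduce / inform actions."""
    if verdict is Verdict.VIOLATES_STANDARDS:
        # Remove: violating content (including fake accounts) comes down.
        return {"remove": True, "reduce_distribution": False, "warning_label": False}
    if verdict in (Verdict.FALSE, Verdict.PARTLY_FALSE):
        # Reduce and inform: the content stays up but is demoted and labeled.
        return {"remove": False, "reduce_distribution": True, "warning_label": True}
    return {"remove": False, "reduce_distribution": False, "warning_label": False}

print(apply_framework(Verdict.PARTLY_FALSE))
```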
Community Standards Enforcement Report
To show how we enforce our Community Standards, we created the Community Standards Enforcement Report. We used to release this report twice a year, but in August 2020 we moved to a quarterly cadence so that we (and others) can hold our company accountable.
We’ve devoted a great deal of effort to devising a method of measurement and have consulted academic experts in fields such as criminal justice. Two years ago, we began to publish in the Community Standards Enforcement Report the very same metrics we use internally. We think it’s important to be transparent in this way, so people can hold us accountable for our progress.
02 Resources
Publisher eligibility
Do not post hate speech
How Ads About Social Issues, Elections or Politics Are Reviewed (With Examples)
How We Review Community Content
Do not post content that encourages direct violence or criminality
Our Advertising Principles
03 External organizations
Mark Zuckerberg has said publicly that Facebook should avoid making important decisions about free expression and safety unilaterally.
That’s why we created the Oversight Board. The board exercises independent judgment on some of the
most difficult and significant content decisions that we face. To create a board and select the members,
we sought input from both critics and supporters of Facebook and hosted a global consultation process
with more than 650 people in 88 different countries.
The members reflect a wide range of views and experiences. They include former newspaper editors
from the UK and Indonesia, former judges from Hungary and Colombia, ex-government officials from
Israel and Taiwan, and human rights advocates from Pakistan and West Africa. We expect the board to make some decisions we won't agree with, but that's precisely why it exists. Its members exercise genuinely independent judgment and will play an increasingly important role in setting precedent and direction for content policy at Facebook.
US Civil Rights Audit
The Civil Rights Audit is another approach to accountability that Facebook has embraced, and the most
recent report was released in July. The audit began in May 2018 when we voluntarily agreed to be the
first social media company to undergo an audit of this nature.
Laura W. Murphy, a well-known and highly respected civil rights advocate, led the audit alongside the
notable civil rights law firm, Relman Colfax. Laura isn't a Facebook employee, but we gave her
unprecedented access to our teams, systems and processes.
As Laura wrote in the introduction to the report, the audit was “meaningful and effective, leading to real
changes at Facebook.”
03 Resources
Oversight board
Our civil rights audit
__________________________________________________________________________________________________
04 Safety controls for advertisers
Our Community Standards regulate the content individuals can share on Facebook and Instagram, and we work to ensure that only apps, websites and Facebook Pages that comply with our policies can be part of our placements. In addition to these back-end controls, which prevent the appearance of content that violates our policies, we offer controls that prevent your ads from running alongside certain types of content that might not be suitable for your brand within Audience Network, Facebook Instant Articles and Facebook in-stream video.
These controls are specific to advertisers. Each brand has a different approach to, and tolerance for, what it considers safe, so we offer controls that protect the reputation of each individual brand at its discretion.
Controls on feed
While we've heard the feedback and continue to listen, we currently believe that the best way to contribute to brand safety in feed environments is to prevent harmful content from appearing there in the first place.
That said, we've seen over the past few weeks that the risk of someone screen-grabbing your brand
next to objectionable content, which we call "screenshot risk," is not zero. This is why we offer
placement controls that enable advertisers to opt out of placements they're not comfortable with. For
example, if you so choose, you can opt out of RHS (right-hand side) ad placement.
We understand that zero tolerance doesn’t mean zero occurrence. While we lead the industry, a bad
screenshot can still happen.
We continue to evaluate this topic, and we're working to gather more detailed feedback about the systems and controls that might work best for our advertisers and our community.
Monetization policies
To keep our platform safe, our Partner Monetization Policies and Content Monetization Policies hold publishers and creators accountable for their Pages and the content they post and monetize.
Partner policies
For publishers, creators and third-party providers to utilize monetization tools on Facebook, they must
comply with a set of rules called Partner Monetization Policies.
Content policies
The content found in features and products that help creators and publishers earn money also has to follow certain rules. These are our high-level rules against sexual, violent, profane or hateful content. However, content appropriate for Facebook in general is not necessarily appropriate for monetization.
Brand safety features in Business Manager
In Business Manager, you can find an interface called Brand Safety. In this interface, you can access the
links to review our brand safety policies, including Community Standards, partner monetization and
content monetization. You can also access the following features:
● Overview: This provides a full view of your accounts and the controls applied to them.
● Controls: This is where you can apply brand safety controls to your ad account. They're applied to all existing and future campaigns. You can always add more restrictive controls to individual campaigns in Ads Manager, but you can't make them less restrictive (see the sketch after this list).
● Assets: In Assets, you have two options: block lists and publisher lists.
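To make the "more restrictive only" rule concrete, here is a minimal Python sketch. The level names and their ordering are our own illustration (loosely modeled on the full / standard / limited inventory filter tiers), not Facebook's API: the effective setting for a campaign is whichever of the account-level and campaign-level filters is stricter.

```python
from typing import Optional

# Inventory filters ordered from least to most restrictive (illustrative names).
FILTER_STRICTNESS = {
    "full_inventory": 0,
    "standard_inventory": 1,
    "limited_inventory": 2,
}

def effective_filter(account_filter: str, campaign_filter: Optional[str]) -> str:
    """A campaign may tighten the account-level filter but never loosen it."""
    if campaign_filter is None:
        return account_filter
    # Keep whichever of the two settings is more restrictive.
    return max(account_filter, campaign_filter, key=FILTER_STRICTNESS.__getitem__)

assert effective_filter("standard_inventory", "limited_inventory") == "limited_inventory"
assert effective_filter("limited_inventory", "full_inventory") == "limited_inventory"  # no loosening
```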
Brand safety controls
In the brand safety controls, you can make adjustments to:
Inventory filters: These enable you to control the type of content that appears alongside your ads in specific placements. On Facebook, filters exclude certain types of video and article content. On Audience Network, filters also exclude certain types of apps.
Block lists: These help ensure ads don't appear in places you don't consider safe for your brand or campaign. They can include Audience Network websites and apps, Facebook in-stream videos, Facebook Instant Articles and Instagram IGTV creators.
Next, for the in-stream video placement, topic exclusions enable you to prevent your ads from appearing in on-demand videos about certain topics. While we apply brand safety controls as effectively as possible, we can't guarantee that all content and publishers are compliant or aligned with your unique brand safety standards.
Also for the in-stream video placement, live stream exclusions control whether your ads can appear in partner live streams. We automatically exclude live streams from government and spiritual partners, and you can also choose to exclude live streams from all partners.
Content allow lists give advertisers the ability to work with trusted Facebook Marketing Partners to
review and customize lists of videos that are suitable for in-stream campaigns. This lets you control
where your ads appear in a more precise way. To create content allow lists, contact your Facebook
Marketing Partner.
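The interplay of block lists and allow lists can be summarized in a few lines of Python. This is a hypothetical sketch of the decision logic as the descriptions above suggest it, not a real interface: a block list always excludes, and an allow list, when one is in use, restricts delivery to its members.

```python
from typing import Optional, Set

def placement_allowed(publisher: str,
                      block_list: Set[str],
                      allow_list: Optional[Set[str]] = None) -> bool:
    """Decide whether an ad may run with a given publisher or creator."""
    if publisher in block_list:
        return False                    # block lists always win
    if allow_list is not None:
        return publisher in allow_list  # allow lists restrict to members
    return True

blocked = {"rival-news.example"}
print(placement_allowed("rival-news.example", blocked))                            # False
print(placement_allowed("trusted-app.example", blocked, {"trusted-app.example"}))  # True
```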
Levels of safety controls
Controls are available at business, ad account and campaign levels, so advertisers can choose the right
level to protect their campaigns.
CONTROLS AVAILABLE AT:
● Placement opt-out: campaign level
● Inventory filters: ad account and campaign levels
● Block lists: business, ad account and campaign levels
● Topic exclusions: ad account and campaign levels
● Publisher allow lists: business and ad account levels
● Content allow lists: ad account level
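The availability matrix above reads naturally as data. The short sketch below encodes it and checks whether a given control can be applied at a given level; the identifiers are illustrative labels for this handout, not API values.

```python
# Which brand safety controls are available at which levels (from the list above).
CONTROL_LEVELS = {
    "placement_opt_out":     {"campaign"},
    "inventory_filters":     {"ad_account", "campaign"},
    "block_lists":           {"business", "ad_account", "campaign"},
    "topic_exclusions":      {"ad_account", "campaign"},
    "publisher_allow_lists": {"business", "ad_account"},
    "content_allow_lists":   {"ad_account"},
}

def control_available(control: str, level: str) -> bool:
    """Check whether a brand safety control can be applied at a given level."""
    return level in CONTROL_LEVELS.get(control, set())

assert control_available("block_lists", "business")
assert not control_available("content_allow_lists", "campaign")
```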
Brand safety partners
Facebook has developed a scaled third-party brand safety ecosystem to help ensure that brand safety
controls and the tools we offer continue to serve the needs of advertisers.
Partners provide advertisers with solutions to help them manage their brand safety. Currently, partners can provide services for block lists and content allow lists in Facebook Business Manager.
Benefits of a scaled third-party brand safety ecosystem
Advertisers who work with third parties on brand safety benefit in several ways.
First, trust and credible brand safety controls. Partners provide independent, neutral measurement and verification solutions to help ensure companies have consistent measurement across their digital channels. They can also provide trustworthy additional content review. Advertisers often maintain long-term relationships with these partners, integrate the partner solutions into their own operations and trust them with their brands.
Second, industry experience is valuable to advertisers. Third-party brand safety companies have extensive knowledge of the field, so they can help scale brand safety across the industry.
Next is scalability, which has two aspects:
● Client service, support and consulting: Third-party brand safety companies have the capacity to scale and provide greater client service as your needs grow. Likewise, as Facebook releases more brand safety controls, partners can provide a higher level of service and recommend which controls to use in concert with other products.
● Consistency: Advertisers are comfortable with varying degrees of appropriateness, and trusted third parties offer choice and flexibility to ensure consistent brand safety management across platforms.
Fourth is client influence. Partners service our mutual clients who spend on various platforms. It’s
therefore in the best interest of the partners to integrate and build neutral solutions across the
ecosystem.
Next is accountability. A third-party brand safety program offers a greater level of assurance for
agencies, advertisers and Facebook. This enables our advertisers to integrate with these partners and
puts the responsibility on the partner instead.
Then, there is the idea of alternatives. Our brand safety partner program offers advertisers an alternative to managing their brand safety controls by themselves.
Lastly, there's roadmapping. We work closely with our partners, who are integrated with other platforms, which gives us a sense of competitive and ecosystem developments, including areas where we lag behind. This enables us to improve, so we can better serve our clients.
Find brand safety marketing partners
Currently, Facebook has partnerships with four companies to provide customized brand safety controls.
They are:
● IAS (Integral Ad Science)
● OpenSlate
● DoubleVerify
● Zefr
All four companies provide services for block list control, and some also offer management services for
content allow lists.
We're releasing the content allow list feature gradually in specific regions only, so it may not be available
yet for you. To find marketing partners that offer services for brand safety, do the following:
1. Go to Facebook for business.
2. Select Find a partner.
3. Scroll down the page and select Measurement.
4. In filters, select Solution Subtypes.
5. Lastly, select Brand Safety to see a list of available partners.
You can visit their profiles to access their contact information and learn more about their services,
including supported languages and countries.
04 Resources
About Brand Safety on Facebook, Instagram, WhatsApp and Audience Network
Why We Recommend Automatic Placements
About Your Business Settings in Business Manager
Monetization Policies
Partner Monetization Policies
Content Monetization Policies
Brand Safety controls
Block lists
About Topic Exclusions
About the Facebook Event Setup Tool for Web
About Delivery Reports
Review Delivery Reports
About Live Stream Exclusions
Apply Publisher Allow Lists
Publisher allow lists
Learn more about publisher delivery reports
Facebook Marketing Partners
Facebook for business
__________________________________________________________________________________________________
05 Best practices
We strongly enforce our Community Standards for the content individuals share on our platforms, and
we ensure that only publishers that comply with our policies can show ads. We also strive to ensure our
targeting tools are as effective as possible to help you reach only your intended audience. Our brand
safety tools, such as block lists and inventory filters, are therefore optional.
Our ad delivery system works best when it has as many options for people and placements as possible. Restricting those options limits delivery: it could reduce the number of people we can show your ads to, make your campaigns more expensive and make it harder to spend your whole campaign budget (a toy illustration of this trade-off follows the checklist below). Therefore, remember the following:
1. In-stream video or Instant Articles ads can appear within videos or articles publishers share, so determine whether those ads are an issue.
2. Audience Network enables you to extend Facebook and Instagram campaigns to thousands of high-quality apps, so consider if you want to use that placement.
3. Consider who your customers are and what content they might consider inappropriate. For
example, a brand that sells products for children might not want their ads to appear within
more adult-focused content. Also consider different policies, laws or sensitivities in the
countries you advertise in. Learn how to use inventory filters to exclude undesirable content.
4. Identify any Audience Network apps or Facebook Pages where you never want ads to appear.
For example, a news app might not want to give advertising revenue to rival news apps.
5. Consider if there are specific publishers where you want your ads to appear. A publisher allow
list is a list of Audience Network publishers you choose for your ads to appear on. Content allow
lists give advertisers the ability to work with trusted Facebook Marketing Partners to review
and customize lists of brand-suitable videos for running in-stream campaigns on Facebook.
6. Consider potential catastrophes. If a tragic event occurs, do you want to prevent your ad from
appearing next to content about it? What types of events would make you want to pause an ad
campaign?
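As a rough, hypothetical illustration of the delivery trade-off described before this checklist: if each control you apply excludes some fraction of eligible inventory, and we assume those fractions are independent, the remaining supply shrinks multiplicatively. The numbers below are invented for exposition.

```python
def remaining_inventory(fractions_excluded: list[float]) -> float:
    """Multiply out what each (assumed independent) restriction leaves behind."""
    remaining = 1.0
    for fraction in fractions_excluded:
        remaining *= (1.0 - fraction)
    return remaining

# E.g. a block list cutting 10% of inventory plus an inventory filter cutting 30%:
print(f"{remaining_inventory([0.10, 0.30]):.0%} of inventory remains")  # 63%
```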
05 Resources
Best Practices For Brand Safety
Best Practices For Using Manual Placements in Ads Manager
Branded Content That's Not Allowed In A Published Post