
Why Companies Need Content Moderation

Brand perception heavily influences consumer decisions. Customers form opinions about brands when they read a news story or online review, hear about a friend’s experience, or visit the brand’s website. Simply put, brand perception matters.

So how can companies monitor and foster positive perceptions of their brand? It begins with content moderation. Moderators today face substantial challenges in making sure that user-generated content and ad placements meet the company’s standards. But human moderators can’t do the job alone. An effective content moderation strategy needs both human moderators and technology solutions.
 
Ads and other content populate the wild web
"Brand safety" has historically involved making sure a brand’s ads didn’t appear on pornographic or otherwise disreputable websites. However, the rise of programmatic media buying where ads are distributed across thousands of websites within milliseconds complicates the moderator’s task. It can be difficult even for advertisers to know where their ads may appear.

Programmatic ads are placed on sites based on factors such as audience demographics, not specific site buys. For example, a consumer may be tagged under “fashion” or “beauty” and will see these types of ads regardless of the site he or she is viewing. And the practice continues to grow. U.S. programmatic display ad spending is expected to reach $37.9 billion, or 82 percent of total spending on display ads in 2018, up from $25.2 billion in 2016, according to eMarketer.

These are some of the factors that have caused brands to find themselves suddenly mired in controversy. Kellogg’s, for example, was hit with a social media storm when its ads appeared on a website known for anti-Semitic, sexist, and racist articles. Kellogg’s announced that it was pulling all advertising from the website on the grounds that the site did not align with its values.
 
A significant part of the online criticism has been driven by a Twitter account called Sleeping Giants, which is encouraging people to tweet at brands whose ads appear on the contentious site. As of this writing, 818 brands have pulled ads from the site, according to Sleeping Giants.
 
Besides external pressure, advertisers expressed fears about inventory transparency even before the fake news and hate speech phenomenon gained attention. According to a survey from the Association of National Advertisers (ANA) and Forrester Consulting, 59 percent of U.S. marketers who responded said they felt “significantly challenged by the lack of inventory transparency” in 2016, up from 43 percent in 2014.
 
Tools such as whitelists, blacklists, and semantic technologies are designed to help advertisers filter out objectionable sites and placements. Whitelists allow advertisers to select the sites where they want their ads to appear, while blacklists do the opposite, indicating the sites on which they don’t want their ads to appear. Semantic technologies allow advertisers to prevent their ads from appearing on certain sites or next to certain types of content by filtering for language.
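
To make the mechanics concrete, here is a minimal sketch in Python of how a whitelist, a blacklist, and basic keyword screening might be combined into a single placement check. The domains, keywords, and function name are illustrative assumptions rather than any actual ad platform’s interface, and real semantic technologies go well beyond simple keyword matching.

    # Illustrative brand-safety check combining whitelist, blacklist,
    # and keyword screening. All domains and keywords are hypothetical.
    WHITELIST = {"trustednews.example", "lifestyleblog.example"}  # approved sites
    BLACKLIST = {"disreputable.example"}                          # blocked sites
    BLOCKED_KEYWORDS = {"extremist", "hate speech"}               # crude stand-in for semantic filtering

    def is_placement_allowed(site: str, page_text: str) -> bool:
        """Return True if an ad may be served on this site and page."""
        if site in BLACKLIST:
            return False
        if WHITELIST and site not in WHITELIST:
            return False  # with a whitelist defined, only listed sites are eligible
        text = page_text.lower()
        return not any(keyword in text for keyword in BLOCKED_KEYWORDS)

    print(is_placement_allowed("lifestyleblog.example", "Spring fashion roundup"))  # True
    print(is_placement_allowed("disreputable.example", "Spring fashion roundup"))   # False

In practice, a check like this would run inside the ad server or a verification partner’s pipeline, and human moderators would still review the borderline placements that automated rules miss.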
 
But research suggests that few marketers are actively using these safeguarding tools. Only 51 percent of U.S. marketers aggressively update blacklists, while 45 percent use whitelists, according to research from ANA and Forrester Consulting.
 
Even when these tools are used, they are not foolproof. Advertisers can’t just launch an automated solution and forget about it. They must regularly check their ad placements and the sites they appear on to make sure nothing slips through the cracks. Facebook’s algorithm, for instance, mistakenly blocked bsdetector.tech, the website for a popular browser extension that detects and tags fake news sources.

Companies need a comprehensive strategy that enables them to proactively weed out problematic content quickly and efficiently. Enter content moderation.