
Crowell & Moring's Regulatory Forecast 2020

Advertising – Social Media Platforms Face Charges of Political Bias

Contributor: Christopher A. Cole

February 26, 2020


Key Points

  • Fair Play: Conservative concern about social media “censorship” hasn’t affected federal policy—yet.
  • Special Section: Politicians are targeting Section 230 of the Communications Decency Act, which protects platforms from liability for third-party content.
  • Trust Gap: Platforms that fail to weed out dubious content may scare away advertisers, publishers, or users.

Are Silicon Valley’s masters of social media biased against right-wingers? President Trump and his allies seem to think so. This past summer, the White House hosted a “Social Media Summit” at which conservatives and activists complained that big social media platforms had blocked their messaging.

Meanwhile, conservative Sen. Josh Hawley has introduced a bill that would effectively force large internet platforms to obtain federal certification that they are free of political bias. The bill had gained little traction as of late 2019, but rumors have circulated that Trump is considering an executive order that would vest authority in the Federal Trade Commission or the Federal Communications Commission to police for bias.

Concerns about bias have yet to produce concrete federal policy, but they add fuel to the argument that Big Tech wields too much power, which in turn has spurred multiple antitrust investigations (see the cover feature). All of this could influence the platforms’ policies on political speech and advertising as the 2020 national election approaches, and those policies may ultimately determine whether advertisers, publishers, and the public view these platforms as safe and trustworthy.

That’s Not “Censorship”

Complaints of bias by fringe political commentators are typically unfounded, says Christopher Cole, co-chair of the Advertising & Media Group at Crowell & Moring. “All of these platforms have terms of service or policies that clearly spell out what kind of speech is unacceptable, such as hate speech, racism, and threats,” Cole says. “Much of the most alarming rhetoric can be interpreted as violating these terms.”

These platforms aren’t government entities capable of “censorship” but are private companies that provide a service pursuant to a contract, Cole notes. They can’t let violent or obscene content run rampant or their users will flee. And it’s worth remembering, he says, that many of these commentators earn their living by pushing the envelope of acceptability in an attempt to gain attention. If they’re barred by a platform, it’s a signal that they’ve gotten enough exposure that someone has reported them.

Yet not every case of “de-platforming” is cut-and-dried. In a recent op-ed in The Washington Post, for example, a professor at the University of Utah argued that Facebook and Google were using “unclear and inconsistently applied advertising standards,” and that they rarely explained their decisions in public. And while banned commentators are free to move to another platform, the dominance of a handful of platforms raises the stakes when a publisher is banned from any of them.

Right-wing politicians have reacted to perceived bias by trying to chip away at Section 230 of the Communications Decency Act of 1996. The section specifies that providers of interactive computer services aren’t treated as the “publisher or speaker” of content provided by a third party, which shields them from liability for distributing controversial content. The Ending Support for Internet Censorship Act proposed by Sen. Hawley would require a service provider to receive certification from the FTC that it doesn’t moderate content in a way “that is biased against a political party, political candidate, or political viewpoint” before it could enjoy the benefits of Section 230.

The Hawley bill has made little headway so far. Federal authorities are loath to even try to determine what would qualify as a “political viewpoint” and how to objectively determine what counts as “bias” against such a viewpoint, Cole says. Yet right-wing concern about bias may make platforms more reluctant to police anything that might be construed as political speech, he adds.

Last fall, Facebook Chief Executive Mark Zuckerberg—citing free speech principles—declared that speech and advertising from politicians would be exempt from the company’s usual fact-checking process (though Facebook will require disclosure about sourcing). With the 2020 elections around the corner, Facebook and other platforms are already being flooded with political ads and speech, much of it dubious or demonstrably false, according to a report in The New York Times. For its part, Twitter has announced that it will no longer accept political advertising at all.

“It’s going to be a brutal election season,” Cole says. “Disinformation has become a political weapon like never before in our country. It’s almost accepted that politicians will lie, and people have lost the ability to discern truth from fiction.”

"Disinformation has become a political weapon like never before in our country. People have lost the ability to discern truth from fiction."

Christopher Cole


What a fire hose of disinformation will do for American democracy is worrying enough. What’s the consequence for the platforms themselves, and for the publishers and advertisers that rely on them?

Careful What You Host

Section 230 has given platforms considerable legal cover from the consequences of their users’ misdeeds. In one striking example, the California Supreme Court ruled in 2018 that the review site Yelp was not required to remove reviews that had been proven defamatory in court. (The U.S. Supreme Court refused to hear an appeal.) And last summer, a federal appeals court cited Section 230 in rejecting a suit by American victims of Hamas attacks in Israel who claimed Facebook helped the terror group and its allies pursue their goals.

Yet the section is not bulletproof. In 2018, the president signed the Allow States and Victims to Fight Online Sex Trafficking Act, or FOSTA, which specifies that Section 230 won’t bar prosecutions under federal sex trafficking laws or limit related civil lawsuits. Previously, online classifieds sites had hidden behind the section when promoting or facilitating prostitution.

Nor does Section 230 protect platforms against the reputational harm of hosting a deluge of ugly or untrustworthy content. Advertisers tend to shy away from controversy, and companies that rely on programmatic advertising have sometimes been surprised to find their online ads appearing next to shocking, extreme, or offensive content. Campaigns by the activist group Sleeping Giants have succeeded in convincing big advertisers to pull their advertising from controversial sites, and group members have said their real target is the big platforms that share in the profits from advertising on those sites.

In 2019, the chief marketing officer of consumer giant Procter & Gamble—one of the world’s largest advertisers—said the company would buy media only from channels where content quality is “known, controlled, and consistent with its values.” The executive, Marc Pritchard, added, “Every platform has a responsibility to control their content. None of us should have to worry that our brands end up anywhere near…horrible content.”

Spending on online advertising is still rising, but Pritchard’s remarks suggest that there are clouds on the horizon. Facing an opaque supply chain and systemic ad fraud, advertisers “are starting to wonder what they’re getting” for their advertising dollars, Cole says. Meanwhile, platforms are finding it difficult to balance their response to consumer privacy concerns with their targeting promises to advertisers.

More profoundly, failure to moderate content on big platforms could ultimately erode public trust in virtually all the media consumed on them, Cole warns. As a lawyer who focuses on false-advertising litigation, Cole is already seeing research suggesting that some of the public is losing trust even in the factual claims that are a common subject of advertising litigation. What will happen to the effectiveness of advertising when the public can’t trust anything it sees? For that matter, what about the effectiveness of journalism or any other content that publishers want readers to trust?

Social platforms are employing armies of moderators and increasingly sophisticated artificial intelligence algorithms to help ensure that users’ feeds are reasonably trustworthy and safe for viewing. But their own policies could undermine that costly effort, and political pressures could make things worse.

Christopher A. Cole
Partner – Washington, D.C.
Phone: +1 202.624.2701
Email: ccole@crowell.com
