Is Fake News a Free Market Problem?

By Cristina Fries

Since the 2016 presidential election, the term “fake news” has resurfaced in the vocabulary describing our current media landscape. False stories about Donald Trump and Hillary Clinton spread across social media leading up to the election, with over 30 million shares of stories favoring Trump and 8 million shares of stories favoring Clinton on Facebook. These false stories have since been found to have played a role in Russian efforts to influence the outcome of the election in favor of Donald Trump, efforts that are now the subject of federal probes.

While the term fake news has reemerged in light of its massive spread online, false stories in the media are not new. Since the dawn of print media, misleading headlines and erroneous reporting have to some extent undermined journalism’s role in disseminating factual, objective reporting, which is essential in keeping the powerful in check.  

But where is the line drawn between what is considered fake news and erroneous reporting? And what would one call partisan reporting, small presses, blogs, or other information platforms that emerge as a result of the free market and the internet? In many cases of biased, flawed, or subjective storytelling, fact and opinion are at odds.

However, the term fake news, as it is used today, can be defined as fictional news deliberately designed to deceive readers. It is not satire, and it is not erroneous reporting, which is done without the intent to deceive. Fake news is often financially motivated, as inflammatory headlines and false information are cheap to produce and easily attract readers.

As the media landscape has tilted heavily toward online news consumption since the early 2000s, the environment is ripe for competition between legitimate news and false stories. During the months preceding the 2016 presidential election, the average consumer’s social media feed surfaced at least one fake news headline, and often several. The spread of false stories was boosted by algorithms that helped some reach as many readers as CNN, Fox News or The New York Times.

In a free market economy, private parties such as Facebook, Google, or other websites can compete unrestrictedly without control by coercive central authorities. Within the free market, words and ideas, like home goods or school supplies, can be a type of product.

When ideas are fabricated and published deliberately to deceive, however, and the free market creates a landscape that aids their proliferation, the results can be dangerous in insidious ways. Fictitious ideas can leave people unaware of the true state of the world. When fed deceptive content, people can base important decisions, such as their preference for a presidential candidate, on false information.

Fake news and the First Amendment

Stopping the spread of fake news has become a matter of global concern, but the methods used to stop it raise the question of how to do so without curbing free speech. Can what people write in false stories be defended under the First Amendment?

“A common misconception about the First Amendment right to free speech is that you may say anything at any time and it is protected from infringement by anyone,” said Megan Zavieh, an ethics lawyer who represents lawyers in ethics investigations. “This is simply not the law of the First Amendment.”

A key question, when thinking about a case of free speech, is whether the words in question create a “clear and present danger.”

For example, you may not falsely shout “fire!” in a crowded theater in order to incite unnecessary panic, and then claim that you did so under the protection of the First Amendment, Zavieh explains. This concept stems from the 1919 case of Schenck v. United States, in which Justice Oliver Wendell Holmes, Jr., wrote that even the most stringent protection of free speech would not protect a man falsely shouting fire in a theater and causing a panic.

Holmes stated: “The question in every case is whether the words used are used in such circumstances and are of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent.”

However, the First Amendment only protects speech from infringement by the government; it does not prohibit private parties from restricting speech on their own property or platforms. “Any private news outlet or private party has constitutional freedom to create policies and guidelines to counteract false stories,” said Robert J. McWhirter, a criminal law specialist.

Private company policies

As companies whose users operate under their terms and conditions, Facebook and Google can restrict how people use their platform, including what they post on their websites or social profiles.  

For instance, Facebook’s terms and policies prohibit users from posting certain types of sensitive content, such as nudity, violence, graphic content or hate speech, and the company has the authority to remove such content or limit the audience who can view it.

Just as Facebook allows viewers to report content as inappropriate, it also allows viewers to flag false news stories. Independent, third-party fact-checkers review the flagged news stories and may mark them as “Disputed” if they find the stories to be false. Stories marked as “Disputed” contain a link to an article that explains why they were flagged as such.

Google has also taken a defensive stance against the spread of fake news since the 2016 presidential election, announcing in November 2016 that it would ban websites that publish fake news from using its online advertising service.

While Google found that only about 0.25 percent of its search queries returned false news stories, the company determined that this was enough to damage the reliability of its search platform. In April 2017, Google announced a new screening system, Project Owl, that aims to prevent untrue stories about people or events from surfacing in search results. Project Owl fights false news stories by revamping Google’s search algorithms and adding a feedback feature that lets users report offensive or derogatory autocomplete suggestions in the search bar for a human reviewer to evaluate.

Ideally, the free market could fix the problem of fake news on its own. While private parties can regulate what people say, it could be dangerous, even unconstitutional, for state or federal legislation to create new restrictions on speech and the press.

Giving the government power to restrict fake stories would leave the definition of “fake” open to subjectivity and bias. Reporting is by nature subject to factual inaccuracies, and it would be impractical, even impossible, to demand that journalists report only incontrovertible facts. Historically, prohibition of “fake” news has been used as a tool to control the media and limit editorial freedom. Such restrictions can silence the media and those who contribute to public debate.

Free speech protections promote public discussion and restrain government regulation, and public discussion is a key element in upholding the tenets of democracy. It can be more dangerous for the state or federal government to punish speech than to leave private parties to sort out how to combat fake news on their own. Already, international media organizations like the BBC are setting up fact-checking initiatives and news literacy programs to help discredit false stories.

Perhaps the demand for stronger news literacy will in turn create demand for stronger reporting. In keeping with the First Amendment, private parties have an ever-growing responsibility to help truth prevail over deception through self-regulation, education, and awareness.

Cristina Fries

Cristina Fries is a contributing author at Bigger Law Firm Magazine. She has overseen marketing strategies for mid-size and large law firms.

