



Fake News


  • Bente Kalsnes, Department of Communication, Kristiania University College

Summary

Fake news is not new, but the American presidential election in 2016 placed the phenomenon squarely onto the international agenda. Manipulation, disinformation, falseness, rumors, conspiracy theories—actions and behaviors that are frequently associated with the term—have existed as long as humans have communicated. Nevertheless, new communication technologies have allowed for new ways to produce, distribute, and consume fake news, which makes it harder to know which information to trust. Fake news has typically been studied along four lines: characterization, creation, circulation, and countering. How to characterize fake news has been a major concern in the research literature, as the definition of the term is disputed. By differentiating between intention and facticity, researchers have attempted to study different types of false information. Creation concerns the production of fake news, often driven by financial, political, or social motivations. The circulation of fake news refers to the different ways false information is disseminated and amplified, often through communication technologies such as social media and search engines. Lastly, countering fake news addresses the multitude of approaches to detect and combat fake news on different levels, from legal, financial, and technical measures to individuals’ media and information literacy and new fact-checking services.

Subjects

  • Journalism Studies
  • Media and Communication Policy

The Concerns Over Fake News

Fake news was named word of the year in 2017 by the Collins Dictionary, after usage of the term had increased by 365% since 2016 (Collins Dictionary, 2017). The American presidential election in 2016 put the phenomenon on the international agenda. Websites with fabricated content gained massive attention, such as the story that falsely claimed that the Pope had endorsed the Republican candidate Donald Trump (Ritchie, 2016). Shortly after, President Donald Trump politicized the term and used it to discredit established media outlets. But even though the term seems fairly new, the phenomena it covers are old. Manipulation, disinformation, falseness, rumors, conspiracy theories—actions and behaviors which are frequently associated with the term—have existed as long as humans have communicated. What is new in this context is how false or misleading information is produced, distributed, and consumed through digital communication technology. Additionally, new communication technologies have made it easier to manipulate the news format, thus simultaneously exploiting and undermining news media’s credibility.

The challenges that lies, propaganda, and fake news represent for open societies have been recognized for several decades. In a column published in The Atlantic in 1919, Walter Lippmann set out a comprehensive view of the problems that propaganda posed for modern Western society (Sproule, 1997, p. 12). Lippmann argued that the basic problem of democracy was to protect news—the source of public opinion—from the taint of propaganda, and that no modern society lacking the wherewithal to detect lies could call itself free (Lippmann, 1922). Without reliable information, it is hard for democracies to function. Fake news and disinformation are symptoms of a larger societal problem: the manipulation of public opinion to affect the real world (Gu, Kropotov, & Yarochkin, 2017). But even though disinformation is a historic phenomenon, each new communication technology allows for new ways to manipulate and amplify disinformation aimed at people and societies. Novel digital communication technologies therefore require new responses, different from those developed for earlier communication technologies.

False information dressed as news has created serious concerns in many countries. Researchers have called it information pollution (Wardle & Derakhshan, 2017), media manipulation (Marwick & Lewis, 2017), or information warfare (Khaldarova & Pantti, 2016). A common concern is that false information is polluting the public sphere and damaging democracy. As argued by Marwick and Lewis, “media manipulation may contribute to decreased trust of mainstream media, increased misinformation, and further radicalization” (2017, p. 1). Even when disinformation has been revealed and debunked, it may continue to shape people’s attitudes (Thorson, 2016).

Additionally, politicians and other powerful actors have appropriated the term to characterize media coverage they do not like. Most famously, the American President Donald Trump has repeatedly labeled media outlets such as CNN and The New York Times as fake news. As reported by The New York Times, in countries where press freedom is restricted or under considerable threat—such as Russia, China, Turkey, Libya, Poland, Hungary, Thailand, Somalia, and others—political leaders have invoked fake news as justification for beating back media scrutiny (Erlanger, 2017). By suggesting that news cannot be trusted and by labeling it fake news, politicians deliberately undermine trust in journalism and news outlets, core institutions in democratic nations based on free speech and a free press.

The concerns over fake news are plentiful, and of necessity this article is limited to describing research on fake news from a media perspective. Different types of media manipulation exist and are studied within psychology, political communication, warfare, marketing, and information technology, to mention a few areas. For practical reasons, it has not been possible to cover them all here. Along the same lines, fake news touches on several specific issues within media and journalism studies, such as objectivity, bias, and journalistic authority, but this article will not engage extensively in these important and widespread debates.

Previous research has identified different frameworks for studying fake news, some of which will be addressed in the following sections. These include categories such as types, elements, and phases (Wardle & Derakhshan, 2017); process, product, and public (Westlund, 2017); and six types of fake news (Tandoc, Lim, & Ling, 2018). Building on these frameworks, this article will examine four core terms in relation to fake news: how fake news has been characterized, created, circulated, and countered.

Characterization: The Origin of the Term and Definition

The term fake news has roots going back to the 1890s (Merriam-Webster Dictionary, 2018). For more than a century, it has been used to indicate falsehood printed as news. The Merriam-Webster Dictionary cites newspapers such as The Cincinnati Commercial Tribune, The Kearney Daily Hub, and The Buffalo Commercial, which all used the term fake news in articles from 1890 and 1891 in connection with false information. But the phenomenon appeared even earlier, and historian Jacob Soll (2016) traces its origin back to Johannes Gutenberg’s invention of the printing press in 1439. He explains that as printing expanded, so did fake news, appearing as spectacular stories of sea monsters and witches or claims that sinners were responsible for natural disasters; all of this began to spread after Gutenberg. “Real” news was hard to verify in that era, even though there were many news sources, and the concept of journalistic ethics or objectivity had not yet been developed.

Fake stories have historically been produced to sell newspapers (e.g., the Great Moon Hoax about astronomers’ purported observations of bizarre life on the moon, published by The New York Sun in 1835), to entertain (e.g., The War of the Worlds, Orson Welles’ 1938 radio adaptation of H. G. Wells’ novel from 1898), or to create fear and anger (e.g., the so-called “blood libel” story from Trent, Italy, in 1475, which claimed that the Jewish community had murdered a two-and-a-half-year-old Christian boy) (Uberti, 2016).

These accounts indicate how the historic evolution of fake news is also related to the development of journalism as a profession, including methods of verification and codes of ethics. They also show that fake news is not new, either as a term or as a phenomenon. But the worldwide surge in the use of the term has created epistemological discussions of how digital disinformation dressed as news should be understood.

Even though the term has deep historical roots, newer definitions of fake news have been suggested in recent years to better reflect the challenges posed by new communication technologies. Recently, the term has been used to describe a wide range of disinformation, misinformation, and malinformation (Wardle & Derakhshan, 2017), ranging from lies, conspiracy theories, and propaganda to mistakes and entertainment. As stated by Marwick and Lewis, “fake news is a contested term, but generally refers to a wide range of disinformation and misinformation circulating online and in the media” (2017, p. 44). This section will outline three elements frequently found in definitions of fake news: the news format (false information masquerading as news), the degree of falsity (partly or completely false information), and the intention behind it (to mislead readers and users for political or economic purposes).

The existing definitions are preoccupied with the format, but the main concern with disinformation is the quality of the information, or rather the lack thereof. The news format is just one of many ways to get false or misleading information to spread online. Nevertheless, the news format has been a recurring theme in current definitions, such as this one: fake news is “false stories that appear to be news, spread on the Internet or using other media, usually created to influence political views or as a joke” (Cambridge Dictionary, 2018). Similarly, Allcott and Gentzkow have defined fake news as “news articles that are intentionally and verifiably false, and could mislead readers” (2017, p. 13). Tandoc, Lim, and Ling have examined the existing literature and identified six types of fake news, two of which, news satire and news parody, are particularly related to the news format. The four other types are news fabrication, photo manipulation, advertising and public relations, and propaganda (Tandoc, Lim, & Ling, 2018). News satire is described as mock news programs, which typically use humor or exaggeration to present audiences with news updates. Differing from a typical news broadcast, news satire promotes itself as delivering entertainment first and foremost, rather than information, and the hosts call themselves comedians, not journalists. The satire in The Daily Show with Jon Stewart and The Colbert Report (with Stephen Colbert), both American, is based on actual events, but the news format is fake.

Similar to news satire, news parody mimics news stories in humorous ways, but the main difference is that the news stories are entirely fictional, as in the American mock news site The Onion. News parody and satire can serve as watchdogs of both the political establishment and the news media. They are also similar in the sense that both the author and the reader or viewer are in on the joke.

Nevertheless, there are many problems with the term fake news, and one of them is its close connection to news as a format and as an independent institution. The European Union (EU) report from the independent High Level Expert Group on fake news and online disinformation suggests abandoning the term fake news altogether (HLEG, 2018). Because the term is inadequate and misleading for explaining the complexity of the situation, the report instead suggests using the term disinformation, which can be defined as “false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit” (2018, p. 10). Disinformation is clearly a more precise term for discussing false or misleading information, without alluding to the news format or the institution of news. But in order to more accurately reflect how this phenomenon has been covered in the research literature, this article will use both the terms fake news and disinformation.

Continuing with the intention behind fake news, political or economic motives are often mentioned as typical intentions. The Reuters Institute for the Study of Journalism defines fake news as “false information knowingly circulated with specific strategic intent—either political or commercial. Such content typically masquerades as legitimate news reports while trafficking in conspiracy theories or other matters laden with emotional appeals that confirm existing beliefs” (RISJ, 2017). Along the same line, Silverman describes fake news as “completely false information that was created for financial gain” (Silverman, 2016), such as sensational clickbait articles attempting to lure readers to click and share. Advertising is also mentioned in the research literature as a type of fake news. Advertising and public relations refers to advertising materials in the guise of genuine news reports that are published as news. Fake news in this form occurs when public relations practitioners adopt the practices or appearance of journalists in order to insert marketing or other persuasive messages into news media (Nelson & Park, 2015), for example as native advertising. Native advertising looks like news articles but is paid for by a sponsor. Obscuring its origins may mislead audiences into believing that the content is entirely free of bias. The clear emphasis on financial gain distinguishes public relations and advertising-related fake news from other types of fake news. This content is often based on facts, albeit an incomplete set of them, often concentrating on the positive aspects of the product or company being advertised, and it takes advantage of the legitimacy of the news format.

When news stories are created by a political entity to influence or mislead public perception, this is often described as propaganda. The overt purpose is to benefit a public figure, organization, or government. There can be an overlap between propaganda and advertising. Similar to advertising, propaganda is often based on facts but includes bias to promote a particular side or perspective. The goal is to persuade rather than to inform, and, differing from advertising, the emphasis is not on financial gain but on political influence. An example is the Russian Channel One, which has been found to have published factually untrue news stories to influence public perception of Russia’s actions (Khaldarova & Pantti, 2016).

Lastly, the degree of falsity or fakeness is included in the definition by Benkler, Faris, Roberts, and Zuckerman: “Rather than ‘fake news’ in the sense of wholly fabricated falsities, many of the most-shared stories can more accurately be understood as disinformation: the purposeful construction of true or partly true bits of information into a message that is, at its core, misleading” (2017, p. 2). News fabrication refers to articles that have no factual basis but are published in the style of news articles to create legitimacy. Such content closely mimics legacy news, and the producer often intends to deceive, either for political or financial reasons. Once readers accept the legitimacy of the source, they are more likely to trust the item and less likely to seek verification. This category is similar to news parody, except that the reader does not have the implicit agreement with the author that the “news” item is false. Similarly for visual content, photo manipulation of real images or videos creates a false narrative. Typically, fake news appearing as photo manipulation features pictures from one context used in another, such as the manipulated photos that circulated on Twitter during Hurricane Sandy, which hit the United States in October 2012. While the three previous categories refer to text-based items, this category describes visual news.

Building on these existing definitions, this article proposes to define fake news as “completely or partly false information, (often) appearing as news, and typically expressed as textual, visual, or graphical content with an intention to mislead or confuse users.” Facticity and the intention to deceive are often used in the research literature to differentiate between types of disinformation, presented here as a typology of fake news summarizing research on fake news from 2003 to 2017 (Tandoc, Lim, & Ling, 2018; see also Hedman [2018] for a similar typology of fake news). (See Table 1.)

Table 1. Typology of Fake News

| Level of facticity | Author's immediate intention to deceive: High | Author's immediate intention to deceive: Low |
| High | Native advertising, Propaganda, Manipulation | News satire |
| Low | Fabrication | News parody |

Source: Tandoc, Lim, and Ling (2018).

This attempt to classify different information types is useful, but not sufficient to properly address the concerns over fake news. For the rest of the article, the types of fake news related to misinformation—exemplified in the previous discussion as news satire and news parody, where the inaccuracy is unintentional (Jack, 2017) or not created with the intention of causing harm (Wardle & Derakhshan, 2017; HLEG, 2018)—will not be discussed further.

Likewise, journalistic mistakes will not be included in the ensuing description of fake news. Journalists do make mistakes, and news might occasionally be biased, exaggerated, inaccurate, and sensationalist. But journalistic errors are not fake news; they are mistakes, made without intention to mislead or harm. Furthermore, journalists make choices about which stories to write, which sources to interview, and which angles to use—choices that might come across as controversial. As Schudson wrote, “we didn’t say journalists fake the news, we said journalists make the news” (1989, p. 263, my italics). News is a constructed reality possessing its own internal validity (Tuchman, 1976, p. 97), but at the same time, journalism is regulated by ethical codes of conduct, requiring journalists to seek the truth and report it.

Thus, in the following discussion, this article will mainly focus on fake news understood as disinformation: “false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit” (HLEG, 2018). The following section will address the creators of fake news.

Creation: The Production of Fake News

Identifying the creators of disinformation and fake news, as well as their motivation, has been a major concern in recent media reports and academic articles about the phenomenon. It is often unclear who has produced the disinformation, but actors ranging from governments and organizations to companies and individuals have been identified as creators of fake news. Some push fake news to make money, others do it to spread their world views, while trolls do it for the fun of it. Identifying the motivation or intention behind the production of fake news is thus considered crucial to combating it. Typically, three main motivations have been identified: political, financial, and social (Wardle & Derakhshan, 2017, p. 26; Marwick & Lewis, 2017, p. 27). In the following sections, creators of fake news and their different motivations will be characterized.

Political

A political or ideological motivation has been identified in heavily circulated fake news stories in recent years (Woolley & Guilbeault, 2017). Political disinformation is often called propaganda, and political actors who produce disinformation masquerading as news intend to influence public perception, whether of specific issues, individuals, or the world. The Latin origin of propaganda is “to propagate” or “to sow,” and the historic meaning of the word was fairly neutral—to disseminate or promote particular ideas (Jowett & O’Donnell, 2012). But the term lost its neutral meaning in 1622, when the Roman Catholic Church used it in its effort to propagate the Catholic faith and oppose Protestantism. Propaganda is defined as “the deliberate, systematic attempt to shape perceptions, manipulate cognitions, and direct behavior to achieve a response that furthers the desired intent of the propagandist” (Jowett & O’Donnell, 2012, p. 7), and propaganda can thus be understood as a deliberate attempt to alter or maintain a power balance advantageous to the propagandist. To identify a message as propaganda is to suggest something negative and dishonest, and synonyms for the word propaganda are therefore lies, distortion, deceit, manipulation, mind control, psychological warfare, and brainwashing (Jowett & O’Donnell, 2012).

Political propaganda can also be called strategic narratives, which can be seen as a tool for political actors to articulate a position on a specific issue and to shape perceptions and actions (Roselle, Miskimmon, & O’Loughlin, 2014). Information operations is a similar term used by Facebook to describe actions taken by organized actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome (Weedon, Nuland, & Stamos, 2017). These operations can use a combination of methods, according to Facebook, such as false news, disinformation, or networks of fake accounts aimed at manipulating public opinion.

Russia and China have been described as particularly active governmental actors producing and spreading political disinformation (Wardle & Derakhshan, 2017; Khaldarova & Pantti, 2016; Soldatov & Borogan, 2015). But they are not the only governmental actors employing fake news and disinformation. The use of fake news, automated bot accounts, and other manipulation methods gained particular attention in the United States in 2016, but manipulation and disinformation tactics played an important role in elections in at least 17 other countries over the past year, among them Venezuela, the Philippines, and Turkey, according to the Freedom on the Net report (2017). European countries are also carrying out digital operations to influence perception or persuade specific individuals. In the United Kingdom, cyber troops have been known to create and upload YouTube videos that “contain persuasive messages” under online aliases, and this content creation amounts to more than just a comment on a blog or social media feed; it includes the creation of content such as blog posts, YouTube videos, fake news stories, pictures, or memes that help promote the government’s political agenda (Benedictus, 2016; Bradshaw & Howard, 2017). In an American study of fake news’ agenda-setting power, Vargo, Guo, and Amazeen (2017) found that partisan media are intricately entwined with fake news. During the three years studied (2014–2016), partisan media seemed particularly attentive to fake news coverage on topics such as border issues, international relations, and religion (Vargo, Guo, & Amazeen, 2017, p. 16). Political disinformation is of huge concern due to the challenges it poses for societies. New, sophisticated technologies for producing and distributing political disinformation make the manipulations harder to detect and combat, not only for journalists, fact checkers, and citizens, but also for civil society and established democratic institutions.

The Freedom on the Net report found that the number of governments attempting to control online discussions in this manner has risen each year since Freedom House began systematically tracking the phenomenon in 2009. Over the last few years, the practice has become significantly more widespread and technically sophisticated, with “bots, propaganda producers, and fake news outlets exploiting social media and search algorithms to ensure high visibility and seamless integration with trusted content” (Kelly, 2017, p. 4).

The European Union considers the Russian disinformation efforts so aggressive that it created a website in 2015, EU vs. Disinformation, “to better forecast, address and respond to pro-Kremlin disinformation” (EU vs. Disinfo, 2018). The EU has expressed clear concerns about increasing disinformation and propaganda activities from Russia, whose purpose is to maintain or increase Russia’s influence and to weaken and split the EU. Similarly, the Ukrainian crowdsourcing site StopFake was launched in 2014 to fight disinformation emanating from Russian media and other actors on the Internet (Khaldarova & Pantti, 2016). Often, intermediaries operate on behalf of governmental actors, as in the case of the Russian television channel Channel One, which is known as a proxy for the Russian strategic narrative (Khaldarova & Pantti, 2016, p. 293). Political disinformation efforts like these are typically used to sow mistrust and confusion about which sources of information are authentic, leaving people unsure of what and whom to believe. In the long run, this can diminish trust in central institutions, such as the news media.

Financial

Fake news produced for financial gain might seem like a relatively new dimension of disinformation (Tandoc, Lim, & Ling, 2018), but it has historic similarities with what is called yellow journalism. Yellow journalism is often associated with misconduct in newsgathering (Campbell, 2001), and the term is used to describe the circulation war between Joseph Pulitzer’s New York World and William Randolph Hearst’s New York Journal. Both papers were accused by critics of sensationalizing the news to create war hysteria and to stimulate morbid curiosity about murders, seductions, drunkenness, and immorality in order to drive up circulation (Campbell, 2001, p. 40). False, sensationalist content attracts attention and curiosity in the 21st century as well, but this time it is measured through web traffic and social shares.

One of the most infamous recent examples of fake news produced for financial gain involves the teenagers from the town of Veles, Macedonia, who churned out sensationalist stories about the American presidential candidates in 2016 to earn cash from advertising (Kirby, 2016; Subramanian, 2017). They were seeking money rather than political influence, and figured out that publishing pro-Trump content generated more advertising revenue than pro-Clinton content (Marwick & Lewis, 2017, p. 31). This observation was confirmed in a study of the most shared news stories during the same election campaign, which showed that false stories outperformed real news stories on Facebook; indeed, all but three of the 20 most shared false election stories were overtly pro-Donald Trump or anti-Hillary Clinton (Silverman, 2016). Creators of fake news motivated by financial opportunities have diverse backgrounds, ranging from the teenagers in Macedonia and start-ups in the Philippines to a 38-year-old man from Arizona and Russian troll armies, to mention a few (Caron, 2017; Hern, Duncan, & Bengtsson, 2017).

Factually inaccurate, often deceptive content produced by people seeking money is also encouraged by the algorithms on social media platforms. A post with many likes, shares, or comments is more likely to be further liked, shared, and commented on; popularity on social media is a self-fulfilling cycle, one that lends itself well to the propagation of unverified information (Tandoc, Lim, & Ling, 2018; Thorson, 2016). Facebook is in the business of letting people share things they are interested in, and its business model relies on people clicking, sharing, and engaging with content—regardless of veracity (Solon, 2016). The sketch below illustrates this dynamic.
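To make the self-fulfilling cycle concrete, the following minimal sketch is a toy simulation of engagement-weighted exposure (a generic preferential-attachment model, not any platform’s actual ranking system; the post names and the one-unit engagement rule are illustrative assumptions): posts that already have engagement are shown more often and therefore accumulate even more engagement, regardless of their accuracy.

```python
# Toy model of a popularity feedback loop: exposure is proportional to
# existing engagement, so early engagement compounds over time.
import random

random.seed(42)

# Start every (hypothetical) post with one unit of engagement.
posts = {f"post_{i}": 1 for i in range(5)}

def pick_post_to_show(engagement):
    """Choose a post with probability proportional to its current engagement."""
    total = sum(engagement.values())
    r = random.uniform(0, total)
    cumulative = 0
    for post, score in engagement.items():
        cumulative += score
        if r <= cumulative:
            return post
    return post  # fallback for floating-point edge cases

# Simulate 1,000 impressions; each time a post is shown it gains one more
# unit of engagement, so already-popular posts are shown even more often.
for _ in range(1000):
    shown = pick_post_to_show(posts)
    posts[shown] += 1

print(sorted(posts.items(), key=lambda kv: -kv[1]))
# Typically one or two posts end up with a large share of total engagement,
# regardless of their accuracy.
```

Running this repeatedly, a small number of posts usually capture most of the total engagement, which is the rich-get-richer dynamic described above; veracity plays no role in the toy model, just as it plays no role in engagement metrics themselves.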

Social

Social needs might also be motivations for producing fake news and disinformation, such as status, attention, identity building, or entertainment. Actors may create and share disinformation to gain acceptance within online communities or to earn fame (Wardle & Derakhshan, 2017, p. 36). Social media users are incentivized through likes, shares, and comments to “create content that will resonate with their friends, followers and groups,” and media manipulation might be a way to gain status and express identity (Marwick & Lewis, 2017, p. 31). Marwick and Lewis have examined how disinformation from American far-right communities, such as the so-called alt-right, can offer insights into these communities’ shared identity: “Taken as a whole, these communities may feel that by manipulating media outlets, they gain some status and a measure of control over an entrenched and powerful institution, which many of them distrust and dislike” (Marwick & Lewis, 2017, p. 31). The expression “I did it for the lulz” indicates that trolling internet users might post racist or sexist content but claim to do so merely to generate lulz by offending others.

Applying an approach in which news reading is seen as a ritualistic and dramatic act can make it easier to understand why certain types of disinformation are consumed and shared. Wardle and Derakhshan argue that we should understand communication as a ritual along the lines of James Carey (1989), rather than through the more traditional understanding of communication as the transmission of information. According to Carey, communication is not the act of imparting information but rather the representation of shared beliefs—communication draws people together in fellowship and commonality. Thus, news reading and writing are ritualistic and dramatic acts in which a particular view of the world is portrayed and confirmed. By producing and sharing fake news (with a particular slant), users are connecting with other users. Digital and social media offer effective tools to amplify fake news to large networks worldwide, which brings us to the next section.

Circulation and Distribution of Fake News

Social media have proved to be effective distribution channels for false information (Marwick & Lewis, 2017). Studies have shown that fake news stories were shared more on social media than articles from edited news media (Silverman, 2016). The power of fake news and disinformation lies in how well it can penetrate social spheres. Two aspects are crucial for comprehending the circulation of false information: technology and trust. In the following sections, we will look into how technology and trust (or rather distrust) affect how disinformation spreads.

Technology

Social and digital communication technologies such as social networks, blogs, and wikis are powerful tools for users to publish, distribute, and consume information—decentralized compared to previous mass media technologies. It has thus become easier for false or misleading information to enter the public sphere in many countries through digital and social media. The democratization of online content production has greatly diminished the news media’s traditional grip on information (Nielsen, 2017), and reaching a global audience through digital media is now possible for almost anyone. While editors and publishers were the main gatekeepers of information in the age of mass media, tech platforms and algorithms are the new gatekeepers (Lewis, 2018). Facebook, in particular, has a unique role in amplifying information. Social media, and especially Facebook, have become an important entry point for news in many countries: more than half of online users (54%) across 36 countries say they use social media as a source of news each week (Newman, Fletcher, Kalogeropoulos, Levy, & Nielsen, 2017, p. 10). Furthermore, in a study that examined exposure to misinformation during the American election campaign in 2016, the researchers found that Facebook was a key vector of exposure to fake news (Guess, Nyhan, & Reifler, 2018).

Second, technological developments have democratized the production of fake news. When virtually anyone can publish (false) information that looks like news and spread it to large groups of people online, it becomes harder to differentiate between false and trustworthy information. Fake accounts and pages on social media blur the conceptualization of information sources (Tandoc, Lim, & Ling, 2018, p. 3). Digital amplifiers such as bots (i.e., automated Twitter accounts) have been used to create the illusion that fake or partly fake stories are widely circulated. A study that analyzed 14 million Twitter messages spreading 400,000 claims found evidence that social bots played a disproportionate role in spreading and repeating misinformation during the American election (Shao, Ciampaglia, Varol, Flammini, & Menczer, 2017). Bots target users with many followers through replies and mentions, and may disguise their geographic locations. Studies have also shown how Facebook’s trending algorithm periodically promoted fake news after the company fired its human editors (Dewey, 2016).

Facebook initially refused to accept that the platform had a role in spreading fake news during the American election, and founder Mark Zuckerberg said that it was a “pretty crazy idea” that fake news on Facebook had influenced the election in any way (Zuckerberg, 2016a). For years, Zuckerberg has insisted that Facebook is a technology company, not a media company with all the editorial responsibilities that entails. But eventually, in December 2016, he admitted that Facebook had a greater responsibility to the public than just being a tech company. Zuckerberg (2016b) wrote on his Facebook page: “While we don’t write the news stories you read and share, we also recognize we’re more than just a distributor of news. We’re a new kind of platform for public discourse—and that means we have a new kind of responsibility to enable people to have the most meaningful conversations, and to build a space where people can be informed.” Facebook has also realized that the platform can be misused by foreign powers through “information operations,” as Facebook labeled it in a published report (Weedon, Nuland, & Stamos, 2017). While traditional security on Facebook has focused on abusive behavior such as account hacking, malware, spam, and financial scams, Facebook has increasingly expanded its focus to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people.

Similarly, YouTube’s recommendation algorithm has been accused of promoting conspiracy theories and fueling disinformation during the 2016 American election (Lewis, 2018). YouTube is described as one of the largest and most sophisticated industrial recommendation systems in existence, but when studying YouTube’s recommended videos, Guillaume Chaslot, a former Google engineer, found that “YouTube systematically amplifies videos that are divisive, sensational and conspiratorial” (Lewis, 2018). YouTube’s recommendation system is optimized to keep people watching videos for as long as possible. As in studies of Facebook, YouTube’s algorithm was found to be pushing videos that were helpful to Donald Trump and damaging to Hillary Clinton. According to Chaslot’s study, YouTube’s algorithm does not appear to be optimizing for what is truthful, balanced, or healthy for democracy (Lewis, 2018).

The biggest online platforms have made several attempts to combat the spread of false information on their services. Facebook has attempted to reduce the financial incentives to create fake news websites, to flag fake news circulating in the newsfeed by cooperating with professional fact checkers, and to mark trusted news sources (Solon, 2016; Mosseri, 2018). On Twitter, more than 13,000 Russian-based bot accounts were identified in 2017, and more than 670,000 users in the United States interacted with one of these accounts during the election season (Martineau, 2018). Twitter has since emailed nearly 678,000 users who may have inadvertently interacted with now-suspended accounts believed to have been linked to a Russian propaganda outfit called the Internet Research Agency (Vanian, 2018). Google argues that it has implemented structural changes in its search algorithms to surface more high-quality content from the web while preventing the spread of offensive or clearly misleading content (Gomes, 2017). The company is also blocking websites from showing up in Google News search results when they mask their country of origin or misrepresent their purpose.

Digital and social media have lowered the threshold for creating and circulating information, including disinformation and fake news. Additionally, low or declining trust in news media has been mentioned as another reason for the spread of fake news. These developments, together with pressures on journalism’s business model, have made it increasingly important for journalists and newsrooms to be transparent and build trust among readers.

Trust

Trust in information and news media is of paramount importance, and fake news is problematic for several reasons. Foremost, it makes people confused about which information to trust. Two surveys, one from the United States (Barthel, Mitchell, & Holcomb, 2016) and another from Sweden (Ahlin & Benzler, 2017), showed that 88% and 76% of the respondents, respectively, replied that fake news made them very or somewhat confused about basic facts. If people are unable to differentiate between what is verified and what is false, and unsure whether they can trust the news, they become confused about the state of affairs, particularly during an election, when voters need reliable information to make an important political decision. Low trust in information and news media can also make it more likely that people will spread fake news and disinformation. As argued by some researchers, the declining trust in mainstream media could be both a cause and a consequence of fake news gaining more traction (Allcott & Gentzkow, 2017, p. 215).

According to the Reuters Digital News Report, trust in news varies strongly among countries, with US news consumers among the least trusting. While 47% and 57% of respondents in Norway and Denmark, respectively, and 41% of respondents in Sweden agree that you can trust the news most of the time, only 34% of American respondents agree (Newman et al., 2017). If people have low trust in news from mainstream media, as is the case in the United States, it might be more appealing for them to seek out information from alternative sources. The partisan difference is particularly strong among American news consumers, and 2016 represented a new low in media trust among Republicans: only 14% had a great deal or fair amount of trust in mass media, compared with 51% of Democrats (Swift, 2016). Even for Google, the partisan divide creates problems. Google’s search algorithms apparently have problems ranking truthful information when two groups strongly oppose each other; it is easier for the algorithms to handle false or unreliable information when there is greater consensus, and more challenging to separate truth from misinformation when views are diametrically opposed (Tung, 2017).

Fake news masquerading as legitimate news is not only a problem for the people consuming it; it might also undermine journalism’s legitimacy and trustworthiness. News is expected to provide “independent, reliable, accurate, and comprehensive information” (Kovach & Rosenstiel, 2007, p. 11), and news is normatively based on the truth, which makes the term fake news an oxymoron. Nevertheless, the term fake news has also been appropriated by politicians around the world to describe news organizations whose coverage they find disagreeable, thus using fake news as a weapon against newsrooms.

As of February 2018, President Donald Trump had tweeted about fake news 181 times in the past 388 days, making it one of his most tweeted terms, according to the Trump Twitter Archive. President Trump routinely invokes the phrase “fake news” as a rhetorical tool to undermine opponents, rally his political base, and discredit mainstream American media outlets that are aggressively investigating his presidency (Erlanger, 2017). Around the world, authoritarians, populists, and other political leaders have seized on the phrase fake news as a tool for attacking their critics and, in some cases, deliberately undermining the institutions of democracy, often inspired by Trump. As noted earlier, in countries where press freedom is restricted or under considerable threat—including Russia, China, Turkey, Libya, Poland, Hungary, Thailand, Somalia, and others—political leaders have invoked fake news as justification for beating back media scrutiny (Erlanger, 2017).

The concerns over fake news are growing, not least because of the weaponized use of the term by politicians to undermine independent media and trust in journalism. The last section will focus on some of the attempts to counter fake news, in light of Lippmann’s insight: “no modern society lacking the wherewithal to detect lies could call itself free” (Lippmann, 1922).

Countering Fake News

Since the spread of fake news and disinformation and their problematic consequences were identified, many attempts to counter fake news have been made in different countries. We can differentiate between efforts directed toward legal, financial, and technical aspects and efforts directed toward individuals’ media and information literacy and new fact-checking services. In January 2018, the European Commission appointed 38 experts to a new High Level Expert Group on fake news and online disinformation to advise the Commission on how to understand and tackle the phenomenon (HLEG, 2018). The group clearly advises against simplistic solutions, such as censorship of free speech. As disinformation is a multifaceted and evolving problem, the report suggests focusing the response on five pillars: (1) enhance transparency of the digital information ecosystem; (2) promote media and information literacy to counter disinformation and help users navigate the digital media environment; (3) develop tools for empowering users and journalists to tackle disinformation and foster positive engagement with fast-evolving information technologies; (4) safeguard the diversity and sustainability of the European news media ecosystem; and (5) promote continued research on the impact of disinformation in Europe to evaluate the measures taken by different actors and constantly adjust the necessary responses (HLEG, 2018, p. 35).

Nevertheless, several countries have tried to ban fake news by introducing new legal measures. Germany introduced a new law in January 2018 to combat hate speech. The new law, the Netzwerkdurchsetzungsgesetz, or NetzDG, demands that social media sites move quickly to remove hate speech, fake news, and illegal material (BBC, 2018). Social networks and media sites with more than 2 million members fall under the law’s provisions, and the law gives networks such as Facebook, Twitter, and YouTube 24 hours to act after they have been told about law-breaking material. Sites that do not remove “obviously illegal” posts could face fines of up to 50 million euros ($65 million). A similar legal measure has been introduced in France, where President Emmanuel Macron has proposed a law to ban fake news on the Internet during election campaigns. Macron wants France’s media watchdog, the CSA, to have the power to fight destabilization attempts by TV stations controlled or influenced by foreign states (Nielsen, 2018). In Ireland, a new bill has been introduced that is intended to make political advertising on social media more transparent. In Italy, the government has launched an online service aimed at cracking down on fake news (Giuffrida, 2018).

The legal measures put forward in Europe are not unproblematic. The German law has been controversial, as it has already created problems by confusing satire with hate speech. The concern regarding the approaches in Germany, Italy, France, and Ireland is that legal solutions to combat fake news and disinformation could lead to inadvertent censorship or curtail free speech.

The aforementioned legal efforts in Europe differ from the approach in the United States, where corporate efforts are mainly suggested as the way to tackle the problems of fake news and disinformation online (Schiffrin, 2017). Other approaches are directed toward limiting the financial motivation for creators of fake news through advertising. Facebook and Google have attempted to reduce the financial incentives to create fake news websites by restricting ads on fake news sites and prohibiting such sites from using their ad networks, Audience Network and AdSense, respectively (Love & Cooke, 2016). The activist group Sleeping Giants has attempted to combat fake news sites by going after the sites’ advertisers. Because of programmatic ad buying, the automated digital purchasing of advertising, many companies do not know where their advertisements appear around the web. Sleeping Giants contacts companies and nonprofit groups whose ads appear on sites known for false or misleading content and encourages them to remove their ads from those sites.

Other approaches to combating and countering fake news have been directed at the users: readers, viewers, and listeners of information. Media literacy and education can help children and students navigate between trusted and less trusted sources online. Governmental programs have been developed in several countries, for example in Italy, to train students to recognize and counter fake news and conspiracy theories. During an expert meeting organized by the Nordic Council, one of the suggestions from the expert group was to increase media and information literacy (MIL) in schools in order to develop critical media users who can recognize disinformation (Bjerregård, Jensen, & Wadbring, 2017).

The spread of fake news has also triggered the establishment of new fact-checking services. The Norwegian fact-checking site Faktisk.no was initiated in 2017 by four competing media organizations (NRK, TV2, VG, and Dagbladet) to “disclose and prevent the spread of fictitious content that appears as real news” (Faktisk, 2017). In addition to the increased attention to fake news and disinformation, the national election in Norway in 2017 was another reason why Faktisk was established. Similarly, in Sweden, four competing media organizations (DN, SvD, SR, and SVT) decided in 2018 to create a fact-checking service, Faktakollen, to counter fake news, particularly ahead of the 2018 national election in Sweden (Kihlström, 2018). The two new Nordic fact-checking services add to a growing list of international fact-checking initiatives. As of February 2018, there were about 140 fact-checking services worldwide, according to the Reporters’ Lab at Duke University, the most comprehensive database of global fact-checking sites (Reporters’ Lab, 2018).

Even though many of these attempts to counter fake news and disinformation are promising, more needs to be done in this area. The spread of fake news and disinformation is an evolving, dynamic problem, and no single solution is enough to combat it once and for all, as will be further discussed in the concluding section.

Future Challenges From Fake News

This article has discussed fake news and disinformation in relation to the four Cs: characterization, creation, circulation, and countering. Disinformation is not a new phenomenon, but new communication technologies have made it easier than ever to produce and distribute falsehoods and lies dressed up as news to gain trustworthiness. The societal concerns that disinformation raises are numerous, as this article has outlined. Nevertheless, there are more unanswered questions than solutions for how to tackle this problem. Further research on the scale and scope of disinformation in different countries is necessary in order to better describe the magnitude and characteristics of the problem. For students of journalism, the debate about disinformation has been a valuable reminder of the roots of journalism: critical evaluation of information and sources, accountability, and ethical codes of conduct. Increased efforts to enhance transparency, both in platforms’ information ecology and in journalistic methods, can in the long run increase trust in how information is handled and amplified by platforms and newsrooms. New tools and methods—including media and information literacy—to identify and detect manipulated content, whether text, images, videos, or audio, are needed to counter manipulation attempts by different actors. Rather than enforcing laws to abolish fake news, which has become a politicized term, political actors and institutions should recognize that they have an important role to play in improving the quality of the information ecosystem, by financing research, supporting independent media, and sharing data with the public.

