Fandango

PEST analysis

PEST (Political, Economic, Social and Technological) analysis is a technique that considers the variables of the external environment that may impact the future of a project.

The following PEST analysis was drafted considering these elements in the professional environment of the FANDANGO project, in order to understand how technologies applied to misinformation detection and validation will enable new, innovative scenarios in the Media sector.

Political factors

The FANDANGO project is meant to help media professionals disseminate correct information in a context of global political instability and a growing volume of misleading content, in a year in which Europe is playing a decisive game for global democracy. The European context, and especially the electoral period from January to June 2019, will be the perfect breeding ground for the generation of all kinds of misinformation that affects candidates on one side or the other, but above all threatens the neutrality of the media, the public’s confidence in the mass media and the quality of national and European democracy. Having a large number of misinformed people in a society is a significant handicap for the democratic process: democracy relies on people being informed about the issues so they can debate them and make decisions.

In December 2018, the European Commission published the Action Plan against Disinformation, which includes a set of actions aimed at building up capabilities and strengthening cooperation between Member States, EU institutions and the diverse stakeholders involved, in order to proactively address disinformation. The Action Plan states:

Disinformation is an evolving challenge, with high potential to negatively influence democratic processes and societal debates. Its increasingly adverse effects on society across the European Union call for a coordinated, joint and sustainable approach to comprehensively counter it.

Key stakeholders in this debate are the social media platforms (e.g. YouTube, Facebook, Twitter), which are under increasing pressure to monitor the content uploaded by their users and to try to limit the dissemination of extreme and/or fake content. Platforms have traditionally evaded these responsibilities, as summarised by Newman, writing for the Reuters Institute for the Study of Journalism in “Journalism, Media and Technology Trends and Predictions 2019”:

Donald Trump says Facebook, Google and Twitter are intentionally and illegitimately suppressing conservative viewpoints while liberals accuse them of promoting extreme viewpoints. Damned if they do and damned if they don’t, platforms are no longer seen as neutral, condemned to become political footballs through this year and beyond. […] Platforms that rely on user-generated content and algorithmic recommendations have long resisted the notion that they are publishers. But they are now demonstrably facing up to their responsibility for the content they carry.

But the calls for regulation are getting louder. Theresa May, for example, put it very clearly at the World Economic Forum in Davos:

As governments, it is also right that we look at the legal liability that social media companies have for the content shared on their sites. The status quo is increasingly unsustainable as it becomes clear these platforms are no longer just passive hosts. […] We are already working with our European and international partners, as well as the businesses themselves, to understand how we can make the existing frameworks and definitions work better - and to assess in particular, whether there is a case for developing a new definition for these platforms.

Hence, we can already see a broad range of initiatives against online misinformation by governments around the world, including the European Commission. In some cases, these actions take the form of new laws, which impose new responsibilities on media organisations and social media companies. Daniel Funke, from the International Fact-Checking Network, maintains an ever-growing list of such legislation, and warns about its widening scope: “these efforts raise questions about infringing free speech guarantees and are frequently victims of uncertainty. The muddying of the definition of fake news, the relative reach of which is still being studied, hinders governments’ ability to accomplish anything effective.”

List of government initiatives against online misinformation. Source: Funke.

Given the current international political context, the need for products like FANDANGO is clear, in order to face the wave of disinformation that Europe is suffering. The project should be able to leverage the growing interest of public authorities to raise awareness of FANDANGO and of other market solutions and approaches designed to detect and debunk hoaxes, while educating Europeans of all ages in critical thinking.

Economic factors

The economic factors affecting FANDANGO are analysed in further detail in the following sections, which describe current market conditions and trends. They will also be explored thoroughly in upcoming deliverables, as more detailed business requirements and exploitation plans are developed.

As a brief preview, it is important to note that one of the critical target audiences of FANDANGO - media professionals and institutions - is currently living through a (painful) transformation of its business models: a decline in advertising revenues - due to the collapse of paper-based ads, combined with the fact that Google and Facebook keep most of the profits from online advertising - is forcing media organisations to bet on digital subscriptions, patronage and audience engagement.

Newman’s recent survey of 200 editors, CEOs and digital leaders for the Reuters Institute for the Study of Journalism, including “40 Editors in Chief, 30 CEOs or Managing Directors and 30 Heads of Digital […] from some of the world’s leading traditional media companies as well as digital born organisations”, shows that subscriptions and memberships are becoming key priorities: “over half (52%) expect this to be the MAIN revenue focus in 2019, compared with just 27% for display advertising, 8% for native advertising and 7% for donations. This is a huge change of focus for the industry.”

“Q5. Which of the following digital revenue streams is MOST important for your company in 2019?” Source: Newman.

In parallel, a report by Sehl et al., “Pay Models in European News”, based on a sample of 171 of the most important news organisations in six European countries (Finland, France, Germany, Italy, Poland, and the United Kingdom), summarises the situation as follows:

66% of the newspapers operate a pay model. Freemium models, where some content is freely available, but premium material only available for paying users, are the most widely used, followed by metered paywalls that allow free access to a limited number of articles each month before requiring payment.

71% of weekly newspapers and news magazines operate a pay model. Again, freemium models are the most widely used, followed by metered paywalls.

We thus find that most newspapers and news magazines across Europe are moving away from digital news offered for free and supported primarily by display advertising, and are cultivating a wider range of sources of revenue, including various pay models in addition to native advertising, ecommerce, events etc.

The research suggests that some people across all age groups, including younger users, are willing to pay for quality content and services that they find valuable. Converting a reader into a subscriber, however, in an environment where news is often available for free through multiple channels, requires branding, a strong identity and trust, i.e. an investment in the production of high-quality, thorough, fact-checked, verifiable content - in other words, a move away from the cheap, click-bait, viral content that facilitates the dissemination of misinformation.

Also notable in Newman’s survey, “over three-quarters (78%) think it is important to invest more in Artificial Intelligence (AI) to help secure the future of journalism – but not as an alternative to employing more editors”. Both the push for high-quality content and the increasing importance of AI in newsrooms are encouraging trends for FANDANGO.

Social factors

Nowadays, technology and the Internet give us access to vast amounts of information, often more than we can assimilate carefully. Additionally, truth is no longer dictated by media outlets acting as gatekeepers, but is instead networked by peers, and for every fact there are one or more rebuttals. These social tendencies, which enable the creation of content “to the reader’s liking”, as well as the appearance of filter bubbles and echo chambers, will still be present in European societies by the time FANDANGO reaches the market and will actively shape the demands that people place on information.

New, unfolding trends in information consumption may also influence FANDANGO’s market entry. For example, Buffer - developers of a well-known app for social content sharing - predicts the growth of immediate content shared in informal small groups (“In-the-moment content will win out over highly-produced content. The stories in social networks and the ephemeral videos will be the most consumed content, overtaking the Media and the News Feed.”) and the growing influence of AI in the filtering and personalisation of the user experience. As Alexios Mantzarlis, former Director of the International Fact-Checking Network, warned during ICT Vienna 2018, as information consumption trends evolve, so does fake news, which will be a challenge for all initiatives fighting misinformation, including FANDANGO.

But there is one particular factor that must not be ignored when looking at the explosive growth of the issue of fake news and misinformation since the 2016 US elections: a widespread lack of trust in traditional media sources among readers, which can be seen clearly in a number of studies and opinion polls.

For example, a report by Newman and Fletcher for the Reuters Institute for the Study of Journalism, “Bias, Bullshit and Lies – Audience Perspectives on Low Trust in the Media”, tries to explain “the underlying reasons for low trust in the news media and social media across nine countries (United States (US), UK, Ireland, Spain, Germany, Denmark, Australia, France, and Greece)”. After analysing thousands of open-ended responses, the authors conclude that recent viral fake news is perceived by readers as a symptom of a low-quality information environment, together with hidden media agendas, clickbait material, misleading advertising, sponsored content, and partisan takes. In their own words:

Among those who do not trust the news media, the main reasons (67%) relate to bias, spin, and agendas. Simply put, a significant proportion of the public feels that powerful people are using the media to push their own political or economic interests, rather than represent ordinary readers or viewers. These feelings are most strongly held by those who are young and by those that earn the least.

In many countries, particularly the US and UK, some media outlets are seen as taking sides, encouraging an increasingly polarised set of opinions. Others are criticised for not calling out lies, keeping information back, or creating a false equivalence of partisan opinions that are obscuring facts and understanding.

These findings match a second report, by Kleis Nielsen and Graves, “News you don’t believe: Audience perspectives on fake news”, where the authors warn about “a wider discontent with the information landscape” and the fact that “people feel much of the information they come across, especially online, consists of poor journalism, political propaganda, or misleading forms of advertising and sponsored content.” The authors point to some structural changes as possible explanations for this phenomenon, such as the loss of the gatekeeping role by traditional media outlets:

The second structural change is the move from a twentieth-century environment dominated by broadcast and print mass media to an increasingly digital, mobile, and social media environment. Publishers are still critically important as producers of news in this landscape but play a less central role as distributors and gatekeepers, as audiences have greater choice and as a small number of large platform companies increasingly shape media consumption through services like search, social media, and messaging applications […]. In this environment, it is easier to publish any kind of information, including false and fabricated information.

The survey data shows that fewer than half of online news users in the US and the UK trust “most of the news most of the time”. In three of the four countries covered, nearly half of the population does not express trust even in the news they themselves consume. When asked about their trust in news consumed via social media, and whether online platforms help users “distinguish between fact and fiction”, the responses are similar and highly correlated, i.e. readers sceptical of one type of source rarely trust another, in what the authors describe as “generalized scepticism”, reusing a term from Newman and Fletcher.

Trust in most news versus trust in my news. Source: Nielsen.

Even worse, established media outlets and news agencies are being actively targeted by malicious actors in their attempt to spread disinformation, since their news reporting reaches a substantial number of readers, on paper or online. To leverage this reach, disinformation agents consciously attempt to deliver misinformation to them, with the goal of getting a hoax endorsed, shared or retweeted; even a denial can be useful in their eyes, in what is sometimes called “strategic amplification”. Claire Wardle, News Research Director of First Draft, talks about the “trumpet of amplification”:

Disinformation often starts on the anonymous web (platforms like 4chan and Discord), moves into closed or semi-closed groups (Twitter DM groups, Facebook or WhatsApp groups), onto conspiracy communities on Reddit Forums or YouTube channels, and then onto open social networks like Twitter, Facebook and Instagram. Unfortunately, at this point, it often moves into the professional media. This might be when a false piece of information or content is embedded in an article or quoted in a story without adequate verification. But it might also be when a newsroom decides to publish a debunk or expose the primary source of a conspiracy. Either way, the agents of disinformation have won. Amplification, in any form, was their goal in the first place.

Trumpet of Amplification, by Claire Wardle (First Draft).

But all is not lost: while many readers complain in surveys and focus groups about the overall low reliability of the media, they often single out a few trusted news outlets and see them as potential leading examples and correctives:

[People] often have a dim view of tabloid media and of partisan outlets that they disagree with politically. But many identify specific organizations they would turn to if they need credible information. One participant in one of our London groups said: “The Times [has] a paywall and the reason they do it is that it costs a lot of bloody money to verify these facts, so there is no fake news here. Then whatever out there in the wild is feral news, fake news.”

Combined with the push for differentiation and digital subscriptions by media outlets described in the previous section, we can see the current low trust as a potentially beneficial factor for FANDANGO: outlets need to invest in high-quality content in order to regain readers’ lost trust and confidence, and any tool that helps them avoid mistakes and the propagation of misinformation is a good investment.

Technological factors

Technology is increasingly seen as an enabler of more accurate journalism, and new techniques and methodologies such as Open Source Intelligence - pioneered by digital organisations such as Bellingcat - previously seen as niche, are now becoming mainstream. Bellingcat, leveraging freely available datasets and tools such as Google Earth, reverse image search and expert geolocation, has published important exclusives on issues such as the downing of flight MH17 over Ukraine and the Novichok poisonings in the United Kingdom. In a similar manner, BBC News Africa was able to verify killings by soldiers from the Cameroonian army. Artificial intelligence, natural language processing and big data are now being applied to complex tasks such as user content verification and fact-checking.
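To illustrate the kind of verification workflow these tools support, the sketch below shows one way a newsroom script might approximate reverse image matching using perceptual hashing. It is a minimal, hypothetical example rather than part of the FANDANGO platform, assuming the open-source Python libraries Pillow and imagehash; the file paths and function names are placeholders.

```python
# Illustrative sketch only: approximating "reverse image" matching with
# perceptual hashing. Assumes the open-source Pillow and imagehash libraries;
# file paths are hypothetical placeholders, not FANDANGO components.
from PIL import Image
import imagehash


def find_similar_images(candidate_path, reference_paths, max_distance=8):
    """Return reference images whose perceptual hash is close to the candidate's.

    A small Hamming distance between perceptual hashes suggests the candidate is
    a copy, crop or re-encoding of a known image - a common first check when
    deciding whether a "breaking news" photo is actually recycled older material.
    """
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    matches = []
    for reference in reference_paths:
        # Subtracting two image hashes yields their Hamming distance.
        distance = candidate_hash - imagehash.phash(Image.open(reference))
        if distance <= max_distance:
            matches.append((reference, distance))
    return sorted(matches, key=lambda pair: pair[1])


# Hypothetical usage: compare a suspicious photo against a small local archive.
# print(find_similar_images("suspicious.jpg", ["archive/photo1.jpg", "archive/photo2.jpg"]))
```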

Of particular interest to FANDANGO is the application of deep learning to video manipulation in order to create new forms of misinformation, commonly known as “deep fakes”. Newman predicts the popularisation of this type of content, as the technology involved, previously complex and demanding specialised resources, becomes available to a general audience:

With sophisticated video manipulation technology now openly available many fear a new wave of misinformation. Software such as Fake App has made the technology easy to apply for those with basic programming skills – not least with free tutorials readily available on YouTube. A video of Barack Obama calling Trump a ‘total and complete dipshit’, was released in April by film director Jordan Peele in conjunction with BuzzFeed as a way of raising awareness about how AI-generated synthetic media could be used to distort and manipulate reality. Deep fakes have also been widely used to insert celebrities into pornography and to add the actor Nicolas Cage into movie scenes. Expect a proliferation of deep fakes in 2019 including the first attempts to deploy these powerful technologies to literally put words into the mouths of political opponents.

The increased sophistication and volume of misinformation, and its increasingly visual nature, highlight the need for further research into the application of AI, Big Data and Machine Learning, as well as for platforms like FANDANGO. They also highlight the need to continue evolving FANDANGO’s features and capabilities after its launch, in order to adapt to the changing characteristics of misinformation.