QEB Hollis Whiteman’s Ruth Broadbent examines the Digital Culture Media and Sport Committee’s interim report into disinformation and fake news
Fake news is nothing new, but in the last two years it has taken centre stage as a concept and a concern. In January 2017, in the wake of public anxiety about 'the perceived trend for the public to distrust traditional sources of news', Parliament set up an inquiry into 'the growing phenomenon'. The Digital, Culture, Media and Sport Committee's first interim Report, Disinformation and 'fake news', published this August, makes interesting reading; the topics of examination range from digital literacy to psychological profiling and the use of data to interfere with political campaigns (in the UK, Argentina, Nigeria, and Trinidad and Tobago, to name a few). The broad scope of the Report is largely due to the flexibility of the inquiry's terms of reference, the first of which asks: 'what is "fake news"? Where does biased but legitimate commentary shade into propaganda and lies?' These questions remain unanswered, and this reticence at once saves the Report from incoherence whilst revealing the difficulty of regulating the field.
Any report into fake news risks falling foul of the fallacy inherent in the term itself: fake news implies the existence of such a thing as true news. Yet while the Report identifies the ambiguity in the term, it falls short of locating the distinction between 'legitimate commentary', or acceptable news, and 'lies and propaganda', or unacceptable news. This is likely to hinder attempts to regulate disinformation and fake news. Until the Committee recognises that these concepts are grounded not in fact but in a consensus of opinion and perspective, the ability to regulate content is likely to be limited.
The Committee does not provide a definition of fake news (or of its preferred term, disinformation), but the term clearly does not relate simply to the recounting of false facts. Since 2016, when BuzzFeed broke the story of teenagers in Macedonia profiting from the publication of fictional stories about Donald Trump, the term has rapidly evolved to embody a broad spectrum of (primarily) journalistic activity, from coverage which is completely false through to misleading or hyper-partisan content. The rapid transformation in the significance of fake news exemplifies how language not only represents, but creates. With every use of the term fake news, or disinformation, the meaning is refracted, and so the speaker redefines the concept to which it alludes. Donald Trump telling a CNN reporter at a press conference, '[y]ou're fake news', before refusing to take his questions is an extreme example of the way in which the public speaker inflects new meaning into the term with each usage. As the Committee recognises, fake news is a moveable feast, the term 'bandied about with no clear idea of what it means, or agreed definition'.
The inevitable question follows: if there is no agreed definition, what are we talking about when we talk about fake news? The Report opens with a description of how fake news threatens 'our democracy and our values', but the threats identified within the Report are not all obviously concerned with the nature of purported news. Vote Leave has been referred to the Metropolitan Police for possible false declaration of campaign spending concerning a payment to AggregateIQ (AIQ), a company 'focused on the use of data in campaigns', and Leave.EU is alleged to have obtained data from an insurance firm in order to 'push their message to groups of people they might not otherwise have had information about' during the Referendum. These examples of alleged election fraud and data misuse are not primarily concerned with the content of the message they promote. Yet the determined ambiguity of the term fake news elides these discrete events under the same banner, and so the narrative that our democracy is under immediate threat is propagated and reiterated.
Perhaps more relevant to the study of the dissemination of 'false and hyper-partisan content' is the Committee's investigation into the influence of Russia in political campaigns. It states that it knows 'Russians used sophisticated targeting techniques and created customised audiences to amplify extreme voices in the [US Presidential Election] campaign'. This is, the Report implies, 'disinformation [as] an unconventional warfare, using technology to disrupt, to magnify and to distort'. The process is strikingly similar to that used by Cambridge Analytica and its associated companies in the SCL Group, subsidiaries of which have engaged in political campaigns around the world since 1994 by 'using specialist communications techniques previously used by the military to combat terror organisations, and to disrupt enemy intelligence'. Disinformation may be unconventional warfare, but it is not unusual. The long list of foreign elections which SCL has allegedly influenced through the 'distortion of facts, or by the micro-targeting of voters' demonstrates how frequently interested parties were engaged in a war of ideologies and influence long before 2016.
Evidently, the spread of hyper-partisan views is not limited to the Internet, or to Russia. The British press is a stalwart of biased reporting. Speaking of the reporting of the 2015 general election, Andrew Neil, chairman of the Spectator, noted how 'all pretence of separation between news and opinion [is] gone, even in "qualities"'. Pretence is the operative word – the news is always subjective. As MacDougall writes in Interpretive Reporting:
at any given moment billions of simultaneous events occur throughout the world… all of these occurrences are potentially news. They do not become so until some purveyor of news gives an account of them. The news, in other words, is the account of the event, not something intrinsic in the event itself.
And so, attempts to wholly remove opinion or bias from reporting are doomed to fail.
Perhaps this explains why, at its heart, the Report is concerned with data misuse as opposed to false narratives. Cambridge Analytica's working model demonstrates the way in which data becomes a valuable asset, enabling interested parties to target voters individually and, in the case of Cambridge Analytica, supposedly to underpin facts with emotions based upon the psychological profile of the audience. Arron Banks of Leave.EU described the micro-targeting of individual voters to the Committee:
‘my experience of social media is it is a firestorm that, just like a bush fire, it blows over the thing. Our skill was creating bush fires and then putting a big fan on and making the fan blow […] the immigration issue was the one that set the wild fires burning’.
In other words, data profiling enables individuals to access information pertinent to them in terms of fact and opinion: the echo chamber is born. Here lies the difference between traditional and online media: the content is largely similar, but the method by which it is disseminated alters the resonance and significance it holds for the consumer.
As long as tech companies allow for the use of data to target individuals, the Internet will be a forum for the effective dissemination of partisan narratives. Attempts to police the narratives of these echo chambers will need to differentiate between content, which is always infused with opinion and some bias, and the method of dissemination. To a degree, our data protection law already empowers regulators to deal with data misuse, an enabler of micro-targeting. There are limitations to the current provisions, and the Report makes a number of recommendations, but the proposed changes are not revolutionary because they do not need to be. Where there are significant legal obstructions to change, the Committee often rises to the radical challenge: for instance, it recommends an end to the publisher/platform debate when defining the scope of tech companies' liability and suggests a new category which would make companies accountable for the material disseminated on their sites. It comes as some surprise, then, that the Committee rejects the Institute of Practitioners in Advertising's call for a total ban on micro-targeted political advertising online. Instead, the Report notes that 'micro-targeting, when carried out in a transparent manner, can be a useful political tool'.
Useful or harmful is a question of perspective. We return to the perennial problem of fake news – the term only encompasses what threatens our interests; what Russia deems fake news, or threatening, is likely to differ. The subjectivity of the concept, paired with the international reach of social media, renders effective regulation of online content unlikely in the current climate.