
Disinformation in the Digital Age: the need to regulate

Authors: Clodagh Quain and Deirdre Ni Cheallacháin

Introduction

Disinformation has been defined by the European Commission’s High-Level Expert Group on Fake News and Online Disinformation as “all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit”. While disinformation is not a new phenomenon, the availability of individuals’ data, combined with the power of modern algorithms and the reach of digital platforms, has created optimal conditions for it to be weaponised. This weaponisation was evident during political events such as the 2016 Brexit referendum and the 2016 US Presidential election, when campaigns of disinformation conducted on digital platforms were targeted and wide-ranging. How best to address the dissemination of disinformation online, and who is best placed to do so, has given rise to much discussion.

Digital platforms have been defined by McKinsey as “global platforms” which “allow a range of digital products to be built on top of them”. In the case of social media platforms, advertising and media are the digital products. While digital platforms offer considerable benefits, such as increased access to information and the amplification of under-represented viewpoints, disinformation spread through them can disrupt and undermine the functioning of democracies with considerable velocity.

Disinformation: a threat to democracy

Disinformation in the digital age risks corroding fundamental planks of democracy, such as independent evidence-based journalism and electoral integrity. Investigative journalism informs the opinion-formation process and is a vital force in a democratic society.

Journalists currently compete in an increasingly crowded market in which digital platforms draw away the revenue of traditional content publishers. While free markets allow for fair and legal competition, content posted on digital platforms is not subject to the same filtering or editing processes as content hosted by traditional content publishers. At the EU level, for example, the eCommerce Directive 2000/31/EC currently exempts digital platforms from liability for content after it has been published, provided they remove illegal content, currently understood to include online terrorist propaganda and speech inciting violence and hatred, once it has been brought to their attention. However, the eCommerce Directive 2000/31/EC was adopted in 2000, before the emergence of today’s media landscape and before platforms began to curate content for users.

Disinformation distorts the deliberative democratic process by affecting electoral integrity. Studies have shown that false information spreads “farther, faster, deeper and more broadly” due to its sensationalist and emotive nature. Advances in technology, particularly improved editing techniques and the availability of personal data, mean that people increasingly consume a personalised diet of disinformation in echo chambers, receiving versions of events that reinforce their existing conceptions. Consequently, citizens’ ability to make informed decisions is hampered. As freedom of expression is also a defining feature of a democratic society, this right needs to be preserved. However, how best to strike a balance between protecting freedom of expression and preventing disinformation from obstructing the democratic process remains under discussion among politicians, civil society and the technology industry in international fora.

What can be done?

Digital platforms have been using self-regulatory measures to address disinformation and to ‘correct the record’. As part of efforts to counter disinformation, for example, the social media platform Facebook can label posts whose content is considered partly or wholly false by external fact-checking organisations. The impact of this measure, however, is unknown, given that a large amount of unchecked content can still be disseminated privately through messaging applications.

As part of global efforts, the International Grand Committee (IGC) on ‘Fake News’ and Disinformation brings together international parliamentarians with expertise in the subject matter to create a roadmap for a safer and more secure online experience for users. The third meeting took place in Dublin on 6-7 November 2019, after gatherings in Westminster in November 2018 and Ottawa in May 2019. On this occasion in Dublin, a group of states, including Ireland, agreed to recommend the introduction of a moratorium on micro-targeted political advertising containing false or misleading information until regulation was in place.

In October 2018, the European Commission, in collaboration with industry representatives, developed a self-regulatory Code of Practice on Disinformation, which was signed by Facebook, Google, Twitter, and a number of advertisers and advertising industry bodies that participated in its development. The Code sets out broad commitments to address transparency in political advertising, fake accounts and the monetisation of disinformation.

In October 2019, the European Commission announced further plans to establish a European Digital Media Observatory as a hub for fact-checkers, academics and researchers to collaborate with each other and actively work with media organisations and experts. This initiative aims to support public authorities, to foster the development of an EU market for fact-checking services, and to ensure secure access to platform data so that academic researchers can further understand disinformation. At present, researchers lack this data, which hinders the development of evidence-based policy responses.

The EU: a Global Standard Setter?

Although the EU Code of Practice on Disinformation is significant in that it represents industry-wide consensus on action to combat disinformation on a global scale, its efficacy in meeting its wide-ranging aims is contested because it is self-regulatory. Signatories commit only to efforts that are “commercially reasonable”, a standard that is not objectively verifiable.

On 30 October 2019, the European Commission published the signatories’ first annual self-assessment reports under the Code. While Commissioner Julian King acknowledged that the monthly reporting carried out ahead of the European Parliament elections contributed to limiting the space for interference, he questioned the “actual impact” of the self-regulatory measures and the independent scrutiny mechanisms. The European Commission’s overall assessment of the effectiveness of the Code of Practice is ongoing and it will present its comprehensive assessment in early 2020.

The EU could play a role in establishing standards and best practices in this area as it has been at the forefront in developing regulation in the digital sphere, such as the GDPR.

The eCommerce Directive is set to be replaced by a new Digital Services Act at the end of 2020. The comprehensive Act is expected to contain new transparency rules for political advertising and to make it mandatory for digital platforms to subject their algorithms to regular checks, which would contribute to combating disinformation. The Audiovisual Media Services Directive (AVMSD) has also been revised, and Member States have until September 2020 to transpose the revised version into national legislation. It expands the scope of the previous Directive to include video-sharing platforms, such as YouTube, and platforms that allow live streaming, such as Facebook. These will have the same regulatory obligations for audio-visual content as traditional audio-visual media such as TV and radio. Digital platforms that host audio-visual content will also have to adhere to standards of transparency regarding the commercial communications declared by users when uploading content.

Taking Action at National Level

While political advertising is an integral part of the democratic process, it can be particularly susceptible to disinformation. The first report of the Government’s Interdepartmental Group (IDG) on the Security of the Electoral Process and Disinformation, published in July 2018, determined that, while general risks to the electoral process in Ireland were low, the spread of disinformation online was substantial.

Transparency around political advertising, namely a requirement that funded online political advertising be labelled, was identified as crucial in combatting disinformation. In response to this finding, on 6 November 2019, the Irish Cabinet approved a proposal to regulate the transparency of online political advertising to ensure that the public have access to “legitimate information” that enables them “to make autonomous voting decisions”. This represents recognition by national legislators that there is a regulatory lacuna when it comes to online disinformation and that filling it should not be “left to the market”.

Conclusion

While digital platforms have societal benefits, they have also enabled the weaponisation of disinformation, affording it global reach at high velocity. The task of general oversight and verification requires a clear allocation of responsibility and transparency in the process; it is a task that platforms are hesitant to manage alone.

The global nature of the challenge underlines the need to foster international cooperation and dialogue on effective regulatory approaches. Innovative proposals, such as CIGI’s suggestion of a Digital Stability Board akin to the Financial Stability Board established at G20 level, might offer one option. Additionally, the moratorium on micro-targeted political advertising recently agreed at the IGC provides an interim solution until regulation is in place.

Gauging the appropriate and effective form of regulation is critical. Where there is a market failure to protect aspects of democracy, such as investigative journalism and electoral integrity, there may be more scope for regulatory intervention. The potential role of the EU as an international standard-setter is significant and the new EU Digital Services Act and the revised AVMSD are the first legislative steps in this direction.

The views expressed in this blog are those of the authors, and not those of the IIEA.