Today, the European Commission published the latest reports by Facebook, Google and Twitter covering the progress made in March 2019 to fight disinformation. The three online platforms are signatories to the Code of Practice against disinformation and have committed to report monthly on their actions ahead of the European Parliament elections in May 2019.
Vice-President for the Digital Single Market Andrus Ansip, Commissioner for Justice, Consumers and Gender Equality Věra Jourová, Commissioner for the Security Union Julian King, and Commissioner for the Digital Economy and Society Mariya Gabriel welcomed the progress made in a joint statement:
“We appreciate the efforts made by Facebook, Google and Twitter to increase transparency ahead of the European elections. We welcome that the three platforms have taken further action to fulfil their commitments under the Code.
All of them have started labelling political advertisements on their platforms. In particular, Facebook and Twitter have made political advertisement libraries publicly accessible, while Google’s library has entered a testing phase. This provides the public with more transparency around political ads.
However, further technical improvements as well as sharing of methodology and data sets for fake accounts are necessary to allow third-party experts, fact-checkers and researchers to carry out independent evaluation. At the same time, it is regrettable that Google and Twitter have not yet reported further progress regarding transparency of issue-based advertising, meaning issues that are sources of important debate during elections.
We are pleased to see that the collaboration under the Code of Practice has encouraged Facebook, Google and Twitter to take further action to ensure the integrity of their services and fight against malicious bots and fake accounts. In particular, we welcome Google increasing cooperation with fact-checking organisations and networks. Furthermore, all three platforms have been carrying out initiatives to promote media literacy and provide training to journalists and campaign staff.
The voluntary actions taken by the platforms are a step forward to support transparent and inclusive elections and better protect our democratic processes from manipulation, but a lot still remains to be done. We look forward to the next reports from April showing further progress ahead of the European elections.”
Google reported on specific actions taken to improve scrutiny of ad placements in the EU, including a breakdown per Member State. It gave an update on its election ads policy, which it started enforcing on 21 March 2019, and announced the launch of its EU Elections Ads Transparency Report and its searchable ad library, available in April. Google has not reported further progress regarding the definition of issue-based advertising. As in its previous report, it provided global data on the removal of a significant number of YouTube channels for violating its policies on spam, deceptive practices and scams, and impersonation.
Facebook reported on actions taken against ads that violated its policies by containing low-quality, disruptive, misleading or false content, or by circumventing its systems. It provided further information on its political ads policy, which will also apply to Instagram. The company noted the global launch of a new, publicly available Ad Library on 28 March 2019, covering Facebook and Instagram, and highlighted the expansion of access to its Ad Library application programming interface. Facebook reported on the number of fake accounts disabled globally in Q1 of 2019 and on the takedown of eight coordinated inauthentic behaviour networks originating in North Macedonia, Kosovo and Russia. The report did not state whether these networks also affected users in the EU.
Twitter reported an update to its political campaigning ads policy and provided further details on the public disclosure of political ads in Twitter’s Ad Transparency Centre. Twitter provided figures on actions undertaken against spam and fake accounts, but did not provide further insights on these actions and how they relate to activity in the EU. Twitter did not report on any actions to improve the scrutiny of ad placements or provide any metrics with respect to its commitments in this area.
As part of the implementation of the Code of Practice, the platforms met on 16 April 2019 with national regulatory authorities that are part of the European Regulators Group for Audiovisual Media Services (ERGA) to discuss the functionality of their political ads repositories.
The Code of Practice against disinformation
The monthly reporting cycle builds on the Code of Practice, and is part of the Action Plan against disinformation that the European Union adopted last December to build up capabilities and strengthen cooperation between Member States and EU institutions to proactively address the threats posed by disinformation.
The reporting signatories committed to the Code of Practice in October 2018 on a voluntary basis. The Code aims to reach the objectives set out by the Commission’s Communication presented in April 2018 by setting a wide range of commitments:
Disrupt advertising revenue for accounts and websites misrepresenting information and provide advertisers with adequate safety tools and information about websites purveying disinformation.
Enable public disclosure of political advertising and make efforts towards disclosing issue-based advertising.
Have a clear and publicly available policy on identity and online bots and take measures to close fake accounts.
Offer information and tools to help people make informed decisions, and facilitate access to diverse perspectives about topics of public interest, while giving prominence to reliable sources.
Provide researchers with privacy-compliant access to data so they can track and better understand the spread and impact of disinformation.
Ahead of the European elections in May 2019, the Commission is monitoring the progress of the platforms towards meeting the commitments that are most relevant and urgent ahead of the election campaign: scrutiny of ad placements; political and issue-based advertising; and integrity of services.
The Code of Practice also goes hand-in-hand with the Recommendation included in the election package announced by President Juncker in the 2018 State of the Union Address to ensure free, fair and secure European Parliament elections. The measures include greater transparency in online political advertisements and the possibility to impose sanctions for the illegal use of personal data to deliberately influence the outcome of the European elections. Member States were also advised to set up a national election cooperation network of relevant authorities – such as electoral, cybersecurity, data protection and law enforcement authorities – and to appoint a contact point to participate in a European-level election cooperation network. The first meeting at the European level took place on 21 January 2019, the second one on 27 February, and the third on 4 April.
Today’s reports cover measures taken by online platforms in March 2019. They allow the Commission to verify that effective policies to ensure integrity of the electoral processes are in place before the European elections in May 2019.
The Commission will carry out a comprehensive assessment of the Code’s initial 12-month period by the end of 2019. Should the results prove unsatisfactory, the Commission may propose further actions, including of a regulatory nature.