Tech platforms have become virtual battlegrounds that hostile foreign powers use to spread disinformation across the EU. Eight European Prime Ministers addressed an open letter to big tech companies, urging them to join forces with democratic governments and civil society to protect the integrity of information and ensure the security of European states.
The Prime Ministers of the Republic of Moldova, Dorin Recean; the Czech Republic, Petr Fiala; the Slovak Republic, Eduard Heger; Estonia, Kaja Kallas; Latvia, Krišjānis Kariņš; Poland, Mateusz Morawiecki; Lithuania, Ingrida Šimonytė; and Ukraine, Denys Shmyhal, signed the “Open Letter to Big Tech Companies”.
“Dear CEOs of Big Tech,
We are writing to you with a sense of urgency and a call to action. Democracies across the world are fighting against disinformation that undermines our peace and stability, and we need your support to emerge victorious.
Tech platforms like yours have become virtual battlegrounds and hostile foreign powers are using them to spread false narratives that contradict reporting from fact-based news outlets. Disinformation is one of their most important and far-reaching weapons. It creates and spreads false narratives to strategically advance malign goals.
Moldova has been at the forefront of an information war since Russia’s brutal invasion of its next-door neighbour, Ukraine. However, all our countries are under attack, too, because while direct targets differ, the ultimate goals of information warfare are universal.
Foreign information manipulation and interference, including disinformation, is being deployed to destabilize our countries, weaken our democracies, derail Moldova’s and Ukraine’s accession to the European Union, and undermine our support for Ukraine amid Russia’s war of aggression.
Social media has become a potent channel for spreading false and manipulative narratives. Paid ads and artificial amplification on Meta’s platforms, including Facebook, are often used to call for violent social unrest, bring violence to the streets and destabilize governments.
Big tech companies should be vigilant and resist being used as a means of advancing such goals. They should take steps to ensure that their platforms are not used to spread propaganda or disinformation that promotes war or justifies war crimes, crimes against humanity, or other forms of violence.
Big Tech companies should increase cooperation and engagement with a wide range of stakeholders – governments, civil society, experts, academia, independent media and fact-checkers. They are essential partners for an effective whole-of-society response to the threat.
While it is commendable that social media companies continuously update their content moderation policies, upgrade their moderation capabilities, apply content labels, and introduce restrictions on the sharing of content via messaging apps, more needs to be done.
Several concrete actions can help our common cause:
Online platforms should take concrete measures to prevent their services from being used as tools and means to advance malicious objectives. This includes refraining from accepting payments from individuals who have been sanctioned for their actions against democracy and human rights.
Algorithmic designs should prioritize accuracy and truthfulness over engagement when promoting content. They must also be more transparent. The public should know what online platforms’ policies are and how they are enforced. It is crucial that the research community has free or affordable access to platforms’ data to understand the tactics and techniques of manipulative campaigns and hostile actors.
Platforms should dedicate adequate staff and financial resources to effectively respond to the challenges of content moderation, particularly in the complex field of hate speech, where automated algorithms may not suffice and human review is crucial.
Platforms should address the growing threat to democracies posed by deepfakes and other AI-generated disinformation, especially from hostile foreign actors. Platforms must ensure that deepfakes and texts written by artificial intelligence are clearly marked so that automated manipulative campaigns can be identified. Sustained investment in tools for identifying deepfakes and automatically generated texts is therefore needed.
A consistent global approach to regulation – and self-regulation by big tech – is needed to respond to these issues. The global dominance of a limited number of players makes this need even more pressing.
This is a call to action because foreign information manipulation and interference, including disinformation campaigns, pose a threat to democracy, stability, and national security. Big tech companies have the power to be vital allies in our common effort to tackle hostile information attacks against democracies and the international rules-based order. We urge you to join forces with democratic governments and civil society and work together to protect the integrity of information and ensure the security of our societies.
Sincerely”