While there are no easy fixes, there are many opportunities to mitigate and build resilience against digital disinformation. Too often, politicians, regulators, and technology companies focus their efforts on combating fake news rather than on mending vulnerabilities in the digital ecosystem as a whole. The past few years have demonstrated how effective an environment the internet is for embedding disinformation in the public digital sphere.

Although much of the current debate around disinformation focuses on content – the lies and confusion spread by fake news – it is the ability of those with malign intent to leverage the entire digital ecosystem that is the real challenge. Only through an understanding of the vulnerabilities inherent in the digital ecosystem will effective responses become apparent. Elucidating and addressing those vulnerabilities will require Herculean efforts from politicians, regulators, tech companies, and everyone who enjoys the freedom and convenience of the digital universe. There are, however, steps that can be taken now to kickstart those efforts.

Transparency – Technology companies and regulators must push for and cooperate to increase the visibility of data flows occurring on a multi-platform level by opening up analysis of public data for the public good. In February 2019, for example, Mozilla issued an open letter to Facebook demanding transparency in the political advertising that occurred on the platform ahead of the 2019 EU elections. Several civil society organizations and companies, including Alto Analytics, supported the letter. Facebook responded by announcing that it would open up its political advertising archive in March 2019. Unfortunately, the quality and accessibility of the Facebook data are far from ideal. New laws and regulations mandating active transparency would be helpful.

Restrictions – Restrictions and regulations on explicit hate speech and violent extremist content need to be strengthened and actively enforced. In 2017 Germany enacted a law widely known as NetzDG, which requires social media sites to react quickly in removing hate speech, fake news, and illegal material in order to avoid potentially hefty fines. The vagueness of the criteria for what falls under the law, and concerns about censorship overreach, have been consistent criticisms, exemplifying the complexities of regulating online speech. But as a model for what new regulations could look like, the law is a reasonable starting point.

Regulation – Regulations stipulating the accountability of digital platforms need to be clearer and more enforceable. In the wake of the 2016 US presidential election, Facebook, Twitter, and other tech giants have faced increasing scrutiny. Politicians from across the globe have called for regulations ranging from imposing platform “duty of care” obligations to totally dismantling the platforms themselves. Although regulation is only one piece of the puzzle, clarity and enforceability are essential to any real accountability.

Education – Multi-layered investments in awareness and education programs which encourage individual responsibility and safeguarding across all age and demographic groups are essential. For example, the News Literacy Project, launched in 2013, works with journalists and educators to provide students with the critical assessment skills necessary to ask the right questions and discern fact from fiction. These types of capacity-building projects must be active and robust at each level of the digital ecosystem, from the consumers of information to journalists, lawmakers, technology professionals, and others.

Redesign – A reformulation of digital business models and shareholder incentives is crucial to redesigning the attention economy. One of NiemanLab’s 2019 predictions for the future of journalism points to a shift (or return) to a model in which the prime currency of consumer value is quality journalism backed by subscriptions, rather than clicks and metrics designed to capture time and attention. Many media companies, such as Bloomberg, Wired, BuzzFeed News, Medium, Business Insider, and Quartz, have successfully moved to either paywalls or premium service models. The attention economy is proving to be harmful to both users and the digital environment, especially given the ease with which vulnerable users and entities can be exploited. A move toward models less dependent on time spent online and clicks could offer creative incentives for the digital economy to focus on “long-term value instead of short-term gratification,” as Gideon Lichfield of the MIT Technology Review states.

Journalism – Capacity building and support for both legitimate fact-checking organizations and entities that track companies’ public good accountability need increased attention and investment. A community media initiative known as the Listening Post Collective aims to provide journalists, newsrooms, and non-profits with the tools and advice to create meaningful conversations in their communities. Facilitating these conversations involves listening in order to practice journalism in ways that respond to communities’ informational needs, reflect their lives, and enable them to make informed decisions. Another global project, First Draft, fights misinformation and disinformation through fieldwork, research, and educational initiatives. First Draft hosts a global verification and collaborative investigation network through its CrossCheck initiative, which connects journalists, academia, business, and civil society worldwide to further dialogue and develop solutions for effective journalism in a digital ecosystem grappling with constantly evolving challenges. These are powerful examples of local and global initiatives that are fundamental to achieving collaboration and consensus on journalism’s role in mediating the disinformation landscape.

Active Defense – An effective strategy for disseminating disinformation has been to attack the legitimacy of the institutions that represent the established and authoritative consensus. The attacks on experts have been particularly corrosive in science-based areas such as climate change and vaccines. Without support, expert institutions such as universities and other academic bodies, and in some cases government and other public institutions, will struggle to maintain their reputation as trusted authorities. As a consequence, the loudest voices in public debate – whatever the facts – will prevail.

Limitations – The tactics used to target individuals and communities through advertisements, publicity, and other communications, such as those Russia successfully used in the 2016 US election, need to be better understood and actively moderated. Today, algorithmic acceleration of hateful and divisive content is more powerful than ever, making users of social platforms increasingly vulnerable to content that is intentionally targeted for their consumption. When companies, individuals, and other entities can pay to reach individuals with laser-sharp precision, the digital environment suffers serious distortions and imbalances. In early 2018, Unilever and Procter & Gamble took a step in the right direction by threatening to pull ads from major digital platforms if the social media companies failed to address “toxic” online content head-on. Limiting the targeting of individuals and communities requires a multi-layered solution on the part of regulators, the business community, and the technology platforms.

Final Thoughts

The action points outlined here are a guide to what policymakers, technology companies, academia, and other key stakeholders can do today to address some of the key issues around digital disinformation. Some of the solutions suggested here will be complex, at least as complex as the intricate problems inherent in today’s digital ecosystem. In an environment in which all component pieces have interconnected causes and effects, it is imperative that solutions are conceived and implemented in a cross-cutting manner, with the cooperation of many different stakeholders.

This article is part of a series by Alejandro Romero, CEO and Founder of Alto Data Analytics, about the workings of disinformation in the digital ecosystem. It was originally published on the Atlantic Council’s Disinfo Portal.

Interested in our work? Please contact us at info@alto-analytics.com. To learn more about Alto Analytics, subscribe to our newsletter below.