Information wars

Where freedom of speech flourishes, so does misinformation. With more than half the global population now using social media, striking the right balance in managing online spaces is critical for healthy democracies, public safety, and achieving the Sustainable Development Goals (SDGs).


People board a train in Lviv to escape the conflict in Ukraine. The discourse on the origins of the war, its course and its potential resolution has been flooded with misinformation. © Pavlo Palamarchuk/Reuters

In recent years there has been much discussion about the balance between fostering freedom of information and combating misinformation online. In the era of social media, we are faced with contradictory tendencies. On the one hand, opportunities for the circulation of information have grown. But on the other, cases of abuse, misinformation, and political manipulation have skyrocketed.

These issues have become the object of intense attention in recent years. In the aftermath of Donald Trump’s election as US president in 2016, journalists and activists raised concerns about the way in which many right-wing populist candidates had profited from the circulation of misinformation online. Similarly, during the pandemic, the use of social media was widely decried as having contributed to the diffusion of fake news – from the promotion of dangerous alternative cures to the more general suspicion of science. Further, the spotlight has been drawn to manifold forms of online abuse, from rape threats to harassment, and the use of social media to fuel slander and false accusations.

These various events have already led to significant changes in the way in which information is managed online. Facing pressure from policymakers and the public alike, companies such as Facebook, Twitter, and YouTube have been forced to abandon their claim to neutrality over the content they host. Instead, they have progressively employed moderators to delete illegal and harmful content, and to ban accounts that violate platform rules. Facebook alone employs around 15,000 content moderators. The companies have also attempted to tighten their codes of conduct and community guidelines in response to growing calls from users for greater protection from extreme content. 

A delicate balance

These issues came to a head in January 2021. In the aftermath of the Capitol Hill riots, Facebook, Twitter, and other social media platforms banned the accounts of Donald Trump, American TV host Tucker Carlson, and other right-wing politicians and media pundits. Many countries have since looked to tighten their legislation on online content. The EU Digital Services Act requires digital companies to police for illegal content or face billions of dollars in fines. In the US, digital companies have for many years been deemed to fall under First Amendment protection. Yet bipartisan bills currently under discussion in Congress may make these firms liable for user-generated content. In the UK, the Online Safety Bill proposed in April 2022 asserts that digital companies have a duty of care to tackle not only illegal but also harmful content.

This growth in moderation and regulation practices has raised concerns about the risks of increased censorship online. Critics on the left argue that corporations will be able to censor content that goes against their corporate interests. Those on the right, meanwhile, warn of a “cancel culture” in which progressive elites, such as those deemed to be in control of Silicon Valley, stifle public debate. These latter criticisms have been most recently voiced by Elon Musk, who, as part of his bid to buy Twitter, has argued that social media has become hampered by censorship, and that social media companies are treating their users like children. Musk has promised that if he were to control the platform he would ensure that restrictions on Donald Trump and other accounts are lifted. Many other conservative advocates, such as those at the Cato Institute, have also opposed any move toward the public regulation of online content.

This situation raises important questions about what constitutes a healthy contemporary democracy. What can be done to achieve the right balance between free speech and policing misinformation? This question is relevant to UN action and the Sustainable Development Goals (SDGs). Even though there is no specific SDG related to information, opinion, and the internet, the questions raised by online freedom of expression bear on many of the SDGs, such as good health and well-being, gender equality, reduced inequality, climate action, and peace, justice, and strong institutions. In fact, it is widely recognized by academics and policymakers alike that without a healthy public sphere it is impossible to foster dialogue and nurture the political resolve necessary to address these issues.

The need to regulate social media and the internet has often been voiced in other UN actions and resolutions. For example, the UN Human Rights Council has condemned internet access disruption as a violation of human rights, while UNESCO has launched a series of actions to foster freedom of information in line with its constitution, which promotes the “free flow of ideas by word and image.” In June 2020, UN Secretary-General António Guterres launched a Roadmap for Digital Cooperation, and in January 2021 he re-asserted the need for a global regulatory framework to ensure that digital companies do not overplay their hand.

As these initiatives suggest, online freedom of speech is not in contradiction with internet and social media regulation. In fact, the two should go hand in hand. Regardless of specific technological conditions, “free speech” is never completely free: even in countries that celebrate their freedom of speech most vocally, there are always forms of censorship for extreme content, and libel laws to counter reputational harms. Many forms of censorship, such as those against child pornography, present in virtually all countries, or against incitement to terrorism and other criminal acts, are widely accepted as necessary for a healthy public sphere. It is also mostly uncontentious that dangerous content that may result in damage to health and the public – such as quack medical information – should be made illegal. The question is not whether this sort of content should be regulated, but how.

I recommend three courses of action.

First, attempts at state regulation of social media companies, such as those afoot in the US, EU, UK, and other countries, should be welcomed. They are filling an evident regulatory gap, where pre-existing laws on illegal content and libel have not kept pace with the radical change in structural conditions brought by the diffusion of the internet and social media. Companies’ self-regulation can still play an important role. But, given its politically sensitive character, the regulation of public speech should rest as much as possible on democratically accountable state legislation. This will also prevent companies from interfering in public and political debate.

Second, such state regulation should at the same time be as light-touch as possible. It should clearly identify the most extreme and dangerous types of (illegal) content that digital companies must stamp out, while carefully avoiding intervention in areas where such discernments are harder to make. Regulation should not police content that is potentially harmful yet legal, as bills such as the UK’s Online Safety Bill have been accused of attempting to do. Legislation should also allow users to remain anonymous on social media. LGBT people, victims of domestic violence, endangered journalists, activists, and others could well be at risk of danger or harm if deprived of the cover of anonymity.

Third, governments and civil society organizations need to promote a wider range of social media platforms. This means not just more competition among private companies, but also more alternatives that are public or not-for-profit. As Elon Musk has rightly said, social media has de facto become the new town square. However, traditional town squares were never the private property of multinational corporations. Hence, resources should be dedicated to establishing platforms that are not controlled by corporate interests.

The internet and social media are key spaces for opinion and freedom of expression. To guarantee that they contribute to the promotion of vibrant and cohesive polities capable of delivering on the SDGs, we must strike a careful balance between freedom of expression and regulation. The former cannot exist without the latter.
