How can we improve public understanding of deplatforming certain public figures?

Arjuna Sathiaseelan
4 min read · Jun 5, 2021

In light of the current controversies about the implications of tech companies “deplatforming” certain public figures, I wanted to pen my thoughts on what distinctive contributions the academic community could make to the debate to improve public understanding of the issues involved.

Deplatforming has been seen as a means to reduce online harm. Major social platforms have long been the main congregation points for far-right, neo-Nazi, Islamic-fundamentalist and similar groups because of their immense public reach. It is known that when these groups and their public figureheads are deplatformed from the major social platforms, they move on to lesser-known alternative platforms that promote freedom of speech, such as 4chan, Gab, Parler or Reddit, or to more privacy-preserving mediums like Signal or Telegram.

Debates (& some interesting research questions):

a. Is deplatforming effective?

Studies have shown that (1) when these groups move from one platform to another, they lose followers and hence effectiveness, in some cases fading into complete oblivion; and (2) when these groups move to less moderated platforms that promote freedom of speech, they become more virulent, making them incredibly difficult for state actors to monitor and hence giving them the potential to inflict more harm. It is essential to educate the public (and the diverse stakeholders) on the implications of deplatforming, based on data.

b. Is deplatforming curbing freedom of speech?

There is a debate over whether deplatforming is a violation of freedom of speech. Section 230 of the Communications Decency Act in the US and the Digital Services Act in the EU ensure that platform providers acting as intermediaries are not legally liable for the content posted by their end users. In the US, for example, these tech actors cannot be regulated because of the protection offered by the First Amendment, which shields them from government interference when it comes to regulating public speech. Hence the tech actors have full control over what is allowed on their platforms. So the debate is about how much control tech actors should have over freedom of speech, and whether they should be regulated or subjected to democratic oversight to ensure their practices are consistent with human rights law. There is also a much larger problem space that is currently being overlooked: several Global South countries have far more draconian laws, imposed by state actors, curtailing freedom of speech. Those voices need to be heard.

c. Is there a lack of a consistent approach to deplatforming?

There is a genuine concern over whether deplatforming Trump was the right approach, considering that for several years platforms have been used as an effective medium for online harm by other major political figures. For example, the Supreme Leader of Iran, who has openly incited violence against Israel, remains on Twitter. Facebook was used to incite violence against the Rohingya, and Facebook’s response to that crisis was abysmal. In the Trump case, there is a genuine need to listen to the voices of the supporters/conservatives behind the 74 million votes cast for Trump. There is concern that banning Trump was politically motivated.

d. Should tech actors have this much power?

There is a need to break the monopolistic/oligopolistic nature of tech actors. Should they be broken up along national boundaries? Or should we pursue alternatives: state-owned networks, or decentralised infrastructures including community-owned networks?

e. Would alternative methods for moderation strike a balance?

Candidate approaches include AI moderation tools that hide hate speech/offensive content (based on community feedback) rather than removing it outright, and ranking algorithms tuned by community feedback, as sketched below.
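To make this concrete, here is a minimal Python sketch of such a hide-and-downrank pipeline. Everything in it is an illustrative assumption: `toxicity_score` is a hypothetical stand-in for a real classifier, and the threshold, flag weighting and data shapes are invented for the example, not taken from any platform’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    upvotes: int = 0
    flags: int = 0          # community "this is offensive" reports
    hidden: bool = False    # hidden behind a click-through, not deleted

def toxicity_score(text: str) -> float:
    """Hypothetical stand-in for a real ML toxicity classifier (returns 0.0-1.0)."""
    blocklist = {"slur1", "slur2"}  # placeholder terms for the example
    words = text.lower().split()
    hits = sum(w in blocklist for w in words)
    return min(1.0, 5.0 * hits / max(len(words), 1))

def moderate(post: Post, hide_threshold: float = 0.7) -> Post:
    """Hide (rather than delete) content the classifier deems likely offensive."""
    if toxicity_score(post.text) >= hide_threshold:
        post.hidden = True
    return post

def rank_key(post: Post, flag_weight: float = 2.0) -> float:
    """Rank by community feedback: flags down-rank harder than upvotes up-rank."""
    return post.upvotes - flag_weight * post.flags

feed = [moderate(p) for p in (Post("hello world", upvotes=10),
                              Post("slur1 slur1", flags=7))]
feed.sort(key=rank_key, reverse=True)
for p in feed:
    print("[hidden pending review]" if p.hidden else p.text)
```

The design point is that hiding behind a click-through and down-ranking reduce reach while leaving the speech itself accessible, which is exactly the balance this question is asking about.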

Addressing these debates/research questions requires sustained stakeholder engagement (including public consultation with representative groups, ensuring inclusivity, particularly of the Global South), where the discussions are driven by meaningful (qualitative and quantitative) data.

Some thoughts on how the academic community can contribute:

1. Establish a working group that brings in the different stakeholders: government, tech actors, regulators, civil society organisations, the media and the public, and facilitate meaningful discussions. Hold regular seminars, workshops and round-table discussions to generate quantifiable outcomes via reports, white papers and policy recommendations. Also participate in other similar working groups, like the Free Speech and Intermediaries Working Group (FSWG) chaired by the Center for Democracy and Technology (CDT), and facilitate the sharing of data and outputs.

2. The working group should be driven by quantitative and qualitative data: carrying out large-scale/longitudinal studies, including public sentiment analysis, supplemented with public consultations through surveys and case-study interviews (utilising innovative user-feedback approaches such as gamification and creative storytelling). There is an opportunity to carry out research directly as well as via collaboration. All data, tools and processes should be open sourced. (A minimal sketch of the sentiment-analysis step appears after this list.)

3. Outreach: disseminate key findings to the public through the media as well as public forums (developing innovative approaches, e.g. gamification, storytelling and memes, for maximum reach and engagement).
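As a concrete starting point for the sentiment-analysis component in point 2, the sketch below scores a handful of posts with NLTK’s off-the-shelf VADER analyser. The sample posts and the simple mean aggregation are illustrative assumptions; a real longitudinal study would need a proper corpus, a sampling strategy and validated instruments.

```python
# pip install nltk
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

# Illustrative stand-ins for posts collected around a deplatforming event.
posts = [
    "Finally, the platform did the right thing.",
    "This ban is outrageous censorship.",
    "Not sure this changes anything in practice.",
]

sia = SentimentIntensityAnalyzer()
# VADER's compound score ranges from -1 (most negative) to +1 (most positive).
scores = [sia.polarity_scores(p)["compound"] for p in posts]

for post, score in zip(posts, scores):
    print(f"{score:+.2f}  {post}")
print(f"mean sentiment: {sum(scores) / len(scores):+.2f}")
```

Tracking a statistic like this mean over time, before and after a deplatforming event, would give a crude first signal of how public sentiment shifts, to be validated against the surveys and interviews above.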
