Algorithmic censorship, the best tool against online hate speech?

October 21, 2021
  • 🟠 Face-to-face: at the Canòdrom stage (It will NOT be streamed).
    Carrer de Concepción Arenal 165, El Congrés i els Indians, Barcelona, Barcelona, Catalunya, Espanya
  • 18:30 - 19:00 CEST
Official meeting


🎙Xarxa de Ràdios Comunitàries de Barcelona
🎙Spotify
🎙Ivoox

In view of the growing concern about the spread of hate speech on the Internet, and especially on social platforms, the Ministry of Inclusion, Social Security and Migration has unveiled an algorithmic protocol aimed at automating the detection of such illegal speech. It is a natural language processing system trained on a set of keywords considered decisive for detecting possible hate speech.
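
The Ministry has not published the protocol's internals, so any concrete picture is necessarily speculative. The following is a minimal sketch, assuming a simple keyword-matching core of the kind the description suggests; the keyword set, threshold and function name are all hypothetical, not the Ministry's actual criteria:

```python
# Purely illustrative sketch: the Ministry's real protocol, keyword list
# and scoring rules are not public. Everything here is an assumption.
import re

# Hypothetical placeholder terms, NOT the protocol's actual criteria.
FLAGGED_KEYWORDS = {"slur_a", "slur_b", "incitement_phrase"}
THRESHOLD = 1  # assumed rule: a single hit is enough to flag the text

def flag_for_review(text: str) -> bool:
    """Return True when the text contains at least THRESHOLD keyword hits."""
    tokens = re.findall(r"\w+", text.lower())
    hits = sum(1 for token in tokens if token in FLAGGED_KEYWORDS)
    return hits >= THRESHOLD

print(flag_for_review("A post containing slur_a."))  # True: flagged
```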

This decision comes in a context in which the use of artificial intelligence systems by public institutions is on the rise, yet often escapes public knowledge and debate. This paper analyses the problem from the perspective of democracy, social justice and the defence of human rights.

A first question to assess is how this system may affect society and, in this particular case, fundamental rights such as privacy, data protection and freedom of expression. For that reflection to take place, it is important to demand, on the one hand, transparency about the system's objectives, the actors involved in its adoption and the people or groups affected, and, on the other, the establishment of a social consensus on the matter.

Secondly, it is important to pay attention to the more technical aspects of the tool, in order to assess whether it is coherently adapted to the social reality in which it operates. This implies knowing how such systems are configured: the criteria on which they are programmed, how they are trained, the potential biases involved, and so on. In most cases these systems operate as black boxes, preventing understanding, democratic control and even traceability of the reasons behind the final decision.
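
To make the bias concern concrete: a pure keyword matcher of the kind sketched above cannot distinguish hate speech from speech about hate speech, so counter-speech and reporting can be flagged alongside the content they denounce:

```python
# Continuing the illustrative sketch above: a pure keyword matcher sees
# no difference between using a term and quoting it to denounce it.
report = "Moderators, users keep posting slur_a in the comments."
print(flag_for_review(report))  # True: the report itself is flagged
```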

The algorithm included in this government protocol is a pilot test, but what will its impact be when it is put into use? In the field of freedom of expression, the companies that manage social platforms have been criticised for years for the opacity of their content moderation criteria, which are neither public nor do they respect international standards. The fact that public institutions are now trying to establish a protocol to overcome this monopoly of private companies is good news. But to prevent public tools from falling into the same problems of opacity and anti-democratic functioning that have existed up to now, it is necessary to promote free and informed citizen debate on the type of technology we want and what we want it for, and also to establish effective mechanisms for collective control and decision making.

* This session will be conducted in Spanish. It will NOT be streamed.
