“Your post goes against our Community Standards (…). Our Standards are applied globally and are based on our community.”

If you receive this notification when logging into your Instagram account, an algorithm has probably decided to delete your latest trendy post, deeming it contrary to Meta’s Community Standards. Many social media platforms use automated tools to moderate the content published by their users. The two main tools used for moderation purposes are matching and machine learning.

Matching techniques compare the digital fingerprint (hash value) of two publications. For example, in the fight against child pornography, software such as PhotoDNA can compare the hash value of photos and videos uploaded by users with a database containing the hash values of photos and videos already identified as child pornography. If the software finds a match between the newly uploaded content and known child abuse content, it will automatically delete the upload and send a “tip” to the competent criminal justice authorities. Social media platforms also use “supervised machine learning” tools (Meta, ML applications). The learning is “supervised” in that the algorithm is trained on pre-existing data that has been labeled into categories (e.g., hate speech or legal speech), so that it “learns” which patterns lead a new piece of content to be classified under each label.
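To make the matching approach more concrete, here is a minimal sketch in Python. It uses a plain SHA-256 hash and an invented toy database of fingerprints; real systems such as PhotoDNA rely on perceptual hashes that survive resizing and small edits, and the removal and reporting steps below are only placeholders.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Digital fingerprint of an upload (a plain SHA-256 hash, for illustration only)."""
    return hashlib.sha256(data).hexdigest()

# Toy database of fingerprints of content already identified as illegal (invented).
known_illegal = {fingerprint(b"previously identified illegal file")}

def moderate_upload(data: bytes) -> str:
    """Compare the upload's fingerprint with the database and decide what to do."""
    if fingerprint(data) in known_illegal:
        # A match triggers automatic removal and a report ("tip") to the authorities.
        return "removed and reported"
    return "published"

print(moderate_upload(b"previously identified illegal file"))  # removed and reported
print(moderate_upload(b"holiday picture"))                     # published
```

The supervised machine learning approach can be sketched in the same spirit. The example below assumes the scikit-learn library and a handful of invented posts labeled as “hate speech” or “legal speech”; production classifiers are of course trained on far larger and more carefully curated datasets.

```python
# Minimal sketch of supervised learning for content classification; assumes scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Pre-existing data labeled into categories (the "supervision"); examples are invented.
posts = [
    "I will hurt you and your family",
    "people like you are vermin",
    "lovely weather in Neuchâtel today",
    "great football match last night",
]
labels = ["hate speech", "hate speech", "legal speech", "legal speech"]

# Training: the algorithm "learns" which word patterns map to which label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Classifying a new, unseen post.
print(model.predict(["you are all vermin"])[0])  # likely "hate speech"
```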

The majority of illegal content posted on social media is detected and removed by technological means, without users being informed of the automation behind the decision. Automated content moderation is justified by the sheer volume of content posted on social media and by the speed at which that content spreads. Automation also helps identify the most offensive content (rape, suicide, murder, cruelty towards animals, etc.) so that human moderators are only exposed to publications that are less harmful to their mental health.

Despite its many advantages, technology is so far unable to interpret the context in which a publication is made, or to recognize its possibly sarcastic, political or artistic character. In many cases, the use of automation can lead to absurd results. For example, Facebook long deleted artistic or historical images showing nudity, such as Gustave Courbet’s “L’Origine du monde” or the “Napalm Girl” photograph taken by Nick Ut during the Vietnam War. Facebook has since changed its policy on nudity and now allows the publication of artworks depicting nude figures.

The results obtained by predictive machine learning algorithms reflect the characteristics of the dataset used to train them. As the computer science adage “garbage in, garbage out” (GIGO) puts it, if the data is biased, the result obtained by the algorithm will also be biased, which can lead to discriminatory outcomes. The algorithm’s opacity (“black box” effect) also makes it difficult – if not impossible – to understand how the algorithm reached its result.
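A small, deliberately contrived illustration of GIGO: if the labeled examples fed to a classifier are biased, that bias is baked into every future decision. The sketch below assumes scikit-learn and uses invented posts in which every mention of the word “breast” was labeled as violating, so the resulting model also flags a harmless health-awareness post.

```python
# Hypothetical illustration of "garbage in, garbage out"; assumes scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Biased (invented) training data: every post containing "breast" was labeled as
# violating, so the model learns to treat the word itself as a signal of violation.
posts = [
    "breast implants for sale",
    "breast cancer screening saves lives",
    "check out this explicit video",
    "nice sunny day at the lake",
]
labels = ["violating", "violating", "violating", "allowed"]

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(posts, labels)

# A perfectly innocuous new post inherits the bias of the training data.
print(model.predict(["breast cancer awareness month"])[0])  # likely "violating"
```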

In October 2020, Instagram’s automated systems removed an image depicting various symptoms of breast cancer, published as part of the “Pink October” campaign. The user behind the publication appealed the removal decision to the Meta Oversight Board, the “Supreme Court” of Facebook and Instagram. In this case, the Board found that the content removal violated the user’s freedom of expression (art. 19 ICCPR) as well as their right to an effective remedy (art. 2 ICCPR). According to the Board, deleting a post by automated means without any human supervision raises concerns regarding users’ fundamental rights. When Meta uses automated means to identify, filter and remove information, users should be informed of the automation behind the decision so that they can request a review of the decision by a human moderator (Decision 2020-004-IG-UA). The Oversight Board thus recognizes a “right to a human decision”, whereby social media platforms must ensure that users can have decisions taken by automated systems reviewed by a human.

The European Digital Services Act (DSA), adopted in October 2022, will impose specific obligations on online platforms regarding their content moderation activities. Users will have to be notified of the automation behind a content removal decision (art. 16 § 6 and 17 § 3 let. c DSA) and be able to appeal the decision before human moderators (art. 20 § 6 DSA). Decisions – automated or not – must state the factual and legal grounds on which they are based (art. 17 § 3 DSA). In the field of copyright protection in the European digital market, online content-sharing platforms must already ensure that decisions to remove content are subject to human review (art. 17 § 9 of the European Directive on copyright and related rights in the Digital Single Market).

The right to a human decision is not intended to prohibit the use of technology. Automation remains essential for filtering the ocean of illegal content posted on social media platforms. However, if online speech were regulated solely by algorithms, this would distort public debate and censor users. As the aforementioned “Pink October” case demonstrates, technology is not yet ready to make decisions without any human supervision. The right to a human decision thus allows social media platforms to use and further develop automated tools for moderation purposes, while protecting users’ freedom of speech online.

Author(s) of this blog post

leonel.constantino@unine.ch

Doctoral assistant in private international law and inheritance law at the Université de Neuchâtel, interested in the legal issues raised by digitalization (blockchain, artificial intelligence, online dispute resolution, digitalization of state justice).