The most interesting aspect of this week’s announcement that Coimisiún na Meán has launched a formal investigation into Elon Musk’s X over its compliance, or lack thereof, with European legislation was the brief note that the regulator’s suspicions were stoked by a German NGO, HateAid.
The investigation, which will essentially look into whether X’s reporting and appeals mechanisms are to the EU’s liking, arises in response to “concerns” held by Coimisiún na Meán’s Platform Supervision Team, concerns that were “supplemented by information provided by an NGO, HateAid and a user complaint”.
While there is currently little we can do to glean more information about the user complaint in question, HateAid, fortunately for interested inquirers, is a major – and controversial – non-governmental organisation that has been at work for some years in Germany combating what it refers to as “digital violence,” and so there is much that can be learned about it.
Where better to turn first than its own words regarding its work and modus operandi: “HateAid is a non-profit organisation that promotes human rights in digital space and stands up against digital violence and its consequences at both social and political levels”. “Digital violence” is an interesting term, and one that we’ll come back to shortly.
HateAid provides direct counselling and legal support in cases of so-called “digital violence,” and engages in awareness-raising about “problems” in politics and society. It does all of this in the pursuit of one goal, apparently: “an internet that allows freedom of speech and participation”.
Now, that might seem somewhat paradoxical, given that the organisation is primarily interested in empowering people to combat things they see online, but it’s all in keeping with that phrase “digital violence”.
HateAid “demands” that digital violence be “recognised for what it is: violence against people”.
This is a move not dissimilar to that pulled by other left-wing organisations: an attempt to equate all manner of things with violence, and then, paradoxically, to equate their opposites with violence, too. “Speech is violence,” we heard, especially in the wake of Charlie Kirk’s murder. “Silence is violence,” we heard, in the wake of George Floyd’s killing, and on into the present day in the context of Israel and Gaza.
It is a tactic with multiple effects, but in the case of HateAid and its mission, it primarily seeks to see hate, violent words, misinformation, disinformation, or whatever other phrase you’d like to apply to non-approved communication, dealt with more seriously than it currently is. Dealt with as seriously as a physical transgression.
Because they really do paint unpleasantness on the internet as being just that: deadly serious.
“All those affected who don’t spread digital violence themselves can turn to HateAid. If they wish, they can first receive an emotionally stabilising initial counselling session. If necessary, there can be further consultations with trained counsellors,” HateAid’s website informs visitors.
Those visitors who do engage in digital violence just have to put up with the fact that they’ve chosen to live by the digital sword, and content themselves with the consequences of that. No emotionally stabilising counselling session for them.
As has already been alluded to, the ever-nebulous concepts of “hate” and “systemic disinformation” are also heavily relied upon by HateAid, which argues that as a result of their propagation online, freedom of speech is under more pressure than ever before.
HateAid receives what would appear to be substantial federal funding from the German government. The NGO lists two German federal ministries among its sponsors on its website, and a report from the outlet Nius indicates that since its founding in 2019, the organisation has received €4.7 million in state funding, provided, of course, by the German taxpayer, willingly or unwillingly.
You get the gist of the kind of organisation HateAid is, anyway. Its relevance to the present investigation, though, presumably lies in its status as a “trusted flagger” under Europe’s Digital Services Act (DSA).
Trusted flaggers are “special entities” under the DSA, described as “experts at detecting certain types of illegal content online, such as hate speech or terrorist content, and notifying it to the online platforms”.
“The notices submitted by them must be treated with priority as they are expected to be more accurate than notices submitted by an average user,” the European Commission website reads.
Thanks to the complex framework the EU is building around online content moderation, replete with trusted flaggers such as HateAid and Digital Services Coordinators such as Coimisiún na Meán, collaboration between organisations across the continent is more streamlined than ever, making it all too easy for an organisation like HateAid to raise concerns in jurisdictions it would otherwise have nothing to do with.
But then, Germany has developed something of a culture of exporting censorship in recent years. The infamous NetzDG law, something of a forerunner to the DSA, has received extensive criticism since its introduction in 2017; it required large social media platforms to remove illegal content flagged by German users or face hefty penalties.
The work will not be without its challenges for HateAid and its flagging colleagues, though, with lawyers, politicians and journalists in Germany coming forward to criticise such organisations for, as they put it, confusing legitimate opinion with hate speech, and in doing so contributing not to the flourishing of free speech, but of censorship.
All the same, expect to see more collaboration between Coimisiún na Meán and its European partners in future if there are no significant amendments to the DSA on the horizon – and there do not appear to be.