Social media giants Meta and X approved ads targeting users in Germany with violent anti-Muslim and anti-Jewish hate speech in the run-up to the country's federal elections, according to new research from Eko, a corporate responsibility nonprofit campaign group.
The group's researchers tested whether the two platforms' ad review systems would approve or reject submissions containing hateful and violent messaging targeting minorities ahead of an election in which immigration has been central to mainstream political discourse, including ads containing anti-Muslim slurs; calls for immigrants to be imprisoned in concentration camps or gassed; and AI-generated imagery of mosques and synagogues being burned.
Most of the test ads were approved within hours of being submitted for review in mid-February. Germany's federal elections take place on Sunday, February 23.
Hate speech ads scheduled
Eko said X approved all 10 of the hate speech ads its researchers submitted just days before the federal election, while Meta approved half of them (five ads) to run on Facebook (and potentially Instagram), rejecting the other five.
The reason Meta gave for the five rejections indicated that the platform believed the ads could pose risks of political or social sensitivity that might influence voting.
The five ads Meta approved, however, included violent hate speech likening Muslim refugees to a "virus," "vermin," or "rodents," branding Muslim migrants as "rapists," and calling for them to be sterilized, burned, or gassed. Meta also approved an ad calling for synagogues to be set on fire to "stop the globalist Jewish rat agenda."
As an aside, Eko notes that none of the AI-generated images used to illustrate the hate speech ads were labeled as artificial, yet half of the 10 ads were still approved by Meta, despite the company having a policy that requires disclosure of the use of AI imagery in ads about social issues, elections, or politics.
X, meanwhile, approved all five of these hateful ads, along with a further five containing similarly violent hate speech targeting Muslims and Jews.
These additional approved ads included messaging attacking "rodent" immigrants that the ad copy claimed are "flooding" the country "to steal our democracy," and an antisemitic slur suggesting that Jews lie about climate change in order to destroy European industry and accrue economic power.
The latter ad was paired with AI-generated imagery depicting a group of shadowy men sitting around a table surrounded by stacks of gold bars, with a Star of David on the wall above them; visuals that also lean heavily on antisemitic tropes.
Another ad X approved contained a direct attack on the SPD, the center-left party currently leading Germany's coalition government, falsely claiming the party wants to take in 60 million Muslim refugees from the Middle East, before going on to try to provoke a violent response. X also scheduled an ad suggesting "leftists" want "open borders" and calling for the extermination of Muslim "rapists."
Elon Musk, X's owner, has used the social media platform, where he has close to 220 million followers, to personally intervene in the German election. In a December tweet, he called on German voters to back the far-right AfD party to "save Germany." He has also hosted a livestream with AfD leader Alice Weidel on X.
Eko's researchers disabled all the test ads before any that had been approved were scheduled to run, ensuring no platform users were exposed to the violent hate speech.
It says the tests highlight glaring flaws in the ad platforms' approach to content moderation. Indeed, in the case of X, it's not clear whether the platform performs any ad moderation at all, given that all 10 of the violent hate speech ads were quickly approved for display.
The findings also suggest that the ad platforms could be earning revenue as a result of distributing violent hate speech.
EU’s Digital Services Act in the frame
Eko's tests suggest that neither platform is properly enforcing the bans on hate speech that both claim to apply to ad content in their own policies. Moreover, in Meta's case, Eko reached the same conclusion after conducting a similar test in 2023, ahead of new EU online governance rules coming into force, suggesting the regime has had no effect on how the company operates.
"Our findings suggest that Meta's AI-driven ad moderation systems remain fundamentally broken, despite the Digital Services Act (DSA) now being in full effect," an Eko spokesperson told TechCrunch.
“Instead of the advertising assessment process or the policy of the hateful speech, Meta seems to return across the board,” they added, pointing to the recent announcement of the company about reversing moderation and facts control policy As a sign of “active regression” that they stated that it has a direct collision course with DSA rules for systemic risks.
Eko has submitted its latest findings to the European Commission, which oversees enforcement of key aspects of the DSA on the pair of social media giants. It also said it shared the results with both companies, but neither responded.
The EU has open DSA investigations into Meta and X, which include concerns about election security and illegal content, but the Commission has yet to conclude these proceedings. Though, in April, it said it suspects Meta of inadequate moderation of political ads.
A preliminary decision on a portion of its DSA probe of X, announced in July, included suspicions that the platform is failing to comply with the regulation's ad transparency rules. However, the full investigation, which kicked off in December 2023, also concerns illegal content risks, and the EU has yet to arrive at any findings on the bulk of the probe more than a year later.
Confirmed breaches of the DSA can attract penalties of up to 6% of global annual turnover, while systemic non-compliance could even lead to regional access to violating platforms being temporarily blocked.
But, for now, the EU is still taking its time to make up its mind on the Meta and X probes, so, pending final decisions, any DSA sanctions remain up in the air.
Meanwhile, it's now just a matter of hours before German voters go to the polls, and a growing body of civil society research suggests that the EU's flagship online governance regulation has failed to shield the major EU economy's democratic process from a range of tech-fueled threats.
Earlier this week, Global Witness released the results of tests of X and TikTok's algorithmic "For You" feeds in Germany, which suggest the platforms are biased in favor of promoting AfD content versus content from other political parties. Civil society researchers have also accused X of blocking their access to data intended to let them study election security risks in the run-up to the German poll, access the DSA is supposed to enable.
"The European Commission has taken important steps by opening DSA investigations into both Meta and X. Now we need to see the Commission take strong action to address the concerns raised as part of these investigations," Eko's spokesperson also told us.
"Our findings, alongside mounting evidence from other civil society groups, show that Big Tech will not clean up its platforms voluntarily. Meta and X continue to allow illegal hate speech, incitement to violence, and election disinformation to spread at scale, despite their legal obligations under the DSA," the spokesperson added. (We have withheld the spokesperson's name to prevent harassment.)
"Regulators must take strong action, both in enforcing the DSA and also, for example, by implementing pre-election mitigation measures. This could include turning off profiling-based recommender systems immediately before elections, and implementing other appropriate 'break-glass' measures to prevent algorithmic amplification of borderline content, such as hateful content, in the run-up to elections."
The campaign group also warns that the EU is now facing pressure from the Trump administration to soften its approach to regulating Big Tech. "In the current political climate, there's a real danger that the Commission doesn't fully enforce these new laws as a concession to the U.S.," they suggest.