User experience belies the anti-racism policies of social networks


The world’s largest social networks say racism is not welcome on their platforms, but a combination of weak enforcement and weak rules means hate remains prevalent.

Within hours of England’s loss to Italy in the final of the European football championship, both Twitter and Facebook, which owns Instagram, issued statements condemning the wave of racist abuse.

Twitter said on Monday morning: “The abhorrent racist abuse against England players last night has absolutely no place on Twitter.” A Facebook spokesperson said: “No one should be subjected to racist abuse anywhere, and we don’t want it to appear on Instagram.”

But those statements bear little relation to users’ experience of the companies’ platforms. On Instagram, where thousands of comments flooded the pages of Marcus Rashford, Bukayo Saka and Jadon Sancho, supportive users who tried to report the abuse to the platform were surprised by the response they received.

“Because we receive a large number of reports, our review team has not been able to review your report,” many users were told that day. “However, our technology has found that this post probably does not violate our community guidelines.” Instead, they were advised to block the users who posted the abuse themselves, or to mute the offending phrases so that they would not see them.

The posts were undeniably racist, blaming the players for the missed penalties in the shootout or filling their comments with monkey and banana emojis, yet the automated moderation decided otherwise, and there was no obvious way to appeal or to bring the posts to the attention of a human reviewer.

Facebook now says those moderation decisions were wrong. Indeed, in training documents sent to Instagram moderators and seen by the Guardian, monkey and banana emojis are explicitly listed as examples of the “dehumanising hate speech” banned on the platform.

“Sending racist emojis, or any type of hate speech, is absolutely not allowed on [the platform],” said the head of Instagram, Adam Mosseri. “Suggesting otherwise is deliberately misleading and sensational.”

Facebook’s definition of hate speech is broad, covering “violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation”.

Twitter’s definition is narrower, prohibiting only hateful conduct that “violently attacks or threatens other people on the basis of race” or other protected characteristics. Users may also be sanctioned for “repeated slurs, tropes or other content that intends to dehumanise, degrade or reinforce negative or harmful stereotypes about a protected category”, the company’s public rules state.

Sunder Katwala, director of the thinktank British Future, has pointed out that some clearly racist messages fall outside those guidelines. Katwala was told, for example, that “There are no blacks in the England team - keep our team white” and “Marcus Rashford is not British - blacks cannot be British” were allowed on Twitter.

“What you can’t do on Twitter is ‘dehumanise’ a group - for example, by faith or race,” Katwala added. “‘Blacks are pests - expel them all’ has been against the rules since December 2020, after 18 months of pressure for that to apply to race as well as belief.”

Some newer social networks are seeking to avoid the pitfalls of their older competitors. TikTok, for example, has a “hateful behaviour” policy that flatly prohibits “all slurs, unless the terms are reappropriated… or not disparaging”, and bans “hateful ideology” outright. The video-sharing platform has long taken the position that its American counterparts overemphasise the abstract ideal of free speech at the expense of building a pleasant community.

Twitter and Instagram did not respond to multiple requests for comment from the Guardian.


