In today’s China, social media companies usually have proprietary lists of sensitive words, built from both government instructions and their own operational decisions. This means whatever filter ERNIE-ViLG employs is likely to differ from the ones used by Tencent-owned WeChat or by Weibo, which is operated by Sina Corporation. Some of these platforms have been systematically tested by the Toronto-based research group Citizen Lab.
Badiucao, a Chinese-Australian political cartoonist (who uses the alias for his artwork to protect his identity), was one of the first users to spot the censorship in ERNIE-ViLG. Many of his artworks directly criticize the Chinese government or its political leaders, so these were some of the first prompts he put into the model. “Of course, I was also intentionally exploring its ecosystem. Because it’s new territory, I’m curious to know whether censorship has caught up with it,” says Badiucao. “But [the result] is quite a shame.”
As an artist, Badiucao doesn’t agree with any form of moderation in these AIs, including the approach taken by DALL-E 2, because he believes he should be the one to decide what’s acceptable in his own art. But still, he cautions that censorship driven by moral concerns should not be confused with censorship for political reasons. “It’s different when an AI judges what it cannot generate based on commonly agreed-upon moral standards and when a government, as a third party, comes in and says you can’t do this because it harms the country or the national government,” he says.
The difficulty of identifying a clear line between censorship and moderation is also a result of differences between cultures and legal regimes, says Giada Pistilli, principal ethicist at Hugging Face. For example, different cultures may interpret the same imagery differently. “When it comes to religious symbols, in France nothing is allowed in public, and that’s their expression of secularism,” says Pistilli. “When you go to the US, secularism means that everything, like every religious symbol, is allowed.” In January, the Chinese government proposed a new regulation banning any AI-generated content that “endangers national security and social stability,” which would cover AIs like ERNIE-ViLG.
What could help in ERNIE-ViLG’s case is for the developer to release a document explaining the moderation decisions, says Pistilli: “Is it censored because it’s the law that’s telling them to do so? Are they doing that because they believe it’s wrong? It always helps to explain our arguments, our choices.”
Despite the built-in censorship, ERNIE-ViLG will still be an important player in the development of large-scale text-to-image AIs. The emergence of AI models trained on language-specific data sets makes up for some of the limitations of English-based mainstream models. It will particularly help users who need an AI that understands the Chinese language and can generate accurate images accordingly.
Just as Chinese social media platforms have thrived in spite of rigorous censorship, ERNIE-ViLG and other Chinese AI models may eventually experience the same: they’re too useful to give up.