Catching bad content in the age of AI

Large language models still struggle with context, which means they probably won’t be able to interpret the nuance of posts and images as well as human moderators. Scalability and specificity across different cultures also raise questions. “Do you deploy one model for any particular type of niche? Do you do it by country? Do you do it by community?… It’s not a one-size-fits-all problem,” says DiResta.

New tools for new tech

Whether generative AI ends up being more harmful or helpful to the online information sphere may, to a large extent, depend on whether tech companies can come up with good, widely adopted tools to tell us whether content is AI-generated or not.

That’s quite a technical challenge, and DiResta tells me that the detection of synthetic media is likely to be a high priority. This includes methods like digital watermarking, which embeds a bit of code that serves as a sort of permanent mark to flag that the attached piece of content was made by artificial intelligence. Automated tools for detecting posts generated or manipulated by AI are appealing because, unlike watermarking, they don’t require the creator of the AI-generated content to proactively label it as such. That said, the current tools that try to do this have not been particularly good at identifying machine-made content.
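
To make the idea concrete, here is a deliberately naive sketch in Python of what “embedding a bit of code” could look like for text: an invisible marker is appended to generated output and checked for later. This is only an illustration of the concept, not how production systems work; real text watermarks (for example, schemes that statistically bias a model’s word choices) are designed to survive copying and editing, which this toy marker would not.

```python
# Toy illustration of the watermarking idea: embed an invisible marker in
# AI-generated text and check for it later. This naive marker is easy to
# strip and is NOT how robust watermarks work -- it only shows the concept.

ZW_MARK = "\u200b\u200c\u200b"  # arbitrary sequence of zero-width characters

def embed_watermark(text: str) -> str:
    """Append an invisible marker flagging the text as AI-generated."""
    return text + ZW_MARK

def is_watermarked(text: str) -> bool:
    """Check whether the invisible marker is present."""
    return ZW_MARK in text

if __name__ == "__main__":
    generated = embed_watermark("This paragraph was written by a model.")
    print(is_watermarked(generated))                          # True
    print(is_watermarked("This one was written by a human.")) # False
```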

Some companies have even proposed cryptographic signatures, which use math to securely log information such as how a piece of content originated. But, like watermarking, this would rely on the creator voluntarily disclosing that the content is AI-generated.
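
As a rough illustration of the cryptographic-signature idea, the sketch below signs a small provenance record describing how a piece of content originated, assuming the creator is willing to attach it (the voluntary-disclosure caveat above). The record fields and helper names are hypothetical, and real provenance standards are considerably more involved; this uses Ed25519 keys from the Python cryptography library.

```python
# Minimal sketch of content-provenance signing, assuming the content creator
# voluntarily attaches a signed record. Field names are illustrative only.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_provenance_record(content: bytes, tool: str) -> bytes:
    """Build a small record describing how the content originated."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": tool,
        "ai_generated": True,
    }
    return json.dumps(record, sort_keys=True).encode()

# The creator signs the record with their private key...
creator_key = Ed25519PrivateKey.generate()
content = b"An image or paragraph produced by a generative model."
record = make_provenance_record(content, tool="example-model-v1")
signature = creator_key.sign(record)

# ...and anyone with the matching public key can check it hasn't been altered.
public_key = creator_key.public_key()
try:
    public_key.verify(signature, record)
    print("Provenance record verified.")
except InvalidSignature:
    print("Record or signature has been tampered with.")
```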

The newest version of the European Union’s AI Act, which was proposed just this week, requires companies that use generative AI to inform users when content is machine-generated. We’re likely to hear much more about these sorts of emerging tools in the coming months as demand for transparency around AI-generated content increases.

What else I’m reading

  • The EU could be on the verge of banning facial recognition in public places, as well as predictive policing algorithms. If it goes through, this ban would be a major achievement for the movement against facial recognition, which has lost momentum in the US in recent months.
  • On Tuesday, Sam Altman, the CEO of OpenAI, will testify before the US Congress as part of a hearing about AI oversight, following a bipartisan dinner the evening before. I’m looking forward to seeing how fluent US lawmakers are in artificial intelligence and whether anything tangible comes out of the meeting, but my expectations aren’t sky-high.
  • Last weekend, Chinese police arrested a man for using ChatGPT to spread fake news. China banned ChatGPT in February as part of a slate of stricter laws around the use of generative AI. This appears to be the first resulting arrest.

What I learned this week

Misinformation is a big problem on social media, but there seems to be a smaller audience for it than you might imagine. Researchers from the Oxford Internet Institute mined over 200,000 Telegram posts and found that although misinformation crops up a lot, most users don’t seem to go on to share it.

In their paper, they conclude that “contrary to popular received wisdom, the audience for misinformation is not a general one, but a small and active community of users.” Telegram is relatively unmoderated, but the research suggests that there may be, to some degree, an organic, demand-driven effect that keeps bad information in check.
