Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’

Since then, the quest to proliferate larger and larger language models has accelerated, and many of the dangers we warned about, such as outputting hateful text and disinformation en masse, continue to unfold. Just a few days ago, Meta released its “Galactica” LLM, which is purported to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” Only three days later, the public demo was taken down after researchers generated “research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil.”

This race hasn’t stopped at LLMs but has moved on to text-to-image models like OpenAI’s DALL-E and Stability AI’s Stable Diffusion, models that take text as input and output generated images based on that text. The dangers of these models include creating child pornography, perpetuating bias, reinforcing stereotypes, and spreading disinformation en masse, as reported by many researchers and journalists. However, instead of slowing down, companies are removing the few safety features they had in the quest to one-up each other. For instance, OpenAI had restricted the sharing of photorealistic generated faces on social media. But after newly formed startups like Stability AI, which reportedly raised $101 million with a whopping $1 billion valuation, called such safety measures “paternalistic,” OpenAI removed these restrictions.

With EAs founding and funding institutes, companies, think tanks, and research groups in elite universities dedicated to the brand of “AI safety” popularized by OpenAI, we are poised to see more proliferation of harmful models billed as a step toward “beneficial AGI.” And the influence begins early: Effective altruists provide “community building grants” to recruit at major college campuses, with EA chapters developing curricula and teaching classes on AI safety at elite universities like Stanford.

Just last year, Anthropic, which is described as an “AI safety and research company” and was founded by former OpenAI vice presidents of research and safety, raised $704 million, with most of its funding coming from EA billionaires like Tallinn, Moskovitz, and Bankman-Fried. An upcoming workshop on “AI Safety” at NeurIPS, one of the largest and most influential machine learning conferences in the world, is also advertised as being sponsored by the FTX Future Fund, Bankman-Fried’s EA-focused charity whose team resigned two weeks ago. The workshop advertises $100,000 in “best paper awards,” an amount I haven’t seen in any academic discipline.

Research priorities follow the funding, and given the large sums of money being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction promising an “unimaginably great future” around the corner while proliferating products harming marginalized groups in the now.

We can create a technological future that serves us instead. Take, for example, Te Hiku Media, which created language technology to revitalize te reo Māori and built a data license “based on the Māori principle of kaitiakitanga, or guardianship,” so that any data taken from the Māori benefits them first. Contrast this approach with that of organizations like Stability AI, which scrapes artists’ works without their consent or attribution while purporting to build “AI for the people.” We need to liberate our imagination from the one we have been sold thus far: saving us from a hypothetical AGI apocalypse imagined by the privileged few, or the ever elusive techno-utopia promised to us by Silicon Valley elites.
