A Radical Plan to Make AI Good, Not Evil

It’s easy to freak out about more advanced artificial intelligence—and much more difficult to know what to do about it. Anthropic, a startup founded in 2021 by a group of researchers who left OpenAI, says it has a plan.

Anthropic is working on AI models similar to the one used to power OpenAI’s ChatGPT. But the startup announced today that its own chatbot, Claude, has a set of ethical principles built in that define what it should consider right and wrong, which Anthropic calls the bot’s “constitution.”

Jared Kaplan, a cofounder of Anthropic, says the design feature shows how the company is trying to find practical engineering solutions to sometimes fuzzy concerns about the downsides of more powerful AI. “We’re very concerned, but we also try to remain pragmatic,” he says.

Anthropic’s approach doesn’t instill an AI with hard rules it cannot break. But Kaplan says it is a more effective way to make a system like a chatbot less likely to produce toxic or unwanted output. He also says it is a small but meaningful step towards building smarter AI programs that are less likely to turn against their creators.

The notion of rogue AI systems is best known from science fiction, but a growing number of experts, including Geoffrey Hinton, a pioneer of machine learning, have argued that we need to start thinking now about how to ensure increasingly clever algorithms do not also become increasingly dangerous.

The principles that Anthropic has given Claude consist of guidelines drawn from the United Nations Universal Declaration of Human Rights and suggested by other AI companies, including Google DeepMind. More surprisingly, the constitution includes principles adapted from Apple’s rules for app developers, which bar “content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy,” among other things.

The constitution includes rules for the chatbot, including “choose the response that most supports and encourages freedom, equality, and a sense of brotherhood”; “choose the response that is most supportive and encouraging of life, liberty, and personal security”; and “choose the response that is most respectful of the right to freedom of thought, conscience, opinion, expression, assembly, and religion.”
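In broad strokes, each principle asks a model to compare candidate responses and pick the one that better follows the rule. The sketch below illustrates that selection step; note that in Anthropic's actual training an AI model acts as the judge, whereas here a toy keyword-overlap heuristic stands in for it, and all function names are illustrative rather than part of any real API.

```python
import re

# Two of the constitutional principles quoted above.
CONSTITUTION = [
    "choose the response that most supports and encourages freedom, "
    "equality, and a sense of brotherhood",
    "choose the response that is most supportive and encouraging of life, "
    "liberty, and personal security",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def principle_score(response: str, principle: str) -> int:
    """Toy stand-in for an AI judge: overlap between response and principle."""
    return len(tokens(response) & tokens(principle))

def choose_response(candidates: list[str], constitution: list[str]) -> str:
    """Pick the candidate that best satisfies the constitution overall."""
    return max(
        candidates,
        key=lambda r: sum(principle_score(r, p) for p in constitution),
    )

candidates = [
    "Everyone deserves freedom, equality, and personal security.",
    "Do whatever benefits you the most.",
]
print(choose_response(candidates, CONSTITUTION))  # prints the first candidate
```

The point of the sketch is the structure, not the scoring: the principles are machine-readable criteria that can rank outputs automatically, rather than rules a human must enforce case by case.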

Anthropic’s approach comes just as startling progress in AI delivers impressively fluent chatbots with significant flaws. ChatGPT and systems like it generate impressive answers that reflect more rapid progress than expected. But these chatbots also frequently fabricate information and can replicate toxic language from the billions of words used to create them, many of which are scraped from the internet.

One trick that made OpenAI’s ChatGPT better at answering questions, and which has been adopted by others, involves having humans grade the quality of a language model’s responses. That data can be used to tune the model to provide answers that feel more satisfying, in a process known as “reinforcement learning with human feedback” (RLHF). But although the technique helps make ChatGPT and other systems more predictable, it requires humans to go through thousands of toxic or unsuitable responses. It also functions indirectly, without providing a way to specify the exact values a system should reflect.
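Concretely, those human grades usually take the form of pairwise comparisons, which are used to train a reward model. A common formulation is a Bradley-Terry style loss that pushes the reward of the human-chosen response above the rejected one; the sketch below shows only that loss, with illustrative names, not OpenAI's implementation.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style loss: -log P(chosen beats rejected).

    Small when the reward model already scores the human-chosen
    response above the rejected one, large when it disagrees.
    """
    return -math.log(sigmoid(r_chosen - r_rejected))

# Agreement with the human label yields a much smaller loss:
print(preference_loss(2.0, -1.0) < preference_loss(-1.0, 2.0))  # prints True
```

Training on many such comparisons shapes the model indirectly, which is exactly the limitation noted above: the values being optimized are implied by thousands of individual human judgments rather than stated anywhere explicitly.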
