ChatGPT’s Most Charming Trick Is Also Its Biggest Flaw


Like many other people over the past week, Bindu Reddy recently fell under the spell of ChatGPT, a free chatbot that can answer all manner of questions with stunning and unprecedented eloquence.

Reddy, CEO of Abacus.AI, which develops tools for coders who use artificial intelligence, was charmed by ChatGPT’s ability to answer requests for definitions of love or creative new cocktail recipes. Her company is already exploring how to use ChatGPT to help write technical documents. “We have tested it, and it works great,” she says.

ChatGPT, created by startup OpenAI, has become the darling of the internet since its release last week. Early users have enthusiastically posted screenshots of their experiments, marveling at its ability to generate short essays on just about any theme, craft literary parodies, answer complex coding questions, and much more. It has prompted predictions that the service will make conventional search engines and homework assignments obsolete.

Yet the AI at the core of ChatGPT is not, in fact, very new. It is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web. That model, which is available as a commercial API for programmers, has already shown that it can answer questions and generate text very well some of the time. But getting the service to respond in a particular way required crafting the right prompt to feed into the software.
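To make that "prompt crafting" concrete, here is a minimal sketch of the kind of few-shot prompt developers assembled for completion-style models like GPT-3. The helper function, the instruction wording, and the example questions are all illustrative assumptions, not OpenAI's documented practice; the point is only that steering the model meant building the right block of text by hand.

```python
def build_qa_prompt(question, examples):
    """Assemble a few-shot prompt: an instruction, worked Q/A pairs,
    then the new question, nudging the model to answer in kind.
    (Hypothetical format for illustration.)"""
    parts = ["Answer each question concisely."]
    for q, a in examples:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")  # model is expected to complete after "A:"
    return "\n\n".join(parts)

examples = [
    ("What is the capital of France?", "Paris."),
    ("What is 2 + 2?", "4."),
]
prompt = build_qa_prompt("Who wrote Hamlet?", examples)
print(prompt)
```

The resulting string would then be sent to the model's completion endpoint; ChatGPT's conversational interface removes the need for users to write this scaffolding themselves.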

ChatGPT stands out because it can take a naturally phrased question and answer it using a new variant of GPT-3, called GPT-3.5. This tweak has unlocked a new capacity to respond to all kinds of questions, giving the powerful AI model a compelling new interface just about anyone can use. The fact that OpenAI has thrown open the service for free, and that its glitches can be good fun, also helped fuel the chatbot’s viral debut—similar to how some tools for creating images using AI have proven ideal for meme-making.

OpenAI has not released full details on how it gave its text generation software a naturalistic new interface, but the company shared some information in a blog post. It says the team fed human-written answers to GPT-3.5 as training data, and then used a form of simulated reward and punishment known as reinforcement learning to push the model to provide better answers to example questions.
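The reward-and-punishment idea can be sketched with a toy example. Below, a "policy" chooses between two canned answers, a simulated human rating supplies the reward, and a REINFORCE-style update shifts the policy toward the answer humans prefer. Everything here (the answers, the rewards, the learning rate) is invented for illustration; real RLHF fine-tunes a large neural network with far more machinery, such as a learned reward model and the PPO algorithm.

```python
import math
import random

random.seed(0)

answers = ["I don't know.", "Paris is the capital of France."]
human_reward = [0.0, 1.0]   # simulated human ratings: answer 1 is preferred

logits = [0.0, 0.0]         # the "policy": a preference score per answer
lr = 0.5                    # learning rate (arbitrary toy value)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

for _ in range(200):
    probs = softmax(logits)
    # Sample an answer and observe the human's reward for it...
    i = random.choices(range(len(answers)), weights=probs)[0]
    r = human_reward[i]
    # ...then nudge the policy toward higher-reward answers
    # (gradient of the log-probability of the sampled answer, scaled by reward).
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * r * grad

probs = softmax(logits)
```

After a couple hundred simulated rounds of feedback, the policy assigns nearly all of its probability to the human-preferred answer—the same basic pressure, at vastly larger scale, that OpenAI describes applying to GPT-3.5.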

Christopher Potts, a professor at Stanford University, says the method used to help ChatGPT answer questions, which OpenAI has shown off previously, seems like a significant step forward in helping AI handle language in a way that is more relatable. “It’s extremely impressive,” Potts says of the technique, despite the fact that he thinks it may make his job more complicated. “It has got me thinking about what I’m going to do on my courses that require short answers on assignments,” Potts says.

Jacob Andreas, an assistant professor who works on AI and language at MIT, says the system seems likely to widen the pool of people able to tap into AI language tools. “Here’s a thing being presented to you in a familiar interface that causes you to apply a mental model that you are used to applying to other agents—humans—that you interact with,” he says.


