Google hopes artificial intelligence can turn search into conversation


Google often uses its annual I/O developer conference to show off artificial intelligence's wow factor. In 2016, it launched Google Home, its smart speaker powered by Google Assistant. In 2018, Duplex made its debut answering phone calls and booking appointments with businesses. In keeping with that tradition, last month CEO Sundar Pichai introduced LaMDA, an AI "designed to conduct conversations on any topic."

In an onstage demonstration, Pichai showed what it's like to converse with a paper airplane and with Pluto. For each query, LaMDA responded with three or four sentences, much like a natural conversation between two people. Pichai said that over time LaMDA could be integrated into Google products including Assistant, Workspace, and, most significantly, Search.

"We believe LaMDA's natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use," Pichai said.

The LaMDA demo offers a window into Google's vision for search, one that goes beyond a list of links and could change how billions of people search the web. That vision centers on artificial intelligence that can infer meaning from human language, engage in dialogue, and answer questions like an expert.

Also at I/O, Google introduced another AI tool, called the Multitask Unified Model (MUM), which can handle searches that combine text and images. Vice president Prabhakar Raghavan said that one day users could take a picture of a pair of shoes and ask the search engine whether the shoes would be suitable for climbing Mount Fuji.

MUM generates results in 75 languages, which Google claims gives it a more comprehensive understanding of the world. An onstage demo showed how MUM would respond to the search query "I've hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do to prepare?" That phrasing differs from how you might word a Google search today, because MUM is designed to reduce the number of searches needed to find an answer. MUM can both summarize and generate text; it would know to compare Mount Adams with Mount Fuji, and that trip preparation might call for search results about fitness training, hiking-gear recommendations, and weather forecasts.

In a paper titled "Rethinking Search: Making Domain Experts out of Dilettantes," published last month, four engineers at Google Research envisioned search as a conversation with a human expert. One example in the paper considers the query "What are the health benefits and risks of red wine?" Today, Google replies with a list of bullet points. The paper suggests a future response might look more like a paragraph saying that red wine promotes cardiovascular health but stains your teeth, complete with citations and links to sources. The paper shows the response as text, but it's easy to imagine a spoken reply as well, like the experience of using Google Assistant today.

But relying more heavily on artificial intelligence to parse text also carries risks, because computers still struggle to understand language in all its complexity. The most advanced AI for tasks such as generating text or answering questions, known as large language models, has shown a tendency to amplify bias and to generate unpredictable or toxic text. One such model, OpenAI's GPT-3, has been used to create interactive stories for animated characters but has also generated text about sex scenes involving children in online games.

As part of a paper and demo published online last year, researchers from MIT, Intel, and Facebook found that large language models exhibit biases based on stereotypes about race, gender, religion, and occupation.

Rachael Tatman, a linguist with a doctorate in the ethics of natural language processing, said that as the text generated by these models becomes more convincing, it can lead people to believe they are talking to an AI that understands them, when in fact it has no common-sense understanding of the world. That can be a problem when a model generates text that is harmful to people with disabilities or to Muslims, or that tells people to commit suicide. Growing up, Tatman recalled, a librarian taught her how to judge the validity of Google search results. If Google combines large language models with search, she said, users will have to learn how to evaluate conversations with an expert AI.
