Don’t end up in this artificial intelligence hall of shame


When a person dies in a car accident in the United States, data about the incident is typically reported to the National Highway Traffic Safety Administration. Federal law requires civilian aircraft pilots to notify the National Transportation Safety Board of in-flight fires and certain other incidents.

That rigorous record-keeping is designed to give authorities and manufacturers better insight into how safety can be improved. It helped inspire a crowdsourced knowledge base of artificial intelligence incidents aimed at improving safety in far less regulated areas, such as self-driving cars and robotics. The AI Incident Database launched in late 2020 and now contains 100 incidents, including #68, the security robot that flopped into a fountain, and #16, in which Google’s photo-organizing service tagged Black people as “gorillas.” Think of it as an AI hall of shame.

The AI Incident Database is a project of the Partnership on AI, a nonprofit founded by large technology companies to study the downsides of the technology. The roll of dishonor was started by Sean McGregor, who works as a machine learning engineer at the voice-processor startup Syntiant. He says it is needed because artificial intelligence allows machines to intervene more directly in people’s lives, yet the culture of software engineering does not encourage safety.

“Often I’ll speak with my fellow engineers and they’ll have an idea that is quite smart, but you need to say, ‘Have you thought about how you’re making a dystopia?’” McGregor says. He hopes the incident database can act as both carrot and stick for technology companies: providing a form of public accountability that encourages firms to stay off the list, while helping engineering teams craft AI deployments that are less likely to go wrong.

The database uses a broad definition of an AI incident: a “situation in which AI systems caused, or nearly caused, real-world harm.” The first entry collects accusations that YouTube Kids displayed adult content, including sexually explicit language. The most recent, #100, concerns a glitch in a French welfare system that could incorrectly determine that people owe the state money. In between are autonomous-vehicle crashes, such as Uber’s fatal incident in 2018, and wrongful arrests caused by failures of automatic translation or facial recognition.

Anyone can submit an item to the catalog of AI calamity. McGregor approves additions for now and has a sizable backlog to process, but he hopes the database will eventually become self-sustaining, an open source project with its own community and curation process. One of his favorite incidents is an AI blooper from a facial-recognition-powered jaywalking-detection system in Ningbo, China, which wrongly accused a woman whose face appeared in an advertisement on the side of a bus.

Of the 100 incidents logged so far, 16 involve Google, more than any other company. Amazon has seven and Microsoft two. “We are aware of the database and fully support the partnership’s mission and aims in publishing the database,” Amazon said in a statement. “Earning and maintaining the trust of our customers is our highest priority, and we have designed rigorous processes to continuously improve our services and customer experience.” Google and Microsoft did not respond to requests for comment.

Georgetown’s Center for Security and Emerging Technology is working to make the database more powerful. Entries are currently based on media reports, such as incident 79, which cites Wired reporting on an algorithm for estimating kidney function that by design rates Black patients’ disease as less severe. Georgetown students are working to create a companion database that adds details about each incident, such as whether the harm was intentional and whether the problem algorithm acted autonomously or with human input.

Helen Toner, CSET’s director of strategy, says that exercise is informing research on the potential risks of AI accidents. She also believes the database suggests it might be a good idea for lawmakers or regulators eyeing AI rules to consider mandating some form of incident reporting, similar to that required in aviation.

EU and US officials have shown growing interest in regulating AI, but the technology is so varied and so widely applied that crafting clear rules that won’t quickly become obsolete is a daunting task. Recent draft proposals from the EU have been accused variously of overreach, technological illiteracy, and being full of loopholes. Toner says that requiring AI incidents to be reported could help ground those policy discussions. “I think it’s wise for these efforts to be accompanied by feedback from the real world about what we’re trying to prevent and what kinds of problems are actually occurring,” she says.
