Now that machines can learn, can they unlearn?


Companies of all kinds use machine learning to analyze people's desires, dislikes, or faces. Some researchers are now asking a different question: How can we make machines forget?

A nascent area of computer science called machine unlearning seeks ways to induce selective amnesia in artificial intelligence software. The goal is to remove all traces of a particular person or data point from a machine learning system without affecting its performance.

If made practical, the concept could give people more control over their data and the value derived from it. Although users can already ask some companies to delete their personal data, they are usually in the dark about what algorithms that information helped tune or train. Machine unlearning would let a person withdraw both their data and a company's ability to profit from it.

Although intuitive to anyone who has regretted what they shared online, the notion of artificial amnesia requires some new ideas in computer science. Companies spend millions of dollars training machine learning algorithms to recognize faces or rank social posts, because the algorithms can often solve a problem faster than human coders alone. But once trained, a machine learning system is not easily altered, or even understood. The conventional way to remove the influence of a particular data point is to rebuild the system from the beginning, a potentially costly exercise. "This research aims to find some middle ground," says Aaron Roth, a professor at the University of Pennsylvania who works on machine learning. "Can we remove all influence of someone's data when they ask to delete it, but avoid the full cost of retraining from scratch?"

Work on machine unlearning is motivated in part by growing concern over the ways artificial intelligence can erode privacy. Data regulators around the world have long had the power to force companies to delete ill-gotten information. Citizens of some places, such as the EU and California, even have the right to request that a company delete their data if they have a change of heart about what they disclosed. More recently, regulators in the US and Europe have said that the owners of AI systems must sometimes go a step further: deleting systems that were trained on sensitive data.

Last year, the UK's data regulator warned companies that some machine learning software could be subject to GDPR rights such as data deletion, because an AI system can contain personal data. Security researchers have shown that algorithms can sometimes be forced to leak sensitive data used in their creation. Early this year, the US Federal Trade Commission forced facial recognition startup Paravision to delete a collection of improperly obtained face photos along with the machine learning algorithms trained on them. FTC commissioner Rohit Chopra praised that new enforcement tactic as a way to force a company that breaches data rules to "forfeit the fruits of its deception."

The small field of machine unlearning research is grappling with some of the practical and mathematical questions raised by those regulatory shifts. Researchers have shown that they can make machine learning algorithms forget under certain conditions, but the technique is not yet ready for prime time. "As is common for a young field, there's a gap between what this area aspires to do and what we know how to do now," Roth says.

One promising approach, proposed in 2019 by researchers from the University of Toronto and the University of Wisconsin-Madison, splits the source data for a new machine learning project into multiple pieces. Each piece is processed separately, and the results are then combined into the final machine learning model. If a data point later needs to be forgotten, only a small fraction of the original input data has to be reprocessed. The approach was shown to work on online purchase data and on a collection of more than a million photos.
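A minimal sketch of that sharded idea, assuming scikit-learn-style classifiers: the `ShardedEnsemble` class, the shard count, and the majority-vote aggregation here are illustrative choices, not the researchers' released code.

```python
# Sketch of sharded training and unlearning (hypothetical helper names;
# the real system's details differ).
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedEnsemble:
    """Train one model per disjoint shard of the data; forget a point
    by retraining only the shard that contained it."""

    def __init__(self, X, y, n_shards=4):
        self.X, self.y = X, y
        # Assign example i to shard i % n_shards and remember the mapping.
        self.shards = [np.arange(s, len(X), n_shards) for s in range(n_shards)]
        self.models = [self._fit(idx) for idx in self.shards]

    def _fit(self, idx):
        return LogisticRegression(max_iter=1000).fit(self.X[idx], self.y[idx])

    def predict(self, X):
        # Combine the constituent models by majority vote.
        votes = np.stack([m.predict(X) for m in self.models]).astype(int)
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

    def forget(self, i):
        # Drop training example i and retrain only its shard,
        # rather than the whole ensemble.
        for s, idx in enumerate(self.shards):
            if i in idx:
                self.shards[s] = idx[idx != i]
                self.models[s] = self._fit(self.shards[s])
                return

# Usage on synthetic data:
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=400, random_state=0)
ens = ShardedEnsemble(X, y)
ens.forget(17)  # reprocesses 1 of 4 shards instead of retraining everything
```

The design trade-off is that deletion becomes cheap in proportion to the shard count, at some potential cost in accuracy, since each constituent model sees only part of the data.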

Roth and collaborators from the University of Pennsylvania, Harvard, and Stanford recently demonstrated a flaw in that approach, showing that the unlearning system would break down if deletion requests arrived in a particular sequence, whether by chance or from a malicious actor. They also showed how the problem could be mitigated.

Gautam Kamath, a professor at the University of Waterloo who also works on unlearning, says the problem that project found and fixed is one of many open questions standing between machine unlearning and a mere laboratory curiosity. His own research group has been exploring how much a system's accuracy degrades when it unlearns multiple data points in succession.
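A toy experiment in the spirit of that question, reusing the hypothetical `ShardedEnsemble` sketch above; the deletion counts and synthetic dataset are arbitrary choices, not from Kamath's study.

```python
# Unlearn randomly chosen training points one by one and watch how
# held-out accuracy changes as data is removed (reuses ShardedEnsemble).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ens = ShardedEnsemble(X_tr, y_tr)
rng = np.random.default_rng(0)
# Delete 100 of ~375 training points, few enough that no shard empties.
for step, i in enumerate(rng.permutation(len(X_tr))[:100], start=1):
    ens.forget(int(i))
    if step % 25 == 0:
        acc = (ens.predict(X_te) == y_te).mean()
        print(f"after {step} deletions: test accuracy = {acc:.3f}")
```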

Kamath is also interested in finding ways for a company to prove, or a regulator to check, that a system really has forgotten what it was supposed to unlearn. "It feels like that's a little further down the road, but maybe they'll eventually have auditors for this sort of thing," he says.
