The Pentagon is strengthening its artificial intelligence system – by hacking itself


The Pentagon sees artificial intelligence as a way to defeat and dominate future adversaries. But the fragility of AI means that, without due care, the technology could hand an enemy a new way to attack.

The Joint Artificial Intelligence Center, created by the Pentagon to help the US military make use of AI, recently established a unit to collect, vet, and distribute open source and industry machine learning models to organizations across the Department of Defense. Part of that effort points to a key challenge in using AI for military purposes. A machine learning “red team,” known as the Test and Evaluation Group, will probe pretrained models for weaknesses. Another cybersecurity team will examine AI code and data for hidden vulnerabilities.

Machine learning, the technology behind modern artificial intelligence, represents a fundamentally different, and often more powerful, way of writing computer code. Instead of having programmers write rules for a machine to follow, machine learning generates its own rules by learning from data. The problem is that this learning process, along with artifacts or errors in the training data, can cause AI models to behave in strange or unpredictable ways.
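
To make that difference concrete, here is a minimal Python sketch; the fraud-flagging task and every number in it are hypothetical, invented purely for illustration. The traditional version encodes a rule a programmer wrote; the machine learning version derives its own rule from labeled examples.

```python
# A minimal sketch of the difference (the fraud-flagging task and all
# numbers here are hypothetical, not from the article).
from sklearn.linear_model import LogisticRegression

# Traditional software: the programmer writes the rule.
def rule_based_flag(amount):
    return amount > 500  # flag any transaction over $500

# Machine learning: the rule is derived from labeled examples.
amounts = [[120], [480], [530], [900], [60], [750]]  # past transactions ($)
labels = [0, 0, 1, 1, 0, 1]                          # 1 = was fraudulent

model = LogisticRegression().fit(amounts, labels)
print(model.predict([[510]]))  # the learned rule decides, not a programmer
```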

“For some applications, machine learning software is tens of billions of times better than traditional software,” said Gregory Allen, director of strategy and policy at the JAIC. But, he added, machine learning “also breaks in ways that differ from traditional software.”

For example, a machine learning algorithm trained to recognize certain vehicles in satellite images might also learn to associate those vehicles with a certain color in the surrounding landscape. An adversary could then deceive the AI by changing the scenery around its vehicles. With access to the training data, an attacker could also implant images, such as a particular symbol, that confuse the algorithm.
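
The training-data manipulation described above is commonly known as data poisoning. The following is a hypothetical Python sketch of the idea; the image sizes, trigger patch, and poisoning rate are all invented for illustration.

```python
import numpy as np

def poison(image: np.ndarray, patch_value: float = 1.0) -> np.ndarray:
    """Stamp a small bright trigger patch into one corner of an image."""
    poisoned = image.copy()
    poisoned[-4:, -4:] = patch_value  # 4x4 patch in the bottom-right corner
    return poisoned

rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))     # stand-in for a training set
labels = rng.integers(0, 2, size=100)  # stand-in class labels

# Poison 5% of the set: add the trigger and force the attacker's label.
idx = rng.choice(100, size=5, replace=False)
for i in idx:
    images[i] = poison(images[i])
    labels[i] = 1  # attacker's chosen class

# A model trained on these images may now fire on anything bearing the patch.
```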

Allen said the Pentagon follows strict rules on the reliability and security of the software it uses. He said that approach can be extended to artificial intelligence and machine learning, and noted that the JAIC is working to update the Department of Defense’s software standards to cover issues around machine learning.

Artificial intelligence is changing the way some companies operate because it can be an efficient and powerful way to automate tasks and processes. Instead of writing an algorithm to predict which products a customer will buy, for example, a company can have an AI algorithm look at thousands or millions of previous sales and build its own model for predicting who will buy what.
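
A rough Python sketch of that pattern follows; the features, labels, and model choice are invented for illustration and are not drawn from any company described here.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: [customer_age, past_purchases, days_since_last_order]
X = [[34, 12, 5], [22, 1, 90], [45, 30, 2], [29, 4, 40], [51, 18, 10]]
y = [1, 0, 1, 0, 1]  # 1 = bought the product within the next month

# The model infers its own decision rule from the sales history...
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# ...and applies it to a new customer nobody wrote an explicit rule for.
print(model.predict_proba([[30, 8, 14]]))
```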

The United States and other militaries see similar advantages and are eager to use artificial intelligence to improve logistics, intelligence gathering, mission planning, and weapons technology. China’s growing technological capability has stirred a sense of urgency within the Pentagon to adopt AI. Allen said the Department of Defense is “advancing in a responsible manner, prioritizing safety and reliability.”

Researchers are developing ever more creative ways to hack, subvert, or break AI systems. In October 2020, researchers in Israel showed how carefully tweaked images can confuse the AI algorithms that let a Tesla interpret the road ahead. This kind of “adversarial attack” involves adjusting the input to a machine learning algorithm to find small changes that cause big mistakes.
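
As a toy illustration of how such an attack works, here is a Python sketch of the classic fast gradient sign method applied to a simple linear classifier. Everything here is illustrative; it is not the method used against Tesla, and all numbers are made up.

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, epsilon=0.1):
    """Nudge input x in the direction that most increases the model's loss."""
    # Linear logit and sigmoid probability for a binary classifier.
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    # Gradient of the cross-entropy loss with respect to the input.
    grad_x = (p - y_true) * w
    # One small step along the gradient's sign: a tiny change to the input
    # that produces an outsized drop in the model's confidence.
    return x + epsilon * np.sign(grad_x)

w = np.array([2.0, -1.0])  # model weights (illustrative)
b = 0.0
x = np.array([0.4, 0.3])   # original input, true class y = 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0)
print(x, "->", x_adv)      # [0.4 0.3] -> [0.3 0.4]
```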

Dawn Song, a professor at the University of California, Berkeley, has conducted similar experiments on Tesla’s sensors and other AI systems. She said attacks on machine learning algorithms are already an issue in areas such as fraud detection, and some companies offer tools to test the AI systems used in finance. “There are naturally attackers who want to evade the system,” she said. “I think we will see more of these kinds of issues.”

A simple example of a machine learning attack involves Tay, Microsoft’s infamously wayward chatbot, which debuted in 2016. Redditors quickly realized they could exploit it to make Tay post hateful messages.
