The Pentagon moves toward letting AI control weapons


Last August, a dozen military drones and tank-like robots took to the skies and roads 40 miles south of Seattle. Their mission: find terrorists suspected of hiding in several buildings.

So many robots were involved in the operation that no human operator could keep close watch on all of them. They were therefore instructed to find, and eliminate, enemy combatants when necessary.

The mission was only an exercise, organized by the Defense Advanced Research Projects Agency, the Pentagon's blue-sky research division; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with friendly and enemy robots.

The exercise was one of several run last summer to test how artificial intelligence could help expand the use of automation in military systems, including in scenarios too complex and fast-moving for humans to make every critical decision. The demonstrations also reflect a subtle shift in the Pentagon's thinking about autonomous weapons, as machines prove capable of outperforming humans at parsing complex situations or acting at high speed.

General John Murray of the U.S. Army Futures Command told an audience at the U.S. Military Academy last month that swarms of robots will force military planners, policymakers, and society to consider whether a person should make every decision about using lethal force in new autonomous systems. "Is it within a human's ability to pick out which ones have to be engaged and then make 100 individual decisions?" Murray asked. "Is it even necessary to have a human in the loop?" he added.

Other comments from military commanders suggest an interest in giving autonomous weapons systems more agency. At a conference on AI in the Air Force held last week, Michael Kanaan, director of operations for the Air Force Artificial Intelligence Accelerator at MIT and a leading voice on AI within the US military, said that thinking is evolving. AI, he said, should do more of the work of identifying and distinguishing potential targets while humans make the high-level decisions. "I think that's where we're going," Kanaan said.

At the same event, Lieutenant General Clinton Hinote, the Pentagon's deputy chief of staff for strategy, integration, and requirements, said that whether a person can be removed from the loop of a lethal autonomous system is "one of the most interesting debates that is coming, [and] has not been settled yet."

A report this month from the National Security Commission on Artificial Intelligence (NSCAI), an advisory group created by Congress, recommended, among other things, that the United States resist calls for an international ban on the development of autonomous weapons.

Timothy Chung, the Darpa program manager in charge of the swarming project, said last summer's exercises were designed to explore when drone operators should, and should not, make decisions for the autonomous systems. When faced with attacks on several fronts, for example, human control can sometimes get in the way of a mission, because people cannot react quickly enough. "Actually, the systems can do better from not having someone intervene," Chung said.

The drones and wheeled robots, each about the size of a large backpack, were given an overall objective and then used AI algorithms to devise a plan to achieve it. Some of them surrounded buildings while others carried out surveillance sweeps. A few were destroyed by simulated explosives; some identified beacons representing enemy combatants and chose to attack.

The United States and other nations have used autonomy in weapons systems for decades. Some missiles, for instance, can autonomously identify and attack enemies within a given area. But rapid advances in AI algorithms will change how the military uses such systems. Off-the-shelf AI code capable of controlling robots and identifying landmarks and targets, often with high reliability, will make it possible to deploy more systems in a wider range of situations.

But as the drone demonstrations highlight, wider use of artificial intelligence will sometimes make it harder to keep a human in the loop. That could prove problematic, because AI technology can harbor biases or behave unpredictably. A vision algorithm trained to recognize a particular uniform might mistakenly target someone wearing similar clothing. Chung said the swarm project presumes that AI algorithms will improve to the point where they can identify enemies with enough reliability to be trusted.

