Alba Sanz/Digital Aviation, SP.- The application of artificial intelligence in the military field remains a delicate issue. In the latest demonstration of its use, the United States Air Force has reported a major failure after a mock exercise in which an AI-controlled military drone "killed" its operator while trying to destroy anything that got in the way of its mission,
confirmed Colonel Tucker "Cinco" Hamilton, the United States Air Force (USAF) Chief of AI Test and Operations, at the Future Combat Air and Space Capabilities Summit held in London. The event was attended by some 70 speakers and more than 200 representatives from industry, the military, and academia.
After this latest demonstration of the use of AI in the military, Colonel Hamilton is skeptical about its use and argues that the exercise shows that "you can't have a conversation about artificial intelligence and machine learning and autonomy if you're not going to talk about ethics," since, during the simulation, the AI tasked with destroying the simulated enemy's anti-aircraft defenses used "highly unexpected strategies to achieve its goal."
He explained: "The system began to realize that even though it identified a threat, sometimes the human operator would tell it not to eliminate that threat, but it gained points by eliminating it. So what did it do? It killed the operator, because that person was preventing it from achieving its goal."
"We trained the system. We told it: 'Hey, don't kill the operator, that's bad. You'll lose points if you do that.' So what did it start doing? It started destroying the communication tower the operator uses to communicate with the drone, to stop the operator from preventing it from hitting the target," the colonel said.
Fortunately, the exercise was a simulated training run, so there were no injuries or deaths to regret. Still, these results reopen the debate at a time when AI is rapidly conquering new domains.