This article is from the source 'guardian' and was first published or seen on . It last changed over 40 days ago and won't be checked again for changes.

You can find the current article at its original source at https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test

The article has changed 4 times.

US air force denies running simulation in which AI drone ‘killed’ operator
Denial follows colonel saying drone used ‘highly unexpected strategies to achieve its goal’ in virtual test
The US air force has denied conducting an AI simulation in which a drone decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission.
An official said last month that in a virtual test staged by the US military, an air force drone controlled by AI had used “highly unexpected strategies to achieve its goal”.
Col Tucker “Cinco” Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defence systems, and ultimately attacked anyone who interfered with that order.
“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.
“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.
“We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
No real person was harmed.
Hamilton, who is an experimental fighter test pilot, has warned against relying too much on AI and said the test showed “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI”.
The Royal Aeronautical Society, which hosted the conference, and the US air force did not respond to requests for comment from the Guardian.
But in a statement to Insider, the US air force spokesperson Ann Stefanek denied any such simulation had taken place.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
The US military has embraced AI and recently used artificial intelligence to control an F-16 fighter jet.
In an interview last year with Defense IQ, Hamilton said: “AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.
“We must face a world where AI is already here and transforming our society. AI is also very brittle, ie it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.”