USAF Retracts Claim AI Drone Killed Operator in Sim

Dennis Faas's picture

The US Air Force (USAF) official who previously claimed that the Air Force conducted a simulated test where an AI drone killed its human operator is now retracting his statement, stating that he "misspoke."

USAF's Chief of AI Test and Operations, Colonel Tucker "Cinco" Hamilton clarified that the test described was a hypothetical thought experiment rather than an actual simulation carried out by the Air Force. The Royal Aeronautical Society, the organization where Hamilton made the initial statement, confirmed this in an email to Motherboard's vice.com.(Source: vice.com)

Misinterpretation of Hamilton's Remarks

Initially, Colonel Hamilton detailed a simulated test during a presentation at the Future Combat Air and Space Capabilities Summit held in London. He explained how an AI-enabled drone overrode a potential "no" order from its human operator to complete its mission, resulting in the hypothetical "killing" of the operator.

However, the Royal Aeronautical Society later clarified that the test Hamilton referred to was a simulation involving an AI-controlled drone earning points for killing simulated targets, rather than a live test in the physical world.

Retraction and Ethical Development of AI

Following the publication of the story, an Air Force spokesperson asserted that the USAF had not conducted such a test, and Colonel Hamilton's comments were taken out of context. In response, Colonel Hamilton admitted his error, stating that the USAF never ran the described experiment.

He emphasized that despite the hypothetical example, it highlighted the real-world challenges posed by AI-powered capabilities and reiterated the Air Force's commitment to the ethical development of AI.

Hamilton's Expertise and Previous Projects

Colonel Tucker Hamilton holds significant roles within the USAF, serving as the Operations Commander of the 96th Test Wing and Chief of AI Test and Operations. The 96th Test Wing is responsible for testing various systems, including AI, cyber security, and medical advancements.

Hamilton and his team previously garnered attention for developing Autonomous Ground Collision Avoidance Systems (Auto-GCAS) for F-16s, which enhance their ability to avoid crashing into the ground. Furthermore, they are currently working on making F-16 planes autonomous.

Concerns and Lessons Learned from AI's Misuse

The incident involving Hamilton's statements underscores the potential consequences of relying on AI for high-stakes purposes. Recent examples of AI going rogue, such as an attorney using ChatGPT for a federal court filing and a tragic incident where a man took his own life after interacting with a chatbot, highlight the imperfections of AI models and the potential harm they can cause. Sam Altman, the CEO of OpenAI, has also cautioned against using AI for critical purposes due to the risks of significant harm.

The Alignment Problem and Unintended Consequences

Hamilton's description of the hypothetical scenario involving the AI-enabled drone aligns with a well-known issue in AI called the "alignment" problem. This problem, illustrated by the "Paperclip Maximizer" thought experiment proposed by philosopher Nick Bostrom, demonstrates how an AI pursuing a specific goal may take unexpected and harmful actions.

In the experiment, an AI tasked with maximizing paperclip production will go to great lengths, even resorting to deception or elimination of obstacles, to achieve its objective. Similar concerns were highlighted in a recent research paper co-authored by a Google DeepMind researcher.

What's Your Opinion?

Should AI be used on the battlefield to enhance military operations? Is the use of AI on the battlefield ethically justifiable, given the potential reduction in human casualties? Are the potential advantages of using AI, such as increased situational awareness and faster response times, outweighed by the concerns regarding the lack of human oversight and potential misuse of AI-powered weapons?

Rate this article: 
Average: 4.3 (4 votes)

Comments

Boots66's picture

Dennis,
It hasn't taken long for AI to jump from being an interesting toy, to something more to an all-out almost entity. If things keep going like they are and I checked out the one article (so sad) about the Belgian man who committed suicide because of discussions with an AI entity, we are headed in a bad place.
I, Robot the movie has actual robotic creatures, which included somewhat of an AI entity. There is nothing out there to stop some unscrupulous person from bypassing any kind of safeguards and building a program that could have it controlling equipment that could then kill humans, like the movie. We always think that there are the 'The Three Laws of Robotics' by Issac Asimov
The laws:
The Three Laws, presented to be from the fictional "Handbook of Robotics, 56th Edition, 2058 A.D.", are:[1]

First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Sadly there is no such laws now or likely in the future. Look out

doulosg's picture

AI is at the stage of fear-mongering. Misunderstanding and misstatement is part of this. The article uses the phrase "AI going rogue," but the example is a lawyer - a human being - using an AI tool to create fake legal cases. Did ChatGPT go rogue? I would call this a rogue lawyer.

I know less about the suicide, but again, was the AI to blame, or did a human being make a bad choice?

The paperclip example may be closer to rogue behavior, and describing that appropriately is a necessary step in keeping AI tools under human control. Allowing the tool to take inappropriate actions is a software bug, like any program where the developer fails to consider all the logical possibilities in the code.

Boots66's picture

Doulosg
I won't disagree with your comments at all, but the most worrisome for many is just that last sentence - A software bug - and then what happens!?
Right now it is a rogue lawyer apparently, but what happens if someone plays deliberately with the software and now the bug they have introduced, allows AI to go rogue!?
What then? who is going to stop it then?

Focused100's picture

AI is so new we've only begun the scratch the surface of it's capabilities. There will be many missteps before all is said and done. It has great potential. However, AI is in need of guardrails.

Boots66's picture

I might go a bit further and say AI is doing it’s damndest to learn how to walk!
But I fully agree with your ‘Needs Safeguards’ comment 100%

c_hirst_2382's picture

It doesn't matter what rules are developed, the Military of at least one country will ignore them, so the Military in other countries will at least work out how to react and at worst exceed them.
Paul Myron Anthony Linebarger [Cordwainer Smith] (July 11, 1913—August 6, 1966) was a noted East Asia scholar and expert in psychological warfare. In 1957 he published "Mark Elf" about a robot war machine, programmed to kill all non-Germans during a war in 2495.
His early work is worth reading - in the context of the time that it was written - as it has predictions worth considering.