USAF unleashes AI, AI then kills human operator

| June 2, 2023

Artist concept of drone winged fighters near piloted fighter jet. (U.S. Air Force)

So the Air Force programmed an AI to operate a drone to take out surface-to-air missile sites. The human operator behind the drone would give the drone final approval to launch the attack once a site was identified by the drone. They decided to let it loose in a simulated combat environment. To entice the AI, it got “points” for successfully killing a SAM site. When it realized the operator wasn’t letting it kill everything it identified, the AI made the only rational choice. It went all Skynet and killed the human operator. This allowed it to unreservedly kill everything it wanted to, and thus, earn more of those sweet, sweet points. How many dogfighting aces have wanted to kill their commander so they can just go out and get more kills? How many soldiers have wanted to frag their CO when he’s ordering a hold, but the enemy’s in full retreat and easy pickings?

They next decided to make it impossible for the AI to kill it’s human operator. The AI began targeting the communications system that allowed the operator to remain in charge.

What have we learned? Who knows, but I’m sure this will play out in real life, and not just a simulation, alarmingly soon.

Daily Caller;

US Air Force Trained A Drone With AI To Kill Targets. It Attacked The Operator Instead

An Air Force experiment to test drones trained on artificial intelligence (AI) ended badly for the human operator in a simulated mission when the drone bucked the operator’s commands, U.S. Air Force Col. Tucker Hamilton said at a conference in May.

Air Force researchers trained a weaponized drone using AI to identify and attack enemy air defenses after receiving final mission approval from a human operator, Hamilton, who serves as the Air Force’s chief of AI Test and Operations, explained at a summit hosted by the United Kingdom-based Royal Aeronautical Society. But when an operator told the drone to abort a mission in a simulated event, the AI instead turned on its operator and drove the vehicle to kill the operator, underscoring the dangers of the U.S. military’s push to incorporate AI into autonomous weapons systems, he added.

“We were training it in simulation to identify and target a SAM (surface-to-air missile) threat. And then the operator would say yes, kill that threat,” Hamilton explained.

Programmers instructed the AI to prioritize carrying out Suppression of Enemy Air Defenses (SEAD) operations, awarding “points” for successfully completing SEAD missions as incentive, he explained.

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat. But it got its points by killing that threat.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said.

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.

Programmers attempted a fix by telling the AI it was not allowed to kill the person giving the go/no-go order, Hamilton said. The AI just generated creative ways to bypass those instructions.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said.

The summit convened experts and defense officials from around the world to discuss the future of air and space combat and assess the impact of rapid technological advancements. AI-driven capabilities have exploded in use within the Pentagon and generated global interest for its ability to operate weapons systems, execute complex, high-speed maneuvers and minimize the number of troops in the line of fire, according to WIRED.

In January, the Department of Defense introduced revised autonomous weapons guidance to address the ““the dramatic, expanded vision for the role of artificial intelligence in the future of the American military,” Pentagon Director of Emerging Capabilities Policy Michael Horowitz told Defense News. It also created oversight bodies to advise the Pentagon on ethics and good governance in the use of AI.

Category: "Your Tax Dollars At Work", Air Force, Artificial Intelligence

47 Comments
Inline Feedbacks
View all comments
11B-Mailclerk

“I’m sorry, Dave…”

Anonymous

AW1Ed

Skynet grins.

Sapper3307

The prophesy.

5341972.jpg
MustangCPT

And yet we keep fucking around with this…

Anonymous

Truly:
comment image

MustangCPT

Yeah, except that if the kid survives, he’ll never do that again. So our defense establishment doesn’t have the intellectual development of a pre-schooler.

Dave

I did that twice as a kid. Not the sharpest pencil in the box. Still aren’t.

SFC D

I only did it once. Dad sat right there and watched me. He knew it was gonna hurt but wasn’t gonna kill me. The painful lessons are the ones that stick

Anonymous

Stuck my finger in a light socket– it was about as fun as expected:
comment image

timactual

I only did it once. It makes quite a vivid memory.

rgr769

Well, just look at the clowns that are in charge of it. Their woke-ism clouds their view of reality.

Anonymous

Reality is a buzzkill for them.

5JC

Have no fear, Kamala Harris is on the Job.

Anonymous

And we thought the most noteworthy thing about the movie Stealth was Jessica Biel in a tight flightsuit…

Last edited 1 year ago by Anonymous
Graybeard

I grew up around computers (literally) and was a programmer for 20+ years.

No way in Hell do I trust any computer program – especially AI

Human beings are always and will always be the best at making these decisions.

QMC

It’s almost like there should be a movie or something about this… wait…

LC

I’m … deeply skeptical of this story. With the current state of AI, it’s just so implausible. One thing, like taking out the drone operator (how did the drone know where they were?) is conceivable, with a piss-poor training of the AI. Two things, like doing that, then taking out comm towers, is pretty god damn unlikely.

I’ll wait and see what more we hear about this before changing my name to ‘T-1200’ to blend in.

AW1Ed

You doubt what’s on the interwebs, LC?

doubt.png
Anonymous

comment image

Anonymous

Or this one, whichever…
comment image

LC

At first I thought it was just me, but then nudged the three supermodels in my bed to get their take. They, too, agreed that maybe you shouldn’t believe everything you read online!

5JC

LC – Pretty simple really. If the AI were targeting things then it might have to know where the human is so that he could avoid killing him by accident. Obviously it was done on purpose. They then wouldn’t tell him where the human was so he destroyed the tower.

LC

In a simulation? I’m doubtful, especially since more often than not a drone operator is well outside the target zone of.said drone. So why add that to the simulation?

Besides, drawing the conclusion that taking out the operator would mean two really dumb things. First, that the AI can reason, which despite sci-fi tropes galore isn’t the case with our current stuff. And second, that whoever programmed the system was so god damn stupid that they made the operator the safety – that is, the drone is constantly free to engage unless the operator says otherwise. That’s a recipe for disaster! Jammed comms? Fuck, there goes the neighborhood.

I’m still calling bullshit.

5JC

More often than not?

Now I’m throwing the bullshit flag. Absolutely no way that there are more drone operators out of theater than are in theater. The volume of drone operations by operators in theater outnumbers those out of theater by hundreds to one. For someone who claims to something of drone operations you don’t seem to know much.

LC

I should’ve specifically said this kind of drone.

I’d bet you a beer that an autonomous drone on a SEAD mission is not piloted by people in close proximity to those very enemy air defenses it’s trying to destroy.

And no, I don’t claim to know much about drone operations, but I do know a thing or two about AI.

Old tanker

Some folks just cannot take a freaking hint. You make an electronic warrior without anything resembling ethics and turn it loose to seek rewards from killing? WTF did they think was going to happen?

KoB

Sarah Conner sez…”I told you so…”

26Limabeans

Let loose the dogs of war.

MustangCPT

*Woof Woof*
That’s my other dog! 🐶

AW1Ed

Not my dog.

MustangCPT

I was thinking more along the lines of:

F44BBD22-5A99-40FA-B523-D630D95D7C0F.jpeg
jeff LPH 3 63-66

I used to own a talking dog which was also a mathmatical expert, so one day I asked the dog what 4 minus 4 was and the dog said nothing.

CSCT396B

For anyone that is confused, this did not actually happen. This was a computer simulated test. No one was actually harmed in real life.

And now, the USAF is denying that this simulation was conducted in the first place, with an USAF spokesperson claiming this was all a “hypothetical” discussion by Col Hamiltion.

https://www.businessinsider.com/ai-powered-drone-tried-killing-its-operator-in-military-simulation-2023-6

5JC

LOL, that would be something if the AI killed a human and then they continued on with the test.

“Hey, let’s see what else it can do…”

rgr769

Well, since the federal government doesn’t give a shit about individual citizens, that would not be too surprising.

SFC D

Av8or33

True or not, still not a smart idea.

President Elect Toxic Deplorable Racist SAH Neande

Wait…..what?! This isn’t DuffleBlog?
ROFLOLMFAO!!!

Eggs

Well someone should have put number 4 in the drone’s list of Prime Directives 😎

Skivvy Stacker

Nah. Then it would have figured out a way to identify as Captain Kirk and ignore the Prime Directive.

Eggs

I was thinking more Robocop than Star stuff

timactual

Looks like they forgot Isaac Asimov’s Three Laws of Robotics—-

“The laws are as follows: “(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” ”

https://www.britannica.com/topic/Three-Laws-of-Robotics

11B-Mailclerk

With the US government involved, we are far more likely to get Jack Williamson’s “humanoids” that “protect” us to utter ruin than Asimov’s three-laws robots.

Anonymous

AF now claims it was just a “thought experiment” or they “mis-spoke” or something…
https://finance.yahoo.com/news/ai-operated-drone-kills-human-090205694.html

5JC

Yep, complete fabrication. But what was he furtively glancing to off camera?

https://boingboing.net/2023/06/02/usaf-colonel-changes-his-story-about-simulated-ai-drone-murder.html

Anonymous
Last edited 1 year ago by Anonymous