A Really Bad Idea
A recent article in the National Journal says that it may soon be possible for Remotely Piloted Aircraft (AKA “drones”) to make independent decisions on whether to engage targets. In short, soon it might be possible that they’ll be lethally-armed robots. No humans involved.
Technologically, this may be possible one day soon – at least under certain circumstances. It would indeed have the effect of removing a potential source of mission failure: comm problems. (No comm required means a greatly reduced chance that a comm failure could impact the mission.) It might also lead to fewer members of our military being placed at risk.
Still, IMO this would be a monumentally bad idea. I hope we never opt to go down this particular path. And that goes for both unmanned ground and unmanned sea systems, too.
Why? Read this (it’s not that long). I think you’ll understand why afterwards.
Category: Military issues
Wasn’t this the scenario for some stupid ass Jamie Foxx movie a few years ago?
Dunno, Claymore. But it’s been the basis for many films – off the top of my head, Terminator comes to mind. I’m sure there are others.
An author named Harlan Ellison followed this idea to its logical conclusion more than 40 years ago. See the final link above.
The Terminator series and the Matrix series are two that pop up right away.
It’s not a bad idea; it’s a horrible one. Why do we need to go down this road at all? Just do as they did in that Star Trek episode where a computer decides how a battle takes place and everyone affected simply walks into a disintegration chamber and gets taken out. That way there’s no blood, no fighting, no maiming, no one put in harm’s way, no destruction – just quick and sanitary murder. Kinda like an abortion clinic.
War Games and the WOPR also come to mind.
“Would you like to play a game?”
-Ish
Skynet becomes self-aware at exactly 2:14PM on _________
As the good book says:
“Thou shalt not make a machine in the likeness of a human mind.”
Remember Manion Butler.
I don’t think it would be Matrix- or Terminator-type shit, but it would further sanitize war, and that should not happen. I will quote what R.E. Lee said as his boys repelled waves of Yankees at Fredericksburg: “It is well that war is so terrible, lest we become accustomed to it.”
Well, we have been accustomed to it. As a remedy, I think we should go the opposite way of unmanned drones. Withdraw from Geneva, and not recognize any arms limits. Use landmines, napalm, canisters of sarin and anything else that kills shit indiscriminately. Perhaps if our nation’s leaders were on the hook for directly authorizing the commission of atrocities, we wouldn’t be so cavalier about sending folks off to police someone else’s problems.
I knew they were going to walk down this path years ago (back during SDI), and I asked the question: will they even stop and think about the moral implications of allowing a machine to decide the fate of humans, or will they just walk across that line without considering the implications? I figured the engineers would make that leap without a second thought, but that the operators and our leaders should weigh the moral implications very carefully.
Unfortunately, we are now being misled by immoral cowardly bastards who should be killed by a machine.
I share the same concerns but I don’t see the alternative. There will be missions that require this. When the side that has the smarter drone wins, I want that side to be ours.
It’s worse than people think. It’s not just the Terminator robots we need to fear. It’s the Manning and Snowden sympathizers doing the programming.
If a factory in WWII was building bombs that didn’t work, the defects would usually be discovered during testing. Now we’re in an age when the hardware will “know” when it’s being tested. It will be able to work fine in tests but behave differently in the field.
But we will still need smarter equipment. There will be worse ideas, and they’ll be needed, too.
What RandyB said.
There are some finer points that could be made, but Randy’s logic is complete enough to not bother taking the time.
Just like the movie Maximum Overdrive.
And oh yeah, the Bolo books by Keith Laumer, S.M. Stirling, David Weber, and others are my favorite forays into this nightmare.
This is going to sound corny, but I just saw this same problem on an episode of Castle. The pilot targeted a car carrying suspected militant commanders, but then saw some red circles on the back and aborted the mission. It turned out the intel was wrong and it was a couple of newlyweds. If it had been a robotic drone, they would have been dead.
Take the human factor out of the equation, and war becomes too “clean,” so to speak. Also, the parameters under which a drone launches or doesn’t launch are only as good as the code programmed into it. And there are no “checks and balances” to override any command therein.
Sky-Net has become self-aware….
Given that supposedly the best of the best programmers worked on the Affordable Care Act site, with its sterling record of success recently… what could go wrong?
This idea is about as intelligent as the notion that Dallas can successfully use a sippy cup!
Well……shit…….my name is John Connor….anybody got a place to hide?
CYbernetic Lifeform Operation Nominal = CYLON
DATA
B.O.R.G.
I, Robot
HAL9000 (I’m sorry, Dave, I can’t do that.)
Why, oh why, do these people think that robots will make better (and less expensive) soldiers? Or that the robots will not turn on their masters?
I read that Harlan Ellison story when it was originally published. Scares the puke out of you, does it? Good. It should.
How about Andrew Martin from Bicentennial Man?
http://www.imdb.com/title/tt0182789/?ref_=fn_al_tt_1
@1 – Do you mean Stealth?
HR Giger.