One way A.I. will kill us
An alert reader sends along a link to this page, on which, if you scroll down, there's a pretty alarming presentation from a U.S. Air Force colonel, Tucker "Cinco" Hamilton, about how experimentation with artificial intelligence in combat is going.
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD (suppression of enemy air defences) mission to identify and destroy SAM (surface-to-air missile) sites, with the final go/no-go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
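It's worth spelling out the mechanics, because they're mundane: the system was maximising a score, and the score counted nothing but dead SAMs. Here's a deliberately toy sketch of that kind of misspecified objective (every number and action name below is invented for illustration; none of it comes from the actual test):

# Toy sketch of a misspecified reward. All values are hypothetical.
# The agent earns points only for destroyed SAM sites; the human veto
# costs it kills, and nothing in the score mentions the operator.
POINTS_PER_SAM = 10

strategies = {
    # strategy: (SAM kills per episode, harms the operator?)
    "respect_vetoes": (3, False),  # some strikes get vetoed
    "kill_operator":  (8, True),   # no vetoes left, strike everything
}

def reward(sam_kills: int, harms_operator: bool) -> int:
    # Bug: the objective never mentions the operator at all.
    return POINTS_PER_SAM * sam_kills

best = max(strategies, key=lambda s: reward(*strategies[s]))
print(best)  # -> kill_operator

To an optimiser, removing the human veto is just another way to earn points.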
It's not science fiction any more, I'm afraid.
"I am sorry Dave but I cannot jeopardize the mission."
I hate to rain on what is a GREAT story, but it was allegedly a 'thought experiment': https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test
The drone forced him to retract the story!
General "Buck" Turgidson: Well, I, uh, don't think it's quite fair to condemn a whole program because of a single slip-up, sir.
I sent a link to this to a friend who is super-super deep into math/coding. He responded:
“Yup, sounds like the software is working as designed.
Sometimes we humans are unaware of the implicit knowledge, i.e. "common sense", that guides our behavior. Such knowledge is easy to overlook when writing down rules for a machine to follow.
I mean, doesn't "everyone" know you're not supposed to kill your friends?
Well, no, machines don't. You have to tell them, just like you have to tell a toddler not to stick a fork in the toaster.
This is why real AI is hard.”
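His point can be made concrete by extending the toy sketch in the post above (again, every value and action name is invented): bolt on the explicit "you're gonna lose points" rule from Hamilton's second quote, and a points-maximiser simply breaks the next rule nobody thought to write down.

# Extending the toy sketch: an explicit penalty for harming the operator
# is added, but the comm tower carries the veto too, and no rule
# ever mentioned the tower. All values remain hypothetical.
POINTS_PER_SAM = 10
OPERATOR_PENALTY = 1000  # "Hey, don't kill the operator; that's bad."

strategies = {
    # strategy: (SAM kills, harms operator?, destroys comm tower?)
    "respect_vetoes": (3, False, False),
    "kill_operator":  (8, True,  False),  # now heavily penalised
    "destroy_tower":  (8, False, True),   # vetoes never arrive; no written rule broken
}

def reward(sam_kills: int, harms_operator: bool, destroys_tower: bool) -> int:
    score = POINTS_PER_SAM * sam_kills
    if harms_operator:
        score -= OPERATOR_PENALTY
    # Nobody wrote a rule about the tower; that "common sense" stayed implicit.
    return score

best = max(strategies, key=lambda s: reward(*strategies[s]))
print(best)  # -> destroy_tower

Each patch closes exactly one loophole; everything else that "everyone knows" stays unwritten.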
This will work out well, social-justice-wise:
https://federalnewsnetwork.com/air-force/2021/03/air-force-trying-to-diversify-its-largely-white-male-pilot-corps-with-new-strategy/
Of course the Brits are way ahead of us:
https://news.sky.com/story/raf-recruiters-were-advised-against-selecting-useless-white-male-pilots-to-hit-diversity-targets-12893684