Step 1: We believe the enemy is a monster who does [terrible act].
Step 2: To counteract this, we must do [terrible act].
Result: we maintain we are the "good guys" because we were "forced" into it by their presumed behavior.
He who fights with imagined monsters should look to it that he himself does not become a monster. And if you gaze long into an abyss, the abyss also gazes into you.
That quote is 140 years old. Is that enough time to heed it?
Nobody was advocating for zero AI in the military - certainly not Anthropic. They were fine with all lawful US military use cases except for two: the mass domestic surveillance of Americans and fully autonomous weapons. Whether you agree or disagree with their particular red lines, that's quite far from them trying to keep their product out of the military.
Nobody is complaining about the government not giving Anthropic a contract. It’s about the unprecedented and outrageous threats to destroy their business if they don’t provide the government with what it demands. There is no national supply risk from Boeing using Claude Code just because Anthropic won’t agree to domestic-surveillance killbots. The government’s behaviour is overwhelmingly malevolent and terrifying.
It’s not shady; the systems are not ready for that kind of task, especially autonomous hunting. It’s smart negotiation. Plus, Sam would have used the Anthropic situation against them, arguing that you can’t designate all the top American AI companies a supply-chain risk. It’s complete idiocy that they would do that anyway.
Ready at what level, though? The subtleties are what matter.
It’s well established that belligerents can use mines, which separate the tactical decision to deploy them for area denial from the split-second lethal "decision" (if one can stretch that definition) to detonate in response to a triggering event.
Dario’s model prohibits using AI to decide between an enemy combatant and an innocent civilian (even if the AI is bad at it, it is better than just detonating regardless); Sam’s model inherits the notion that the "responsible human" is the one who decided to mine that bridge, and the AI can make the kill decision.
How is that fundamentally different from a future war where an officer decides to send a bunch of drones up, but the drones themselves make the lethal enemy/ally/non-combatant engagement choice without any human in the loop? ELI5 why we can’t view these as smarter mines.
It's different because we are talking about a technology that we might lose control over at some point. Those drones in your example might make an entirely different choice than what you anticipated when you let them take off.
Learn to read. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”