I'm dubious. There's no real evidence here to suggest that it was. This sounds like a good old-fashioned intel failure, which was common in every previous war in the Middle East.
Also, so much for 'no new wars'. I'm sure this one will go better than the last five wars we've started in the Middle East.
If you automate your intelligence, which they supposedly did, it becomes an automation failure as well. If Claude is grinding through their data, looking for hints and connecting the dots to designate the targets, it's definitely going to produce plausible false positives that are hard to verify quickly.
I'm dubious about the degree to which they've actually automated their intel.
If it's anything like how my industry has 'adopted' AI, it means they've got a chatbot somewhere that no one actually uses. And a bunch of press releases.
How would Claude assist in this task? Anyone have any concrete examples?
Nonetheless, I don’t like this…
Probably in the way the US military says it is using Claude:
The AI shortlists targets. Someone approves them, and does so at an unprecedented rate.
Which makes this a defence of releasing indiscriminate killing machines against anyone, anywhere. Ambiguity is impossible.