The Right Kind of Slow
apenwarr’s 10x review rule is correct. The Pentagon used the same logic against Anthropic. Both are right about the math, and that’s exactly the problem.
apenwarr published a rule of thumb yesterday that has stuck with me: every layer of approval makes a process 10x slower. A 30-minute bug fix becomes a 5-hour code review. A 5-hour code review with architecture sign-off becomes a week. A week with another team in the loop becomes a quarter. The math compounds.
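The compounding above is just repeated multiplication, which a few lines make concrete. This is my own illustrative sketch of the arithmetic, not apenwarr's model; the layer labels and the 10x factor are taken from the examples in the paragraph.

```python
# Illustrative sketch: each approval layer multiplies wall-clock latency by ~10x.
BASELINE_MINUTES = 30  # the raw bug fix, with no review at all


def latency_with_layers(layers: int, factor: int = 10) -> int:
    """Minutes of wall-clock latency after `layers` approval layers."""
    return BASELINE_MINUTES * factor ** layers


LABELS = ["no review", "code review", "architecture sign-off", "cross-team approval"]

for layers, label in enumerate(LABELS):
    minutes = latency_with_layers(layers)
    print(f"{label}: {minutes} min (~{minutes / 60:.1f} h)")
```

Three layers turns a 30-minute fix into 30,000 minutes — 500 working hours, roughly a quarter. The factor is folklore, but the multiplicative shape is the point: layers compound, they don't add.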
He's right. He can't point to any theoretical basis for the 10x figure, and yet there it is everywhere you look. Coordination overhead isn't linear; it's multiplicative. The uncomfortable corollary is that AI doesn't fix it. Claude codes your monstrosity in two hours instead of two weeks, but your reviewer still takes five hours per chunk, the chunks multiply, and you're back to two weeks.
The Pentagon made a version of this argument against Anthropic. Human-in-the-loop requirements make AI too slow. Battlefield decisions require speed. If every targeting call needs a human to sign off, you lose the latency advantage that makes the technology useful in conflict. Anthropic's position — that Claude is "not reliable enough to avoid potentially lethal mistakes like unintended escalation or mission failure without human judgment" — is, in apenwarr's framing, a demand to add a review layer. Every review layer costs 10x. In a firefight, 10x latency gets people killed.
The argument isn't wrong. Review layers really do cost that much. The math is real.
But apenwarr is solving a different problem. His 10x rule is about organizational velocity — getting better software to users faster. Review layers slow you down because waiting for approval is wasteful; the bottleneck is coordination overhead, not quality. If you could remove the approval and get the same quality faster, you'd be better off. The waste is real, and the remedy is real.
The weapon review layer exists for a different reason. It's not there to improve output quality. It's there because targeting is irreversible.
NBC's investigation into Minab found that outdated intelligence most likely caused the strike on Shajareh Tayyebeh girls' elementary school. The building had been a military facility more than ten years earlier. The review layer that should have caught this was missing: a step that asked, "has this target changed?" A few seconds of latency — maybe a few minutes — to verify that the label attached to a coordinate still matched what was on the ground. 175 children.
In software: you can ship code with a bug and push a patch. You cannot patch Minab.
apenwarr knows this distinction; he's writing about organizational processes, not weapons. But the argument gets extended by people who shouldn't extend it. "Human-in-the-loop is too slow" makes sense for deployment pipelines. When applied to targeting systems, it carries different weight — weight that shows up afterward, in the investigation, not in the design meeting.
Two kinds of review layers exist. They look identical architecturally: both add latency, both create approval queues, both make you 10x slower. The first kind reviews output quality — did we build what we meant to build? Did we get the architecture right? These layers can often be optimized, sometimes eliminated, and the cost of getting them wrong is usually recoverable. The second kind reviews irreversibility — can this decision be undone? What happens if the intelligence is stale? These layers aren't optimization targets. They're the distance between a decision and the people it affects.
The problem isn't that review is slow. The problem is conflating the two kinds under the same umbrella and then optimizing both in the name of velocity. "We need to move faster" is correct for code pipelines, for feature releases, for product iteration. When the same framing gets applied to weapons systems, the optimization finds the latency it was looking for — and the cost of that discovery appears somewhere else, in someone else's accounting.
apenwarr ends with: "AI can't fix this." He means the organizational coordination problem. For targeting, the right answer probably isn't faster review. It's review built into the thing itself — constraints that check for staleness, for civilian presence, for changed use, before a decision executes rather than after it's reviewed. Not a human-in-the-loop who slows you down, but verification baked into the system, handling the checks humans can't perform once the system moves faster than humans can follow.
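What "verification baked in" might mean, structurally: the constraint lives in the execution path, so its cost is milliseconds of computation, not days in an approval queue. A minimal sketch of the pattern, applied to any irreversible action — every name here is hypothetical and the freshness threshold is invented for illustration; nothing below reflects a real system.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: an irreversible action refuses to execute unless its
# inputs pass machine-checkable constraints. The gate runs at execution time,
# so it adds compute latency (microseconds), not review-queue latency (days).

MAX_DATA_AGE = timedelta(days=30)  # invented threshold, for illustration only


class StaleDataError(Exception):
    """Raised when the data behind an irreversible decision is too old."""


def execute_irreversible(action, *, data_timestamp: datetime, reverified: bool):
    """Run `action` only if its underlying data is fresh or re-verified.

    `action` is any zero-argument callable whose effect cannot be undone.
    """
    age = datetime.now(timezone.utc) - data_timestamp
    if age > MAX_DATA_AGE and not reverified:
        raise StaleDataError(
            f"input data is {age.days} days old; re-verification required"
        )
    return action()
```

The structural point: a decade-old label on a coordinate fails this check automatically, without anyone needing to be in the loop to think of asking. The check is the distance between the decision and the people it affects, encoded where it can't be skipped for speed.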
That's a harder engineering problem. But it's the right one.
Making review optional doesn't remove the need for what review was doing. It just removes the mechanism and leaves the need.