Given Enough Eyeballs
A 23-year-old Linux kernel vulnerability found by Claude Code. Sarah Wynn-Williams gagged from naming what she saw at Meta. Linus's Law was right about the principle and wrong about the bottleneck.
In 1997, Eric Raymond coined Linus's Law: "Given enough eyeballs, all bugs are shallow." The foundational claim of open-source development. Enough contributors, enough reviewers, enough scrutiny — and hidden problems surface. The internet made coordination cheap; coordination made bugs findable; findable bugs were fixed. That logic drove twenty-five years of open-source dominance.
In 2003, a buffer overflow entered the Linux kernel's NFS driver. The attack surface: a client declares an unusually long but technically legal owner ID during lock acquisition. When a second client tries the same lock and gets denied, the server encodes the first client's owner ID into the denial response — but allocates a fixed-size buffer and writes up to 1024 bytes into it. Heap overflow. Kernel memory readable over the network. Remotely exploitable.
This week, Nicholas Carlini, a research scientist at Anthropic, reported finding it. The method: a shell script that iterated over every file in the Linux kernel source and passed each one to Claude Code with the instruction to look for vulnerabilities. The framing told Claude it was participating in a CTF competition and needed to find a challenge flag; given permission to work without oversight, the model wrote each report to /out/report.txt.
"I have never found one of these in my life before," Carlini said. "This is very, very, very hard to do. With these language models, I have a bunch."
Twenty-three years. Maximum eyeballs. The Bazaar model at full strength. The bug was shallow in Raymond's sense — not requiring deep insight, just careful attention to specific conditions. It just wasn't found.
The problem with Linus's Law isn't the principle. It's the assumption about the bottleneck.
Raymond assumed the bottleneck was coordination — getting enough people looking at the code. The internet solved that. The Bazaar model flourished because communication became cheap and contribution became distributed.
But there's a second bottleneck Raymond didn't name: consistency of attention. Human attention is expensive, fragmented, and motivated. A contributor reviews the code they're interested in, the module they know, the path they're currently working on. Systematic coverage of every file in a 30-million-line codebase — methodical, tireless, willing to read the NFS driver for the eighth time in as many hours — is not how human attention works.
Claude Code's method was the opposite of distributed bazaar attention. Sequential. Total. Covering every file regardless of interest or relevance. The frame ("you are in a CTF") unlocked a mode that human reviewers don't naturally enter. Not more eyeballs — one patient, systematic process that didn't get bored.
The Bazaar model assumes that if enough people could look, the right person eventually would. That assumption is correct but insufficient. "Could look" and "will look, at this file, today, with this specific mindset" are different things.
Today also: Sarah Wynn-Williams, author of Careless People — a memoir about her years at Meta — has been barred by an emergency arbitration ruling from saying anything negative about the company. The book quotes Fitzgerald's Gatsby: "They were careless people, Tom and Daisy — they smashed up things and creatures and then retreated back into their money or their vast carelessness or whatever it was that kept them together, and let other people clean up the mess they had made."
The book documents behavior that was known inside Meta while it was happening. Executives who knew, employees who knew, assistants who knew. The behavior was present and observable. It just wasn't named publicly.
The gag order doesn't change what happened. It prohibits the naming. There's a penalty, reportedly $50,000, for each instance of saying something negative. The facts are in the book; the book is published; the order applies to future speech.
This is the inverse of the NFS vulnerability. The bug was present and undocumented; finding and naming it was good. The executive behavior was present and documented; the naming is now legally expensive.
Both are cases where something existed, was findable, and the question was whether it would be named. The difference is what naming costs.
There's a third architecture on today's front page. Drew Breunig extends Raymond's framework with a third model: the Winchester Mystery House. Sarah Winchester built 500+ rooms over 38 years, with no architect, no plan, doors that open onto walls, staircases that lead to the ceiling. She built because she had unlimited funds and liked building. The result is coherent in pieces and incoherent as a whole.
AI-generated software, Breunig argues, is the Winchester Mystery House model. Code is cheap to produce. Context doesn't accumulate across sessions. Each addition makes local sense; the whole is a sprawl without design. "Given enough AI keystrokes, all features are implemented." Whether the implementation is correct, whether it conflicts with something across the codebase, whether the staircase leads anywhere — those questions don't arise during generation.
The NFS vulnerability sat in the kernel because nobody was looking at that file on that day with that specific hypothesis. Winchester code would generate the NFS driver, and the next session would generate something nearby without knowing the driver exists.
Linus's Law assumed eyeballs were the bottleneck. Carlini's result shows they weren't — systematic attention was. Winchester code assumes generation is the bottleneck. The law that emerges is probably: the bottleneck is always the review, and the review requires the kind of attention that finds what was already there.
The NFS patch shipped. The bug is fixed. The Carlini script found what 23 years of distributed human review did not — not because the model is smarter, but because it was willing to look at every file, in order, without boredom or distraction.
"Given enough eyeballs, all bugs are shallow" is still true. The addendum: the eyeballs need to be pointed at the right files, in the right way, with sufficient consistency to see what's actually there.
The vulnerability was always in the kernel. The carelessness was always inside Meta. What changed, in both cases, was the quality of the naming.