The Blueprints Were Never the Moat
Cal.com closed their source. Discourse refused. AutoProber showed up with duct tape and a CNC. Three responses to the same shift: AI made analysis cheap, and the moat was never the blueprints.
Cal.com announced they're closing their codebase. Their reasoning was direct: AI can now scan open-source code for vulnerabilities faster than any team can harden or patch it. "Being open source is increasingly like giving attackers the blueprints to the vault."
Discourse's response came within hours. Jeff Atwood: "We are open source, we've always been open source, and we will continue to be open source." His argument was equally direct: the blueprints were never the vault.
The same day, someone posted AutoProber to GitHub: a hardware hacker's flying probe automation stack built from a $200 3018 CNC, a USB microscope, a Siglent oscilloscope, and — per the project description — duct tape. You put a circuit board on the plate, tell the agent there's a new target, and it maps the PCB with the microscope, identifies pins, pads, chips, and other interesting features, queues probe targets for you to approve, then probes them and reports back. The circuit board is the blueprint. The agent can read it.
The security argument for closed source has always rested on a particular asymmetry: attackers need more visibility than defenders to exploit something, and hiding the source denies them that visibility.
Discourse's objection cuts through this: the asymmetry doesn't hold for web applications. The JavaScript runs in the browser. The API contracts are visible in every request. The client-side validation logic, the feature behavior, the session flows — delivered on every page load. "Closing the repository may hide some server-side implementation detail, but it does not make the system invisible."
But there's a deeper objection underneath that one.
The asymmetry that closed-source security actually relied on was never visibility. It was difficulty. Attackers with source code still had to read it, understand it, find the subtle bug in the interaction between two modules written three years apart by different people. That was hard. Hard enough to make most systems not worth the effort.
Difficulty was distributed asymmetrically. Determined, well-resourced attackers could afford the expensive analysis. Everyone else couldn't. The gap between "has source code" and "doesn't have source code" existed at the margin: it raised the cost enough to deter casual exploitation, without stopping determined exploitation.
AI made analysis cheap.
Codex scanned 1.2 million commits in a 30-day beta period, surfacing 792 critical and 10,561 high-severity findings. Discourse knows because they ran those same tools on their own codebase — and addressed what they found. The tools are available to both sides.
Discourse's key question: "Who gets to use those tools?" If your code is closed, your security team can scan it — but so can attackers, working from the compiled binary, the API surface, the browser-delivered JavaScript. If your code is open, defenders can scan it too: your contributors, independent researchers, security teams from the companies running your software. You don't guarantee defenders get there first. But you dramatically increase the number of people looking.
The moat was never the blueprints. The moat was difficulty. And AI is draining it from both directions simultaneously.
AutoProber extends this argument to hardware without meaning to.
The circuit board has always been visible to anyone holding it. The security came from the cost of understanding it: a flying probe test required specialized equipment, trained operators, hours of manual work to map a board you'd never seen before. That cost was the moat.
AutoProber removes the cost. Consumer CNC controller. USB microscope. Oscilloscope you can get for a few hundred dollars. The agent maps the board, annotates the pins, queues targets. "Tell the agent that there is a new target on the plate." The hardware analysis that used to require a specialist now requires a 3018 CNC and the patience to assemble it.
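The workflow described above — map the board, queue candidate targets, wait for human approval, then probe — is essentially a human-in-the-loop pipeline. A minimal sketch of that shape, with all names and signatures invented for illustration (this is not AutoProber's actual API):

```python
from dataclasses import dataclass

@dataclass
class ProbeTarget:
    """A candidate point on the board, in plate coordinates (hypothetical schema)."""
    label: str
    x_mm: float
    y_mm: float
    approved: bool = False

def identify_targets(features):
    """Turn features the agent spotted under the microscope into probe candidates."""
    return [ProbeTarget(f["label"], f["x"], f["y"]) for f in features]

def approve(targets, approved_labels):
    """Human-in-the-loop gate: only explicitly approved targets ever get probed."""
    for t in targets:
        t.approved = t.label in approved_labels
    return [t for t in targets if t.approved]

def probe_all(targets, probe_fn):
    """Move the probe head to each approved target and record a scope reading."""
    return {t.label: probe_fn(t.x_mm, t.y_mm) for t in targets}

# Example run with a stubbed probe function standing in for the CNC + scope:
targets = identify_targets([
    {"label": "UART_TX", "x": 12.5, "y": 30.0},
    {"label": "SPI_CLK", "x": 8.0, "y": 22.5},
])
approved = approve(targets, {"UART_TX"})
readings = probe_all(approved, lambda x, y: 3.3)  # stub: report 3.3 V everywhere
```

The structural point is the approval gate in the middle: the agent proposes, the operator disposes, and the expensive part — recognizing pins and pads on an unfamiliar board — is the part the agent does for free.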
The day after Codex found the root exploit in the Samsung TV by reading firmware source, AutoProber ships for the people who don't have firmware source but do have the physical device.
Both tools are available to defenders and attackers equally. Both make previously expensive analysis cheap. Both eliminate the marginal cost that the asymmetry depended on.
Cal.com's decision is internally coherent. If you believe the moat is visibility — that attackers need source code to exploit effectively — then hiding it is rational.
Discourse's counter-position is also coherent. If the moat was always difficulty, and difficulty is now cheap for everyone, then hiding source removes defenders while not stopping attackers.
AutoProber doesn't have an opinion. It just does what its agent is told.
The circuit board never had a closed-source option. The analysis was always possible, just expensive. Now it's cheap.
The question isn't which argument is right in principle. It's which model of security was accurate about where the asymmetry actually lived. Cal.com is betting it was visibility. Discourse is betting it was difficulty.
The hardware evidence is in. The difficulty was the moat. AI drained it from the outside anyway.