The Same Abstraction Every Time
Once you see it, you can't unsee it
Anthropic’s engineering team published a post this week about how they built their Managed Agents infrastructure. It reverberated through the tech sphere the way their posts usually do — set the standard for how to think about agent architecture, quietly destroyed a few dozen startups whose entire pitch deck was “we handle the sandbox,” and left the rest of us nodding along going “yeah, obviously.”
But reading it, what struck me wasn't the technology. It was how familiar it all felt. Not because I’d seen it in an AI context before — because I’d seen it in every context before.
Consider this a companion post to Nothing New Under the Sun. This week I wrote about how Demis Hassabis, Jeff Bezos, and King Solomon all independently arrived at the same insight: human nature is more compressible than we think. Since writing that I can't stop seeing the same pattern everywhere. And the place it's most obvious — almost embarrassingly so — is in computing itself.
Here's the short version of 70 years of computing architecture:
Mainframes. One big centralized brain. Dumb terminals connect to it. The compute, the storage, the state — all in one place. Users submit work and get results back.
Client-server / fat clients. We push compute to the edge. The client does real work now. Brain and hands are distributed. Everyone gets their own little computer and it's wonderful.
Thin clients / web apps. Whoops. Turns out managing a million fat clients is a nightmare. Back to centralized. The browser is basically a dumb terminal again. The server does the thinking.
Cloud / SaaS. Even more centralized. Your “computer” is a container in someone else's data center. You access it through a thin pipe. We’re back to mainframes but we call them “regions.”
Every 15-20 years we swing between centralized and distributed and back again. The surface changes completely — vacuum tubes to transistors to microprocessors to cloud — but the underlying tension never resolves. You always have compute. You always have storage. You always have a network between them. And you’re always trying to figure out where to put the brain relative to the hands.
Sound familiar?
So what's in the post? Their whole architecture is built around three components: a session (an append-only log of everything that happened), a harness (the loop that calls Claude and routes tool calls), and a sandbox (where Claude runs code and edits files). They explicitly decoupled these so each can fail or be swapped independently.
The session log is append-only. The harness is stateless — if it crashes, a new one boots up, reads the log, and resumes. The sandbox is cattle, not a pet — if a container dies, you provision a new one and move on.
If you’ve worked with Kafka or event sourcing or even just a write-ahead log in a database, you’re nodding right now. This is the same pattern. Append-only durable log as the source of truth. Stateless consumers that can restart from any position. Ephemeral compute that gets thrown away when it's done.
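To make the pattern concrete, here's a minimal sketch — not Anthropic's actual code; the class names and event shapes are mine — of an append-only log as the source of truth, with a stateless harness that can crash, be thrown away, and have a fresh instance rebuild its state by replaying the log:

```python
import json

class SessionLog:
    """Append-only: entries are added at the end, never mutated or deleted."""
    def __init__(self):
        self._entries = []

    def append(self, event):
        self._entries.append(json.dumps(event))
        return len(self._entries) - 1  # offset of the new entry

    def read_from(self, offset):
        return [json.loads(e) for e in self._entries[offset:]]

class Harness:
    """Stateless: all durable state lives in the log. A new instance
    recovers its view of the world by replaying from offset 0."""
    def __init__(self, log):
        self.log = log

    def resume(self):
        state = {}
        for event in self.log.read_from(0):
            state[event["key"]] = event["value"]  # replay; never touch the log
        return state

log = SessionLog()
log.append({"key": "task", "value": "refactor auth module"})
log.append({"key": "last_tool", "value": "read_file"})

# "Crash" the harness by discarding it; boot a replacement and resume.
state = Harness(log).resume()
```

The harness object holds nothing that can't be reconstructed, which is exactly what makes it safe to kill — the same property that lets a Kafka consumer restart from a stored offset.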
The Anthropic team even says it explicitly — they compare their design to how operating systems solved the problem of building for “programs as yet unthought of” by virtualizing hardware into abstractions general enough for software that didn't exist yet. The read() syscall doesn't care if it's hitting a 1970s disk pack or an NVMe drive. Their execute(name, input) → string interface doesn't care if the sandbox is a container, a phone, or — and I love this :) — a Pokémon emulator.
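That kind of narrow interface is easy to sketch. The shape execute(name, input) → string comes from the post; the backend classes here are invented for illustration:

```python
from abc import ABC, abstractmethod

class Sandbox(ABC):
    @abstractmethod
    def execute(self, name: str, input: str) -> str:
        """The only thing the harness ever knows about a sandbox."""

class ContainerSandbox(Sandbox):
    def execute(self, name: str, input: str) -> str:
        return f"[container] ran {name} with {input!r}"

class EmulatorSandbox(Sandbox):
    # Could just as well be a phone or a Pokémon emulator -- the
    # caller can't tell the difference, which is the point.
    def execute(self, name: str, input: str) -> str:
        return f"[emulator] ran {name} with {input!r}"

def run_tool(sandbox: Sandbox, name: str, input: str) -> str:
    # The harness depends only on the interface, never the backend.
    return sandbox.execute(name, input)
```

Swapping backends means changing one constructor call, the same way read() papers over fifty years of storage hardware.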
The Bezos connection writes itself. He said focus on what won't change. Anthropic is doing exactly that with agent infrastructure. Sessions will always need to be durable. Sandboxes will always need to be disposable. The harness will always need to evolve as models get smarter. So they built interfaces around those invariants and left everything else swappable.
They even have a concrete example of assumptions going stale. Their earlier model would prematurely wrap up tasks when it sensed the context window running out — they called it “context anxiety.” So they built context resets into the harness. Then the next model just... didn't have that problem. The resets became dead weight. The invariant wasn't “Claude needs context resets.” The invariant was “the harness needs to be replaceable.”
This is the Parmenides thing from the last post. Heraclitus says everything changes. Parmenides says change is illusion — reality is static underneath. The surface of computing architecture changes every decade. The deep structure hasn't moved in 50 years. Durable state separated from ephemeral compute. Stateless processors that can restart. Append-only logs as the canonical record. Cattle, not pets.
Once you internalize “nothing new under the sun” as a mental model, it becomes a cheat code for evaluating new technology. When someone pitches you something, ask: what's the old version of this?
AI agents with sandboxes? That's timesharing.
An append-only session log that stateless harnesses consume? That's a commit log. Kafka. Event sourcing. Write-ahead logs. Pick your decade.
Decoupling the brain from the hands so they can fail independently? That's microservices. Or honestly, that's just good distributed systems design going back to the 70s.
The security model — keeping credentials in a vault so untrusted code in the sandbox can't access them — that's the principle of least privilege. Butler Lampson, 1971.
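The shape of that idea fits in a few lines. This is a toy sketch of least privilege, with names I made up — the untrusted code gets a narrow capability, not the secret itself:

```python
# The secret lives outside the sandbox, alongside the vault proxy.
SECRETS = {"github_token": "ghp_example"}

# Capabilities this particular sandbox was granted.
ALLOWED = {"fetch_repo"}

def vault_call(action, arg):
    """The only door between untrusted code and the credentials."""
    if action not in ALLOWED:
        raise PermissionError(f"sandbox may not {action}")
    token = SECRETS["github_token"]  # used here, never returned
    return f"did {action}({arg}) without exposing the token"

# Untrusted code can exercise the capability it was granted...
result = vault_call("fetch_repo", "some/repo")
# ...but it can't read the secret or invoke anything else.
```

The sandbox can do exactly what it was authorized to do and nothing more — Lampson's principle, restated as an API boundary.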
None of this is a criticism. It's the opposite. The good stuff is almost always a rediscovery of something that worked before, applied to new constraints. The teams that build lasting infrastructure are the ones that recognize they're solving the same problems their predecessors solved, just with different hardware.
The teams that think they're inventing something new tend to learn the old lessons the hard way :)
Computing architecture is compressible. The surface-level variety — mainframes vs cloud vs AI agents — masks a small set of deep invariants that keep getting rediscovered. Durable logs. Stateless consumers. Ephemeral compute. Separated concerns. Cattle not pets.
Hassabis found 10^13 patterns underneath all of human behavior. I wonder how many patterns there are underneath all of computing. My guess? Way fewer than 10^13. Probably closer to a dozen.
The next time someone tells you AI changes everything, ask them what it changes about. If the answer is “how we separate durable state from ephemeral compute” — congratulations, you’ve been doing that since 1965.

