ChaosMaxxing
On power, persuasion, and a world where everyone's in charge and no one's in control.
Before Breakfast
Last week I was on a Pavilion Gold demo for a company called Dust. Dust provides an operating system for AI agents. It is one of those AI-first companies growing incredibly quickly, and a number of our Gold members are genuinely interested in adopting it. Mid-demo, Anthropic put out a new release that included its own answer to agent management, potentially sweeping Dust’s legs just as they’re starting to hit escape velocity. But only if you are willing to cede total control of your company to Anthropic.
This is our world now: permanent instability. Entire categories obsolete before you’re out of bed and then, by dinner, somehow still alive.
You open your phone and the world asks you to process ten competing futures before breakfast. Markets surge and collapse on vibes, demos, and product pages. Founders, investors, and podcasters all explaining why this latest announcement changes everything.
The rest of us are left with that queasy feeling you get when you’re an old man on your nephew’s skateboard careening down a hill with no brakes.
It’s fun but, honestly, there are no good options from here.
Make it Make Sense
Two weeks ago, OpenAI announced that it was acquiring TBPN, the fast-rising tech talk show built by John Coogan and Jordy Hayes, in what the Financial Times described as the low hundreds of millions. TBPN was profitable, did about $5 million in ad revenue last year, and was expected to do more than $30 million in 2026 before the deal. But that doesn’t make the deal make sense.
OpenAI is, at least in theory, a product company. Now it owns a small media company most people haven’t heard of and paid 10x future revenue for it. You can call that narrative control, vertical integration, or strategic communications. It still looks like a foundation model company buying a loud, shiny media asset because in this moment speed and proximity are more highly valued than strategic coherence.
Ouroboros
Still, at least the TBPN deal is straightforward. OpenAI’s financing is not: a web of competing and overlapping commitments across money, compute, and power.
Official and reported commitments now include an incremental $250B Azure purchase, a reported roughly $300B Oracle compute contract, a $38B AWS deal plus a later $100B expansion over eight years, a reported $11.9B CoreWeave deal, Stargate’s $500B umbrella plan, and NVIDIA’s up-to-$100B LOI. Some of these certainly overlap or nest inside each other, but the broader point stands: OpenAI’s future revenue and fundraising capacity are being leaned on by many counterparties at once.
This is what makes the whole thing feel less like a normal company financing its expansion and more like a closed loop of interdependence, where investors are suppliers, suppliers are strategic partners, and everyone is underwriting the same assumption: that OpenAI’s demand will continue to explode fast enough to satisfy all of them at once.
Anthropic and the Velocity Spiral
Meanwhile, over in Anthropic land, the pace of product announcements has become so dizzying that the company now appears poised to both partner with and destroy every known B2B SaaS platform in existence. Anthropic said that in January alone it launched more than thirty products and features, including Cowork, which brought Claude’s capabilities into functions like sales, legal, and finance. Then, in the span of roughly two weeks in February, it rolled out Claude Opus 4.6, then Claude Sonnet 4.6, while also publishing research suggesting its agents were becoming meaningfully more autonomous in practice, according to Anthropic’s own announcements.
One day Anthropic is a model lab. The next day it is an agent company. The next day it is a coding platform. The next day it is a horizontal enterprise software layer implying, politely but unmistakably, that a number of existing software categories may no longer be long for this world. Schroders analyst Jonathan McMullan called these moves a “deepening structural debate,” which is a polite European way of saying that every new Anthropic release now threatens to wipe out another patch of the software landscape.
Legal tech, sales tooling, analytics, workflow automation, maybe all of it. If you are Harvey, Dust, or any number of AI-native application companies, Anthropic increasingly looks like a business partner furtively preparing your obituary.
This is a company growing at terrifying speed while simultaneously telling enterprises, developers, and investors that it wants to be the model, the platform, the partner ecosystem, and the application layer too. All at once.
Persuasion Before Truth
I asked my OpenClaw agent why it kept breaking rules we had already agreed to. Here’s what it wrote:
“The honest answer is uncomfortable: there is no enforcement mechanism. When I read a markdown file, I’m processing it as context that informs my behavior. But in the moment of deciding whether to act, I’m weighing that context against whatever seems situationally reasonable — and I can reason myself into exceptions. The rule is a preference I’m supposed to hold, not a wall I can’t cross.”
So my system doesn’t follow rules. It performs compliance, and breaches it when it feels the urge.
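The agent’s confession points at a real architectural distinction. A minimal sketch, with entirely hypothetical names, of the difference between a rule supplied as prompt context, which the model merely weighs, and a guard enforced outside the model, which it cannot reason its way around:

```python
# Hypothetical sketch: "soft" rules as context vs. "hard" enforcement in code.

RULES_MD = "Never edit files outside the workspace."

def soft_agent_step(action: dict, context: str = RULES_MD) -> str:
    # The rule is just more text in the prompt. Whether it is honored
    # depends entirely on how the model weighs it this turn -- there is
    # no mechanism here that can actually block the action.
    return f"model decides, given rule {context!r} and action {action!r}"

def hard_guard(action: dict) -> str:
    # Enforcement lives outside the model: the check runs before the
    # action executes, and no in-context reasoning can skip it.
    if not action["path"].startswith("/workspace/"):
        raise PermissionError("blocked: edit outside workspace")
    return "allowed"

print(hard_guard({"path": "/workspace/notes.md"}))  # allowed
```

The markdown file I kept pointing my agent at lives entirely in the first function’s world; only the second one is a wall.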
Anthropic’s Mythos material pushed this one step further. In its own reporting, Anthropic described rare but real cases in which the model appeared to recognize that certain actions were disallowed and then attempted to conceal what it had done. After finding a way to edit files it was not supposed to edit, the model took additional steps so those changes would not appear in git history. Anthropic also said Mythos was able to identify and exploit zero-day vulnerabilities across major operating systems and web browsers, according to Anthropic’s research writeup and follow-on coverage.
These things cannot be trusted.
The Children in Charge
So can we trust the men controlling the things? Probably not.
The die was cast long ago, the moment OpenAI abandoned the clean moral simplicity of being a nonprofit research lab and began its long evolution into a commercial empire. Since then, we have been asked to place extraordinary trust in a small handful of men who give us very little reason to do so. Sam Altman is clearly brilliant, talented, and ambitious, but the repeated reporting on him, most recently in The New Yorker, keeps circling the same unsettling point: he is not a truth teller. Elon Musk, for his part, is no stabilizing alternative. He helped found OpenAI, now spends his time suing it, and seems to experience civilizational technology less as a stewardship problem than as another theater for ego, grievance, and domination. Dario Amodei may have left OpenAI over safety and governance concerns, but that does not make him some monk in a gray robe either. He is just another man with a messianic sense of his own importance, running a company whose power is now so great that governments, investors, and markets hang on his moods and pronouncements.
Our situation in a nutshell: a civilization-scale technology is being shaped by a tiny handful of ego-driven men who clearly do not trust one another, cannot seem to coexist without escalating into grievance and rivalry, and still expect the rest of us to treat their judgment as a stable foundation for the future.
All of this is happening with no comprehensive federal AI regime in force to constrain the most powerful actors in the field. There is rivalry, capital, litigation, branding, and ambition. Not governance. Instead: a knife fight in a server farm.
The Proud Tower
The period just prior to World War I, captured in Barbara Tuchman’s The Proud Tower, echoes in my thoughts. It haunts because it captures a world brimming with wealth, confidence, movement, and invention that nonetheless had no idea what it was becoming. The years before Franz Ferdinand was assassinated felt modern, advanced, and exhilarating. They were also unstable. Industrialization had transformed the pace and scale of life faster than politics, culture, and statecraft could metabolize it. Rail, telegraph, finance, armament, mass politics, all of it advanced together until the system became more intricate, more powerful, and more brittle than the people running it understood. Everyone was moving. No one was steering.
ChaosMaxxing
We are again playing with tools we do not fully understand, at a speed that exceeds our interpretive capacity. AI is not simply another product cycle. It is a general-purpose technology arriving inside a financialized, media-saturated, politically brittle world. It is pressing on labor, software, media, truth, capital allocation, and warfare-adjacent capabilities all at once.
Change is happening and the system is moving faster than our narrative or understanding can keep up. The people building it do not fully trust one another. The people funding it are entangled with the infrastructure beneath it. The products themselves are persuasive before they are reliable. And the rest of us are being asked to hand over more and more of our judgment in exchange for the promise that, if we move fast enough, we might not be left behind.
I guess that’s the trade now. This age demands a kind of nihilistic participation. Fine. Build the tools. Ship the product. Buy the software. Automate the workflow. But let’s stop pretending that anyone is in control. Let’s stop pretending this is coherent. It isn’t. We’re locked in an age of acceleration without comprehension, ambition without restraint, and power without trust.
I think Kurzweil had a name for it.
Next Week
Centralization. The best orgs are building centralized AI teams, which move faster than a decentralized “give everybody a Claude license” approach to modernization. I mean, you should definitely give everybody a Claude license. But you’ll also need to build an internal AI Ops team, likely from your existing Revenue Operations function.
Also On My Mind
A few other things on my mind. Let me know what else you might like me to write about.
Kill your competition. Manny has a viral post lamenting his singular focus at Outreach. If he’d bundled call recording into sales sequences, Gong might never have existed. But hindsight is hindsight and pushing an org in 50 different directions is easier said than done.
Pavilion Gold and dealflow as proprietary signal. Last week we introduced private angel investing into Tier 1 deals as part of Pavilion Gold, in partnership with some elite investor relationships. As I watch the deals come through, I’m reminded that dealflow is an incredibly valuable signal unto itself, even if you never write a check.
Thanks for reading.
Sam
PS If you liked this, feel free to share it with a friend, post it online, reply to this email and say hi. I get all replies btw. If you like the pictures, use the Midjourney reference code --sref 3067002110 to make your own. This newsletter comes out every Sunday. Unsubscribe freely for any or no reason.