Agentic Math
Why the economics of AI break the SaaS playbook
The Intercom Moment
Last week, much of the technology world was talking about an essay by Eoghan McCabe, the founder and returning CEO of Intercom. In the piece, Eoghan describes how Intercom has spent the past several years transforming itself from a traditional SaaS company into an AI-first business built around Fin, its agentic customer support product. The transition was not incremental. Intercom effectively burned the boats. The company shut down roughly $60 million of ARR tied to its legacy support product and pushed customers toward the new AI system instead.
The bet appears to be working. Fin has grown quickly and has become one of the fastest growing agentic products in technology. Intercom has now crossed $400 million in annual recurring revenue, and Fin itself has already become a nine-figure business within a relatively short period of time. From the outside, the story looks like a triumph: a legacy SaaS company recognized the coming shift toward AI, moved decisively, and emerged as one of the leaders in the new category.
But the story raises a question that very few people seem to be asking.
If your close friend told you they were pivoting their 90 percent gross margin software business into a fast-growing consulting business or T-shirt business, would you be cheering quite as loudly?
Because the math of agents is not the same as the math of software.
The Old Math of Software
For the past twenty years, software has operated under a very specific economic model. Beginning in the early 2000s, and accelerating dramatically once Amazon Web Services launched in 2006 and became widely adopted around 2008–2010, infrastructure effectively became infinite. Instead of buying servers, companies rented compute from the cloud.
That shift fundamentally changed the economics of technology businesses. Once a product was built, the marginal cost of serving the next customer was almost zero. The result was a type of company with unusually attractive financial characteristics: gross margins of 80–90 percent, extremely high retention, and revenue that could scale without meaningful increases in cost. A typical SaaS company might spend heavily to build the product once, but every additional customer generated nearly pure gross profit.
That simple economic reality shaped how the entire industry thinks about software. It informed every part of our mental model about technology. Valuing a company based on revenue. Tolerating high customer acquisition costs because lifetime value would more than cover them.
“Software” was a laden term. And what it was laden with was operating profit.
The Illusion of Infinite Compute
Cloud infrastructure also created a powerful illusion: that compute itself was effectively free.
Technically, everyone understood that AWS bills existed. But in practice, cloud infrastructure became just another line item inside cost of goods sold. Engineers spun up servers without thinking about it. Databases scaled automatically. Storage grew infinitely. Over time, the cost of infrastructure generally declined, reinforcing the idea that compute would only get cheaper.
For two decades, the industry internalized a simple belief:
Compute gets cheaper. Margins get higher.
Agentic systems break that assumption.
Not Your Mother’s Gross Margin
Every action an AI agent performs consumes real infrastructure. Tokens are generated and processed. Model inference runs on GPUs. Reasoning loops call models repeatedly. Retrieval systems query vector databases. Tools are invoked. Memory systems are updated.
Unlike traditional software, where serving another user costs almost nothing, agentic products incur meaningful costs every time they do work. This changes the cost curve. But more importantly, it represents a different kind of business entirely.
Agentic companies sitting on top of third-party foundation models incur real costs every time a workflow runs. Compute is not theoretical. It is a bill. A big bill.
OpenAI alone reportedly spent more than $4 billion on compute infrastructure in 2024, largely driven by inference workloads. Analysts estimate that this leaves even the most successful AI companies operating with gross margins closer to 40–60%, far below the 80–90% margins that defined the SaaS era.
This is not an isolated case. Across the industry, inference costs scale directly with usage. As Sequoia has noted in its analysis of the generative AI stack, AI companies collectively spend tens of billions annually on compute infrastructure, meaning the marginal cost of serving customers does not disappear the way it did in traditional SaaS.
A software business with 40% gross margins does not function, in any meaningful sense, like your mother’s software business. The death of SaaS is profound in many ways, and one of them is this: we are trading a nearly perfect economic model for something much more constrained—and still calling it software.
Jevons’ Tokens
The argument against bad agentic gross margins is simple: the cost of compute will collapse. As compute goes to zero, agentic businesses will look more and more like traditional SaaS and we’ll quickly get back to 90% margins and money printing machines.
It is true that the cost of tokens for many models has fallen dramatically over time. OpenAI’s API pricing, for example, shows a wide range of model costs, with smaller models priced significantly below frontier systems. Both OpenAI and Anthropic price smaller, older models at fractions of the cost of frontier models like Opus, Sonnet, or 5.2.
But falling unit prices do not necessarily mean falling total costs. In fact, the opposite appears to be happening.
According to the Menlo Ventures Enterprise AI Report, enterprise spending on foundation model APIs alone reached approximately $12.5 billion in 2025, while the broader AI infrastructure layer grew to roughly $18 billion, nearly doubling from the previous year.
In other words, even as individual token prices decline, total spending continues to rise because companies are consuming dramatically more compute.
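The Jevons dynamic is easy to see with a few lines of arithmetic. Every number below is a hypothetical assumption chosen for illustration, not a figure from any provider:

```python
# Illustrative Jevons-style arithmetic: unit prices fall,
# but consumption grows faster, so total spend still rises.
# All numbers are assumed for the sketch.

price_2023 = 10.0   # $ per million tokens (assumed)
price_2025 = 1.0    # $ per million tokens: a 10x price drop (assumed)

usage_2023 = 50     # million tokens consumed per month (assumed)
usage_2025 = 2000   # 40x more consumption as agents loop and retry (assumed)

spend_2023 = price_2023 * usage_2023
spend_2025 = price_2025 * usage_2025

print(spend_2023)  # 500.0
print(spend_2025)  # 2000.0: monthly spend quadruples despite 10x cheaper tokens
```

The lever that matters is consumption growth outpacing price decline, which is exactly what the Menlo Ventures spending data suggests is happening.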
Finite Compute
Another assumption embedded in SaaS thinking is this: infrastructure supply is unlimited. But AI workloads run on a very different physical substrate.
Training and inference rely heavily on specialized hardware such as the NVIDIA H100 and other AI accelerators, which remain supply-constrained globally. At the same time, the energy requirements for large-scale AI infrastructure are enormous. The International Energy Agency projects that global data center electricity consumption could double by 2030 to roughly 945 terawatt-hours, driven largely by AI workloads.
In the United States alone, electricity demand is already beginning to rise again after years of stagnation. The U.S. Energy Information Administration reports that electricity generation reached a record 4.43 trillion kilowatt-hours in 2025, with consumption expected to continue increasing as data centers and AI infrastructure expand.
The issue is not that electricity is running out. The issue is that the timing, density, and location of power supply do not easily match the explosive demand created by AI clusters.
Infrastructure is a constraint.
Infrastructure has not been a constraint to anyone in the software business for at least 20 years.
Dependent on Data Centers
Nowhere is that constraint more visible than in data centers.
A major report from Lawrence Berkeley National Laboratory estimates that U.S. data centers consumed about 176 terawatt-hours of electricity in 2023, roughly 4.4 percent of total U.S. electricity demand. By 2028, that figure could grow to between 325 and 580 terawatt-hours, representing as much as 12 percent of total national electricity consumption.
Meeting that demand will require enormous investment. McKinsey estimates that the global race to build AI-ready data centers could require as much as $6.7 trillion in capital expenditures by 2030, with the majority directed toward infrastructure capable of supporting AI workloads.
For the first time in decades, software growth is running directly into the limits of physical infrastructure.
New Unit Economics Emerge
A new world of 40% gross margin businesses has very little in common with the businesses we are used to running and calling software. Nowhere is that more obvious than in even a cursory inspection of agentic unit economics.
For decades, venture software companies relied on extremely high gross margins to justify aggressive customer acquisition spending. A typical SaaS company selling a $10,000 annual contract with a five-year customer lifetime might generate $50,000 in revenue and roughly $45,000 in gross profit at 90 percent margin.
But if gross margins fall to 50 percent, that same customer only generates $25,000 in lifetime gross profit. At 40 percent margins, the number drops to $20,000.
Put another way: a traditional SaaS business generating $100M in ARR is roughly equivalent to a $300M agentic business scaling rapidly and dependent on the latest models.
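This arithmetic can be sketched directly. The contract size, lifetime, and margin tiers come from the paragraphs above; the final equivalence line uses an assumed agentic gross margin of roughly 30 percent, which is what makes the $300M comparison work:

```python
# Lifetime gross profit for one customer at different gross margins.
# Contract size and lifetime come from the text above.
acv = 10_000           # $10k annual contract
years = 5              # five-year customer lifetime
revenue = acv * years  # $50,000 lifetime revenue

for margin in (0.90, 0.50, 0.40):
    print(f"{margin:.0%} margin -> ${revenue * margin:,.0f} lifetime gross profit")
# 90% -> $45,000; 50% -> $25,000; 40% -> $20,000

# ARR an agentic business needs to match $100M of 90%-margin SaaS
# in gross-profit terms, at an assumed ~30% agentic margin:
saas_gross_profit = 100e6 * 0.90
print(saas_gross_profit / 0.30)  # roughly $300M of ARR
```

At a 40 percent margin the equivalence point is closer to $225M; either way, it takes several dollars of agentic revenue to produce one dollar of SaaS-grade gross profit.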
CAC Is Not Falling
This might be fine if customer acquisition costs were plummeting. But, unfortunately, they do not appear to be declining to compensate.
AI markets are becoming more competitive by the month. Hundreds of startups are building overlapping products. Buyers are inundated with tools claiming to automate every possible workflow. Enterprise sales cycles remain long and expensive.
This creates an uncomfortable dynamic. Gross margins compress. Customer acquisition costs stay high.
And the classic SaaS benchmark that David Skok popularized - namely that LTV:CAC should be at least 3:1 - becomes almost impossible to rationalize. And all of this is before we have any real understanding of the long-term competitive advantage these agentic businesses do or don’t have. Which means we have precious little idea what retention will look like in the new agentic reality.
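The squeeze is easy to quantify: under the 3:1 benchmark, the maximum CAC a business can justify scales directly with gross margin. A minimal sketch, reusing the $10k-per-year, five-year customer from above and treating LTV as lifetime gross profit:

```python
# Maximum justifiable CAC under the 3:1 LTV:CAC benchmark,
# where LTV is lifetime gross profit (lifetime revenue * gross margin).
lifetime_revenue = 10_000 * 5  # $10k/year for 5 years

def max_cac(margin, ratio=3.0):
    ltv = lifetime_revenue * margin
    return ltv / ratio

print(max_cac(0.90))  # $15,000 allowable CAC at SaaS margins
print(max_cac(0.40))  # ~$6,667 at agentic margins: less than half the budget
```

Same customer, same benchmark, but the acquisition budget is cut by more than half while actual acquisition costs, per the paragraph above, are not falling.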
No More 10x ARR
For years, software companies have been valued primarily using revenue multiples.
The math behind revenue multiples was always precarious but underpinned the essence of SaaS. Namely:
We can acquire customers efficiently
They will stay a long time because switching costs are high
Serving them will be incredibly cheap
In fact, they’ll predictably buy more and more every year
So we have high-margin near limitless growth to look forward to
Look again at those bullets. Which of them is still true?
Cheap Acquisition: False. Acquiring customers is getting more and more expensive as new software companies flood the market.
High Switching Costs: False. Switching costs aren’t high at all. The whole point of AI is you can download a CSV of your data from Salesforce, drop it into Claude, and get BI tools some AI was trying to charge you 20% more to access.
Low Cost of Service: False. Serving customers is now 3x as expensive as it was yesterday.
Growing Net Retention: Unclear. We don’t know. These things are way more expensive than people realize and buyers are going to look long and hard at every dollar once the euphoria dies down.
High-Margin Limitless Growth: No way. The market potential for these businesses is exciting but the underlying economics are significantly worse than we’re used to and poorly understood to boot.
We have no idea how much they’ll buy next year. The world is changing by the day, sometimes by the minute.
In this new world, anyone claiming a revenue multiple is riding a hype wave with no underlying understanding of business economics. Put another way: they’re smoking something.
Agentic Math
For twenty years, software companies benefited from a strange economic anomaly: revenue could grow almost without cost.
Agentic systems bring the laws of economics back into software.
Compute has cost. Energy has limits. Infrastructure is constrained. Margins compress. Acquisition math changes.
None of this means AI businesses cannot become enormous. The market for technology is vast and growing. I’m not betting against humanity’s demand for more.
But. Please understand.
These are different companies than the ones we’ve gotten used to. Our mental models have to update. It’s not software in the way you’ve thought about software for the last two decades. It’s something different.
Welcome to the era of Agentic Math.
Also On My Mind
A few other things on my mind. Let me know what you’d like me to write about.
A tale of two narratives. AI is going to destroy all of us. And AI is also going to save all of us. Or both at the same time.
Not everything needs to be a business. If you can create something in 15 minutes so can anyone else. But maybe you just build software to help your clients get an outcome and everything becomes a service.
I’ve never seen an executive employment market as volatile as the one we’re in right now. What does it mean to be an employee in the future? I’m teaching an Executive Compensation and Negotiation course through Pavilion starting Tuesday. Feels as urgent as it’s ever been.
To the point of the Intercom piece, what are you doing right now to radically transform your organization for the world we live in? Every functional area needs to be rethought from the ground up.
Thanks for reading.
Sam
PS If you liked this, feel free to share it with a friend, post it online, reply to this email and say hi. This newsletter comes out every Sunday. If you didn’t like it, unsubscribe freely. No hard feelings.