The Enterprise Realities that SaaSpocalypse Doesn’t Understand
Welcome to a new edition of The Board: Distillation Aftershots (*).
This is the online copy of a weekly newsletter that distills curious and interesting insights and data points from enterprise technology to identify what’s notable.
In this issue, we will try not to distract you from the game for too long: we discuss the latest “agents are coming to take over our work and our lives” announcements from recent days, and dig a little deeper, beyond the panic and mass hysteria that became SaaSpocalypse.
First, my take.
This week’s announcements from Anthropic and OpenAI were framed as breakthroughs in autonomous agents and “AI coworkers.” What was notably absent was a serious treatment of governance grounded in an understanding of the deeply complex enterprise application environments that have evolved over decades of fine-tuning. When agents operate across these complex ecosystems without explicit guardrails, they introduce legal exposure, compliance failures, and operational damage. That is extreme risk creation.
The market reaction (dubbed SaaSpocalypse) reflects a fundamental misunderstanding of how enterprise software creates value (and how enterprises leverage it, including the convenience of having a dedicated resource to ensure it works and remains compliant). The value is not automation of processes, standardized data integration models, customized interfaces, or workflow convenience. It is accumulated nuance: regulatory interpretation, compliance logic, customer-specific customization, exception handling, regression compatibility, legacy integration, and edge cases that only appear at scale – and that is before we even address the issue of a responsible party that is immune to corporate politics. These are not abstractions that an agent can infer by navigating an interface. They are encoded institutional memory, and any agent capable of operating safely at this level must be constrained and governed, essentially making it closer to traditional software than its marketing suggests.
Vibe coding fits this pattern: it is real, but shallow by design. It excels at scaffolding and prototypes, not deeply optimized or mission-critical systems. Public models are becoming the Excel of the AI era: powerful, accessible, and indispensable, but they are not replacements for enterprise platforms. Excel already plays this role; over 85% of organizations rely on it for core processes, even where enterprise systems exist. Excel did not kill ERP or CRM: it filled the gaps they could not economically address. Claude Cowork (and vibe-coded agents) should be viewed the same way: it operates above the stack laid out by enterprise software, but it still needs the underlying stack to exist.
Finally, I am not going to address the stupidity of Moltbot because it is not (supposedly) a liability to the enterprise. Alas, someone will figure out how to make it liable soon... Sigh.
Here are some reading resources:
- First off, I always look to fellow Constellation Research analyst Larry Dignan for fast, accurate, and powerful insights on market news. He covered both Claude Cowork and OpenAI Codex, as well as OpenAI Frontier. If you are not subscribed to his insights newsletter, you are missing out on the best reporting in the industry (seriously, he is that good).
- IBM put together a very well-laid-out explainer of what the launches mean and how they impact enterprise technology. It is not biased, and it is well written – if you want a first-level analysis, give it a whirl.
- Then we move to SaaSpocalypse, where the usual suspects (WSJ, Financial Times, Barron’s, and Reuters) all covered the rout and what it means for the markets, and for the software industry, mid- and long-term. You can read them all or pick your fave (some of these require subscriptions, which is why more than one is listed).
- And now for the fun part, the actual analysis of the coverage without hype: Ars Technica tackles the rumor that Codex built itself (spoiler: it didn’t, no surprise), and TechCrunch looks at the potential for the enterprise. Both are worth reading to further understand the challenges I laid out above.
- Two more things: first, the tale of a Platformer reporter who tried to replace herself with a set of agents (similar to my own tale from four months ago, when I dove deep into Python and came back unamused).
- Finally, a great take that aligns with my statements above, from one of the enterprise technology architects I highly respect (don’t tell him that, he thinks I don’t like him…it’s better for our debates): Anshu Sharma from Skyflow. He was one of the original architects for parts of Salesforce, and trust me – he knows what he is talking about.
What’s your take? We are fostering a community of executives who want to discuss these issues in depth. This newsletter is but a part of it. We welcome your feedback and look forward to engaging in these conversations.
If you are interested in exploring the full report, discussing the Board’s offering further, or have any additional questions, please contact me at [email protected], and I will be happy to connect with you.
(*) A normal distillation process produces byproducts: primary, simple ones called foreshots, and secondary, more complex and nuanced ones called aftershots. This newsletter highlights remnants from the distillation process, the “cutting room floor” elements, and shares insights to complement the monthly report.