Notes: Incumbents vs. AI-Natives: Who Wins the Next Software Cycle?

SaaS incumbents versus AI disruptors: Who will win in the next cycle of software?

Summary

  • The AI narrative has swung from favoring SaaS incumbents in 2024 to highlighting nimble AI-native startups in 2025, though incumbents remain unseated for now.
  • GPT-5’s incremental release underscored that AI progress may be more gradual than hyped, cooling expectations of dramatic disruption and leaving the field more open to startups.
  • Startups like Cursor, Replit, and Windsurf are scaling revenue with remarkable efficiency, showing how clean-slate architectures and talent density can outpace incumbents.
  • Organizational inertia, path dependence, and diluted talent density leave incumbents structurally disadvantaged compared to startups with small, elite teams.
  • While AI disruption is most visible in software development tools, broader seat-based SaaS models are already seeing investor rotation, whereas consumer tech, ecommerce, and fintech appear more insulated for now.

Note: This note expands on our previous enterprise software report. A subscriber spotted a discrepancy in the payback period calculations: we had mistakenly multiplied the change in quarterly gross profit by the gross margin, rather than multiplying the change in quarterly revenue by the gross margin. We have since corrected the error. The trends and payback categorizations are largely unchanged, but feel free to check for yourself and share feedback.
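For concreteness, the correction above can be sketched as a small calculation. The function name and the sample figures below are hypothetical, and this assumes the common CAC-payback convention of dividing prior-quarter sales and marketing spend by the annualized incremental gross profit from new revenue:

```python
def payback_period_months(sm_expense_prior_q, delta_q_revenue, gross_margin):
    """CAC payback in months: prior-quarter S&M spend divided by the
    annualized gross profit generated by the incremental revenue."""
    # Correct form: change in quarterly *revenue* times gross margin.
    # (The erratum above: using the change in gross profit here would
    # apply the margin twice.)
    incremental_annual_gross_profit = delta_q_revenue * 4 * gross_margin
    return sm_expense_prior_q / incremental_annual_gross_profit * 12

# Hypothetical example: $10M of S&M spend, $2M of net-new quarterly
# revenue, 75% gross margin -> 20-month payback.
print(payback_period_months(10.0, 2.0, 0.75))
```

With these illustrative inputs, a longer payback period signals deteriorating sales efficiency, which is the trend the report tracks for seat-based SaaS.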

Who will hold the upper hand in the age of AI: the incumbents or the disruptors?

When ChatGPT debuted in late 2022, expectations ran far ahead of reality. By 2024, the prevailing view was that SaaS incumbents — with their data governance, trusted customer relationships, and embedded workflows — were best positioned to benefit from surges in AI capabilities. Yet as we move through 2025, widely described as the Year of AI Application, the narrative has tilted in the opposite direction. AI-native pioneers like Cursor, Replit, and Windsurf are now seen as genuine threats, moving with clean-slate architectures and rapid iteration that incumbents struggle to match. Still, neither the first wave of AI (generative) nor the early innings of the second wave (agentic) have unseated established players. What we’re seeing instead are early signs: deteriorating payback periods and visible capital rotation away from seat-based SaaS, which may or may not prove cyclical. The lesson is clear: even if AI’s near-term impact is more incremental than the hype implied, its long-term potential to reshape value creation remains, and only those willing to reimagine their role in the stack will stay relevant.

GPT-5 and the Shift Toward Smooth AI Adoption

The recent GPT-5 release may mark the turning point in the narrative that AI chiefly benefits incumbents. Before its unveiling, many market participants expected GPT-5 to deliver a dramatic, step-change improvement over GPT-4.1 — echoing the leaps from GPT-3 to 3.5 and from 3.5 to 4. A major breakthrough of that magnitude would likely have reinforced the advantage of incumbents, who already control enterprise distribution, enjoy trusted access to mission-critical data, and have the capital to absorb inference costs and integrate new capabilities at scale. In such a scenario, the winners would have been those able to immediately embed the leap into existing products and customer bases. Instead, GPT-5 turned out to be a more incremental advance. That leaves the field more open to startups, whose nimbleness and clean-slate architectures allow them to compete through rapid iteration and differentiated use cases, rather than being steamrolled by incumbents armed with a generational AI breakthrough.

However, for those paying close attention to Sam Altman’s commentary, such a leap was always unlikely. Altman has made it clear: OpenAI no longer intends to bundle years of research into massive, infrequent model releases that create dramatic performance jumps over their predecessors. Such “bulky” rollouts raise safety concerns — both for the public, who may feel uneasy about abrupt changes, and for alignment researchers, who face greater difficulty understanding and aligning radically new models. Instead, OpenAI has shifted to a more incremental approach, releasing frequent, smaller updates. This strategy allows for a smoother transition — giving both users and safety researchers time to adapt and respond.

Another factor at play is the diminishing return on scaling. Industry leaders — OpenAI, Anthropic, Google, and others — have recognized that simply increasing data, compute, and model parameters no longer yields the easy, stepwise gains witnessed in earlier generations (GPT-1 through GPT-4). The less-publicized GPT-4.5, for example, was trained in 2023 and initially conceived as the true “GPT-5.” Yet, despite its dramatically higher training and inference costs, it showed only limited performance improvements. Under competitive pressure from rivals like Grok-3 and Gemini-2.5-Pro, OpenAI rushed the model into public preview in February 2025, hoping to demonstrate its continued leadership. When it failed to meet lofty expectations, the model was rebranded as GPT-4.5.

Reception was tepid — many reviewers criticized GPT-4.5 as costly, with minimal user-visible improvements. However, a lesser-known detail is that GPT-4.5 served as a crucial foundation: it was distilled and used to train o3, which achieved notable performance gains across several benchmarks. Ultimately, the distilled base model was released as GPT-4.1, while the enhanced reasoning variant became o3.

Taken together, these developments have cooled the market’s appetite for dramatic, disruptive leaps in AI capability. Instead, the consensus is shifting toward a smoother, more incremental trajectory — one that, at least for now, seems to favor the incumbents.
