Updates: Palantir - 2025 the Year of AI Application?
Summary
- While PLTR currently trades at a high multiple (~450x TTM P/E, ~150x NTM P/E), its earnings trajectory suggests significant upside potential.
- If PLTR can sustain strong revenue growth and reduce SBC to ~10% of revenue, its net income margin could expand from 16% to ~60%, narrowing the profitability gap with NVDA.
- Unlike OpenAI and cloud hyperscalers, PLTR built AIP from real-world enterprise use cases, making it more mature and deployment-ready for businesses.
- The return of a Trump administration could accelerate government IT modernization efforts, favoring PLTR over legacy vendors.
- PLTR’s broad data integration and security make it better suited for highly regulated industries compared to OpenAI’s more consumer-focused and developer-centric stack.
In this article, we take a contrarian approach to explore a potential bull thesis for PLTR, despite its lofty valuation and the massive gains from its recent trough—factors that typically signal weaker forward returns. Our goal is to outline our thought process and present a well-reasoned thesis that, while carrying inherent risks, has a solid chance of playing out.
Contrarian thesis for PLTR?
As 2024 drew to a close, our team grappled with how to position ourselves on Palantir Technologies (PLTR) — whether to go long, short, or hold. It was a challenging decision. On one hand, PLTR’s valuation appeared stretched, with metrics like an EV/S ratio exceeding 40x and a P/E ratio north of 40x. On the other hand, the momentum behind the stock showed no signs of fading. After rigorous internal debate, we decided to maintain a long position on PLTR.
Many professional investors, applying their heuristics, would dismiss PLTR as overvalued. Through the traditional lens of hedge fund portfolio managers, who often favor value-oriented strategies and low multiples, PLTR’s metrics can seem unjustifiable. This bias typically steers them toward lower-growth, higher-visibility, and less-volatile opportunities — investments that reliably deliver a solid 15% return to satisfy fund targets but rarely exceed that threshold. However, this approach frequently overlooks transformative outliers like Tesla (TSLA).
We’ve long held the view that PLTR could follow a trajectory similar to TSLA. Many professional investors risk misjudging or underestimating Palantir’s business model and technological edge. Meanwhile, the company’s profound innovation is poised to drive rapid compounding growth — albeit with significant volatility — making it a rare opportunity that defies conventional valuation frameworks.
Source: Koyfin
Interestingly, if you examine the P/GP multiple curves of PLTR and TSLA, you'll notice striking similarities — particularly during hype-driven periods such as 1Q21, when strong company results drove multiples significantly higher. In recent quarters, however, TSLA's growth has stumbled for various reasons, whereas PLTR has successfully rebounded. Despite these different trajectories, the two companies' P/GP multiples tracked closely between mid-2024 and year-end 2024. This synchronization largely stemmed from PLTR's sustained revenue growth acceleration, complemented by an impressive expansion of profit margins. The margin expansion highlights PLTR's exceptional operating leverage — a factor we have consistently emphasized, despite skepticism from investors.
Yet, speculating purely on continued liquidity-driven momentum is akin to gambling: without superior information or analytical insight, you're relying solely on luck. Thus, what exactly constituted the fundamental investment thesis for PLTR, as of December 31, 2024 — beyond mere technical momentum or speculative positioning? Primarily, it hinged upon the continuation (and potential acceleration) of revenue growth and margin expansion.
Rather than decelerating, PLTR's growth reacceleration has gained further momentum. This is evident in the sharp rise in annualized quarter-over-quarter (aQoQ) growth, which surged from 31% to 69%—the highest 4Q aQoQ growth rate since 4Q19. At that time, PLTR’s ARR was just $918m, compared to $3.1bn today. Even in the prior quarter, when aQoQ growth was a more modest 31%, the YoY growth trajectory had already begun to rebound from its trough, signaling no clear reason to expect a slowdown. Robust demand across PLTR’s product suite further reinforced this outlook.
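To make the annualization explicit, here is a minimal sketch of the arithmetic, assuming aQoQ simply compounds the sequential quarterly growth rate over four quarters (our reading of the metric, not a formula disclosed by PLTR):

```python
# Hypothetical check of the annualized QoQ (aQoQ) figures, assuming
# aQoQ = (1 + QoQ)^4 - 1, i.e. the sequential rate compounded over four quarters.
def annualize_qoq(qoq: float) -> float:
    return (1 + qoq) ** 4 - 1

def implied_qoq(aqoq: float) -> float:
    return (1 + aqoq) ** 0.25 - 1

for aqoq in (0.31, 0.69):
    q = implied_qoq(aqoq)
    print(f"aQoQ {aqoq:.0%} implies ~{q:.1%} QoQ; check: {annualize_qoq(q):.0%}")
```

Read this way, the jump from 31% to 69% aQoQ corresponds to sequential growth roughly doubling, from about 7% to about 14% per quarter.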
From a net new ARR perspective, PLTR had historically peaked at around $150m per quarter. However, given the rapid adoption and strong ROI of AIP, surpassing that level seemed increasingly probable. This thesis was confirmed as net new ARR soared to an all-time high of $408m — nearly equivalent to PLTR’s total quarterly revenue in 4Q21 — underscoring the company’s accelerating growth trajectory.
Palantir Financial Metrics by Convequity
AI
Firstly, the momentum behind PLTR's AIP isn't losing steam; rather, it's just beginning to gain traction as more deals progress through lengthy enterprise sales cycles. PLTR's unique positioning — targeting large multinational enterprises and government sectors — results in significantly longer deal cycles: approximately 3x that of SMBs, and twice as long as typical enterprise sales. When PLTR initiated a private preview of AIP in mid-2023, it immediately generated more leads in a short period than during the entire previous year (2022). Notably, this period coincides precisely with PLTR's inflection point in YoY revenue growth. Due to enhanced deployment speeds and improved product documentation, PLTR now converts many of these initial leads into experimental (initial) revenue within roughly one year. However, the full conversion of these customers into scaled, mass-production deployments typically requires at least two to three years. Thus, from a timing perspective, by the end of 2024, we were just beginning to see AIP move into full throttle — driving significant incremental growth for PLTR.
At the core of PLTR’s dramatic success is its ontology, a critical differentiator that we have consistently highlighted on X and in previous PLTR reports as PLTR’s strongest competitive advantage. Ontology enables enterprises to quickly and safely deploy generative AI and LLMs at scale. This advantage became particularly clear starting in November 2022, when the hype around LLMs surged and the enterprise AI stack began rapidly emerging and maturing, with numerous startups developing specialized AI components. PLTR, however, leveraging years of rigorous R&D across multiple layers of the data stack, was uniquely positioned to release a mature, production-ready AI platform well ahead of industry peers. Throughout 2023 and 2024, our ongoing research into various AI startups and stack architectures consistently reinforced our conviction that PLTR leads the market by a wide margin — not only in product maturity but also in enterprise readiness across critical areas such as data integration, governance, visualization, security, and compliance.
Beyond the direct boost from AI products, PLTR is also benefiting strongly from the broader economic shift towards AI-related spending. Even as overall IT budgets face scrutiny and cutbacks due to economic uncertainty and spending rationalization, AI budgets remain resilient, with approximately 50% of AI investments funded by reallocating resources from other IT budget categories. A crucial aspect of this trend is investment in robust underlying data infrastructure — without a strong data stack, production-ready AI applications simply aren't feasible.
Consequently, demand for PLTR’s core Foundry and Gotham platforms has accelerated, bringing them out of their deceleration phase faster than industry peers. Given these dynamics, we see no reason to anticipate a reversal in 2025. In fact, we expect PLTR to have an even stronger year ahead, driven by several key tailwinds:
- Economic recovery and improved IT spending in 2025: Broader economic recovery will enhance overall IT budgets and sentiment, creating additional tailwinds for AI and data stack investments.
- DOGE boost: DOGE initiatives will enable continued growth acceleration while simultaneously reducing reliance on sales and marketing expenditures.
- Shift in AI spending toward ROI-focused applications: As enterprises transition from experimental stages to practical, ROI-driven AI deployments, demand will increasingly flow toward mature, production-ready providers like PLTR.
Moreover, the overall economy and IT spending sentiment are likely to improve further in 2025. President Trump's second term has been met with increased optimism within the business community, with economic policies expected to stimulate significant growth. In an environment characterized by recovering IT budgets and heightened economic optimism, enterprise spending specifically for AI and data stack infrastructure is poised to accelerate even further — positioning PLTR for continued outperformance throughout 2025 and beyond.
DOGE
The core premise of the PLTR bull thesis hinges on sustained AI investment, with no major slowdown or cyclical downturn in AI capex — particularly for NVDA. Even assuming AI spending continues, we are now at a stage where frontier model development has slowed significantly, despite recent releases such as Grok-3 and reasoning models.
As highlighted in our SNOW report, this deceleration in foundational model progress has amplified the need for system-level innovation to enhance model performance. PLTR’s AIP is crucial in this regard, as it augments base models, enabling them to deliver superior results compared to standalone models.
More importantly, this also addresses the growing AI ROI challenge. Even if frontier model advancements continue, they will drive inference serving costs another 10x higher. Entering 2025, the AI ROI problem has become mainstream—something we’ve been questioning since 2023. To solve it, enterprises must:
- Leverage smaller models fine-tuned or integrated with RAG to outperform larger, more expensive models (see the sketch below).
- Utilize best-in-class data integration tools like PLTR’s SDDI, which unifies fragmented data silos into a single semantic layer via PLTR’s ontology.
- Implement robust governance for data access, ensuring both human users and LLM agents can retrieve necessary information while minimizing security risks and privacy concerns.
- Rapidly develop real-world AI use cases that drive automation, enhance decision-making, and generate high ROI—not through heavy engineering and coding, but via intuitive low-code interfaces accessible to business analysts.
These capabilities form PLTR’s competitive edge. The company has spent years developing these foundational technologies, even when immediate returns were unclear. Now, with the industry shifting from AI research to AI deployment, PLTR’s recent growth should persist rather than be a one-off event.
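As a minimal illustration of the retrieve-then-generate pattern behind the first two bullets above, the sketch below uses placeholder helpers rather than PLTR's actual APIs; the embedding function and documents are hypothetical, and in practice the retrieval layer would sit on top of a governed semantic layer such as PLTR's ontology.

```python
# Minimal retrieve-then-generate sketch (hypothetical helpers, not PLTR's actual APIs).
# Idea: a small model plus retrieval over a governed, unified data layer can answer
# questions that would otherwise require a much larger (and pricier) model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding function; in practice a small embedding model would be used."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

documents = [
    "Plant 7 reported a 12% scrap rate in January.",
    "Supplier X lead times increased to 9 weeks.",
    "Q4 maintenance backlog cleared in December.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by cosine similarity to the query and return the top k.
    q = embed(query)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(-scores)[:k]]

context = retrieve("Why did unit costs rise at Plant 7?")
prompt = "Answer using only this context:\n" + "\n".join(context)
# The prompt would then be sent to a small, fine-tuned LLM rather than a frontier model.
print(prompt)
```

The design point is that most of the heavy lifting sits in curating, unifying, and governing the retrieved context, not in the size of the model that consumes it.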
On March 6, 2025, DOGE provided further details on excessive government spending on unused software licenses — an outcome we predicted back in November 2024. We expect further reductions in government software expenditures, with the cuts primarily impacting legacy technology rather than advanced solutions that have yet to reach mass deployment.
So far, DOGE has focused on eliminating waste in software and productivity tool licenses that do not affect government operations or mission-critical legacy systems. However, we believe that, in time, even mission-critical software will face modernization efforts. When that moment arrives, PLTR stands to benefit from a significant tailwind.
What's next?
Our thesis for PLTR has played out far better than we could have imagined, delivering exceptional returns. However, with the stock having already risen more than 10x by the end of 2024, maintaining the same level of bullishness is now a highly contrarian and risky stance. That said, there’s a compelling case that PLTR’s growth could accelerate further, approaching ~70% in the near future. If this is coupled with a sustainably high FCF margin — as indicated by the 55% margin reported in 4Q24 — then the Rule of 40 could surge toward 120, an extraordinary level for a two-decade-old SaaS company with over $3bn in ARR. In that scenario, despite the sharp rally, PLTR could still have substantial upside from here.
Source: Convequity
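For reference, the Rule of 40 arithmetic referenced above is simply growth plus FCF margin; the inputs below are illustrative bull-case assumptions taken from this discussion, not forecasts.

```python
# Rule of 40 = revenue growth rate + FCF margin, both in percentage points.
def rule_of_40(revenue_growth_pct: float, fcf_margin_pct: float) -> float:
    return revenue_growth_pct + fcf_margin_pct

# Bull-case inputs: growth approaching ~70% with an FCF margin near the 55% printed in 4Q24.
print(rule_of_40(70, 55))  # -> 125, i.e. a score pushing toward (and past) 120
```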
Looking ahead, we expect PLTR to sustain growth rates above 50% for an extended period. Given the compounding momentum and strong AI investment tailwinds, further acceleration — potentially toward 70% or higher — remains feasible, even if the exact trajectory is uncertain.
Despite the 55% FCF margin already achieved in 4Q24, Palantir’s significant operating leverage suggests further upside. As we outlined in our earlier research on terminal FCF margins, exceptional companies often defy conservative expectations that cap these margins at 30%-40%. With years of foundational technology investments in place, PLTR requires minimal incremental operating expenses to scale. Its internal dogfooding of AI tools further enhances efficiency while reducing costs. So, while the FCF margin in Q1, Q2, and even Q3 might not be as high as in 4Q24 (a quarter boosted by budget seasonality, which is especially pronounced for PLTR), we expect quarterly FCF margins to make higher highs in the coming years.
As PLTR expands beyond its current few hundred customers, its gross margin — already at 79% GAAP and 83% non-GAAP — could rise further. With thousands of customers and multi-billion-dollar contracts from enterprises and governments, economies of scale and pricing power may push gross margins closer to 90%, reinforcing its ability to sustain high FCF margins over time.
Source: Convequity Analysis
Investors should remain mindful of PLTR's relatively high SBC. Unlike NVDA, which saw its SBC as a percentage of revenue decline following the ChatGPT-driven surge in AI spending, PLTR’s SBC percentage has increased. This rise is largely due to the company’s sharp share price appreciation — outpacing NVDA’s — and the significant value of past equity grants to employees. While this could raise near-term concerns, we expect SBC to gradually decline over time as Palantir scales, matures, and benefits from operating leverage, much like NVDA has as a more established company.
PLTR = NVDA T-2
In many ways, we view PLTR today as similar to NVDA two years ago. At that point, NVDA was beginning to experience early signs of AI-driven growth acceleration. However, investor sentiment was split regarding whether this growth trajectory would be sustainable. The uncertainty arose because NVDA was undergoing a fundamental business transformation — from a cyclical gaming GPU-focused company into an enterprise-focused, data-center-driven AI GPGPU provider. This transformation represented a paradigm shift, unlocking significantly larger revenue potential and more durable long-term growth, notwithstanding ongoing cyclicality in customer purchasing patterns.
Eventually, NVDA's financial performance benefited from a double boost: explosive revenue growth combined with significant margin expansion. Semiconductor businesses are inherently leveraged, as they can sell incremental units without additional intellectual property investments. NVDA's incremental costs primarily involve manufacturing expenses (COGS), handled by vendors such as TSMC for chip fabrication and SK Hynix/Micron for HBM memory. Due to overwhelming customer demand, NVDA successfully raised prices for its GPGPUs, achieving substantial gross margin expansion — peaking at 78%.
Examining operating expenses, NVDA’s R&D expenses scaled roughly proportionally with revenue growth. S&M expenses remained relatively stable, yet sales efficiency improved dramatically due to heightened media attention and customer demand actively outpacing supply. Meanwhile, G&A expenses decreased as a percentage of revenue, reflecting NVDA’s disciplined, operationally effective management — distinct from legacy companies burdened by layers of professional management bureaucracy.
As a result, NVDA delivered massive earnings increases, driven by a combination of robust revenue growth and margin improvements. Ultimately, NVDA’s value proposition became so compelling that even value-oriented investors found its forward P/E ratio attractive, recognizing its combination of relative affordability, financial strength, and exceptional growth prospects.
In parallel, if PLTR continues its current business momentum, it would achieve a Rule of 40 score of approximately 120. Given this metric alone, its current EV/S TTM of 60x appears justified. Typically, hypergrowth companies exhibiting growth rates above 40% command EV/S multiples around 40x, while those exceeding 60% growth often garner even higher valuations. Unlike many hypergrowth peers that have lofty valuations immediately following their IPOs — often with less than $1bn ARR and negative margins — PLTR already operates at a substantial $3bn ARR level with high profitability. Furthermore, PLTR is well-positioned for continued margin expansion in upcoming quarters, provided its growth trajectory remains intact.
Investors should recognize, however, that PLTR’s quarterly results might exhibit volatility due to inherent seasonality and uneven distribution of growth. As long as PLTR maintains consistently positive net new ARR growth YoY at its seasonal peaks, investors need not be overly concerned by intra-quarter fluctuations. Conversely, if net new ARR growth falls below zero, investors should exercise heightened caution.
Source: Convequity
When viewed from a P/E valuation perspective, PLTR currently trades at an exceptionally high multiple — approximately 450x on a TTM basis and about 150x on a NTM forward basis. For professional fund managers, allocating additional capital to PLTR at these elevated multiples may pose significant reputational risk, as the potential downside could outweigh the incremental return.
However, despite the high headline valuation, it's important to note the underlying earnings trajectory. PLTR's forward P/E multiple (150x forward vs. 450x trailing) implies an anticipated threefold increase in earnings over the next year alone. To contextualize this further: for PLTR to align with NVDA's current valuation multiples — approximately 40x trailing P/E and 25x forward P/E — PLTR's earnings would need to increase roughly 11 times from current levels.
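The arithmetic behind these two statements is straightforward; the sketch below holds the share price constant and uses the rounded multiples cited above.

```python
# Back-of-envelope: what the P/E spread implies about expected earnings growth.
trailing_pe, forward_pe = 450, 150
implied_1yr_earnings_growth = trailing_pe / forward_pe            # ~3x over the next year
nvda_trailing_pe = 40
earnings_multiple_to_match_nvda = trailing_pe / nvda_trailing_pe  # ~11x at today's price
print(implied_1yr_earnings_growth, round(earnings_multiple_to_match_nvda, 1))
# -> 3.0 and 11.2, both holding the share price constant
```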
The substantial gap in valuation multiples between PLTR and NVDA partially stems from significant differences in profitability metrics. NVDA's net income margin (LTM) currently stands at 56%, compared to PLTR's more modest 16%. NVDA achieves this superior margin profile partly due to its much lower SBC, which currently represents only around 3.4% of revenue and approximately 8.5% of FCF. In contrast, PLTR's SBC has historically been significantly higher, recently reaching as much as 34% of quarterly revenue and averaging around 20% in prior quarters.
Nevertheless, if we shift our lens to FCF margins, PLTR demonstrates strength. Its latest quarterly (4Q24) FCF margin of approximately 55% closely tracks NVDA's robust FCF performance. Given PLTR's inherently better operating leverage (due to a software-driven business model with minimal incremental costs), it has the potential to surpass NVDA's already impressive FCF margins.
Assuming PLTR can achieve a 70% FCF margin in the future while simultaneously bringing SBC down to approximately 10% of revenue, the resultant net income margin could approach 60%. Under such a scenario, PLTR's net margin would expand approximately 3.75x from its current level of 16%, significantly improving its earnings profile and narrowing the valuation gap relative to NVDA.
Source: Convequity
From a top-line growth perspective, there's a strong argument that PLTR has significantly more runway ahead compared to NVDA. While NVDA is likely to continue benefiting from robust demand — particularly given recent announcements of hyperscalers deploying gigawatt-scale data centers (for example, a single 1.25 GW data center equipped with GB200 NVL72 racks costs around $30 billion) — the law of large numbers and physical constraints suggest that NVDA's potential to double or quadruple revenue again becomes increasingly challenging.
In contrast, PLTR — currently at approximately $3.3bn ARR — has greater headroom to achieve multiples of its current revenue. We see PLTR as a foundational technology platform, analogous in fundamental importance and market positioning to AWS. AWS, for instance, has already surpassed $100bn ARR (though its growth has moderated to double digits following recent IT spending slowdowns). Considering PLTR's current scale, another 2x or even 4x in revenue appears well within reach.
Specifically, if PLTR can sustain around 70% annual revenue growth over the next two years — reaching an ARR of approximately $9.54bn — and simultaneously realize the previously discussed 3.75x improvement in net income margin (from ~16% currently to around 60%), its valuation would significantly normalize. Under these assumptions, PLTR's trailing P/E multiple would fall dramatically from the current ~450x level down to approximately 41.5x — roughly in line with NVDA’s current valuation.
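Pulling the assumptions together, a quick scenario sketch shows how the pieces combine; all inputs are the illustrative assumptions discussed above (70% growth for two years, SBC falling to ~10% of revenue against a ~70% FCF margin), not forecasts.

```python
# Scenario sketch combining the assumptions above (illustrative, not a forecast).
current_arr_bn = 3.3
growth = 0.70
years = 2
future_arr_bn = current_arr_bn * (1 + growth) ** years        # ~9.54bn ARR after two years

current_net_margin, future_net_margin = 0.16, 0.60            # ~60% assumes ~70% FCF margin less ~10% SBC
margin_expansion = future_net_margin / current_net_margin     # ~3.75x

earnings_multiple = (1 + growth) ** years * margin_expansion  # ~10.8x earnings growth
implied_trailing_pe = 450 / earnings_multiple                 # ~41.5x at today's share price
print(round(future_arr_bn, 2), round(margin_expansion, 2), round(implied_trailing_pe, 1))
# -> 9.54, 3.75, 41.5
```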
Of course, this optimistic scenario assumes PLTR continues executing successfully, and we haven't yet addressed potential downside risks and uncertainties. Nonetheless, given PLTR’s market leadership, the depth and breadth of its technology platform, and its ability to deliver meaningful value across diverse sectors globally, we believe it's only a matter of time before PLTR achieves $10bn in ARR, and possibly eventually scales toward $100bn, akin to AWS's current level.
In the interim, investors must recognize the possibility of significant volatility. Similar to Tesla’s historical trajectory, PLTR may encounter periods of crisis, turnarounds, and pronounced market skepticism (FUD: fear, uncertainty, and doubt). These factors can drive sharp fluctuations in share price and valuation multiples, shifting between extremes of optimism and pessimism.
Therefore, investors must carefully manage near-term volatility and risk exposure. Yet, from a long-term perspective, we find the fundamental thesis outlined above difficult to refute — PLTR’s strategic positioning, technology leadership, and long-run growth potential remain highly compelling.
2025 the year of AI application
"After speaking with various tech leaders, we are very close to the cusp of AI finally showing up in productivity numbers." This statement was made by US Treasury Secretary, Scott Bessent, during an interview with Bloomberg on February 21, 2025.
As we examine PLTR's future growth prospects and valuation, it becomes increasingly clear that the long-awaited thesis around widespread AI application could finally materialize in 2025. Two years ago, the primary investment theme in the AI sector revolved around building and expanding infrastructure to support and nurture the broader AI ecosystem. Today, most of this foundational infrastructure is already in place, shifting our focus toward practical, real-world applications.
This shift is critical for two main reasons. Firstly, without compelling end-user applications, further investment in infrastructure becomes difficult to justify, thereby capping the potential for future infrastructure growth. Secondly, we are already witnessing tangible ROI from AI adoption across various sectors. In areas such as software engineering and customer support, numerous real-world cases demonstrate productivity improvements of approximately 30% to 40%. Given these significant productivity gains, it is only logical for enterprises to actively pursue broader AI adoption across additional business processes and use cases.
On the AI model development front, we are seeing rapid maturation, particularly in terms of value delivered per unit of computational power (FLOPS). Technologies such as DeepSeek represent a significant boon for PLTR, despite the notable irony that PLTR's strategic alignment generally positions it against Chinese technology companies. Nevertheless, continued advancements from DeepSeek, Alibaba's Qwen, and other open-source initiatives could dramatically reduce AI model training and inference costs — potentially by more than tenfold. Such substantial cost reductions enable developers focused on application layers (including those building on platforms like PLTR) to deploy AI solutions more extensively and cost-effectively, even in scenarios where the previous ROI justification was less apparent.
As we discussed in our previous analysis of the DeepSeek supply shock, these developments suggest an impending shift in the AI industry value chain — from infrastructure providers toward vendors specializing in end-use applications.
Palantir's Strategic Positioning: AI Platform versus Traditional SaaS
Another compelling aspect to consider is that PLTR is not merely an AI application vendor; rather, it positions itself uniquely as an AI platform provider. PLTR enables enterprises to rapidly build, iterate, and scale their own AI-driven applications internally. This distinction is crucial, particularly amidst growing discussion among prominent AI investors who suggest AI could fundamentally disrupt — or even eliminate — the traditional SaaS model.
The core argument behind this "AI disrupting SaaS" thesis is straightforward: if AI significantly reduces the cost of software engineering — potentially by several orders of magnitude — large enterprises might find it more economically attractive to build and maintain their own customized software applications internally, rather than relying on external SaaS providers. To illustrate, we can look to China, where SaaS adoption remains relatively low among the largest enterprises (often state-owned) due primarily to the affordability of software engineering labor. It is economically rational for these enterprises to employ engineering teams internally to develop and maintain tailored software solutions instead of paying premium subscription fees to external SaaS providers.
Similarly, in the US, if AI-driven productivity gains substantially reduce the overall cost of software development and maintenance, it becomes increasingly feasible, particularly for large enterprises, to build their own customized SaaS solutions internally. Initiatives such as "8090 VC" underscore this emerging trend, suggesting that the market is indeed shifting in this direction.
However, despite recognizing this trend, we maintain a cautious stance regarding its wholesale adoption and viability. Several critical limitations persist when relying solely on AI-generated, quickly assembled SaaS solutions. These solutions typically lack robust security models, struggle to integrate seamlessly across diverse data sources and legacy systems, and often fail to provide mature, enterprise-grade rollout mechanisms and long-tail feature development.
Interestingly, these very shortcomings represent PLTR's strategic advantage. The company's AIP is specifically designed to address these limitations. Palantir’s AIP provides enterprises the capabilities necessary to rapidly and securely adopt internally-built, AI-powered SaaS applications. By ensuring robust security models, seamless integration across diverse data silos, and sophisticated feature management and rollout capabilities, PLTR is uniquely positioned to accelerate enterprise adoption of internally-developed AI applications.
Palantir's Government Business and the Advana Opportunity
Additionally, PLTR’s sizable government business continues to grow in strategic importance. In late 2024, the US government unveiled a significant 10-year, $15bn budget allocation to the Department of Defense's (DoD) Advana project — an integrated enterprise data analytics and visualization platform designed to unify and improve data visibility across DoD operations. Advana evolved from an earlier DoD initiative called "Vantage," originally developed by the DoD’s central finance office specifically to address persistent challenges around audit compliance and financial transparency. Notably, Vantage - built on Palantir’s software - has seen broad adoption far beyond its initial financial management use case, becoming widely utilized across multiple DoD divisions and functions, and effectively forming the backbone of the Advana initiative.
Despite some governmental reluctance to allow further penetration by a single vendor, DoD officials have explicitly stated their intent to maintain Advana as an open, diversified platform capable of integrating various third-party tools and solutions. Nevertheless, we believe there is limited practical alternative for the DoD. Legacy software solutions simply cannot match PLTR’s capabilities in terms of data integration, security, scalability, and ease-of-use. Consequently, with the return of a Trump administration — known for its push to aggressively cut legacy IT spending — we anticipate an even more favorable environment for Palantir, with legacy vendors potentially losing ground and Palantir capturing greater market share.
Given Palantir’s flexible, open architecture and universal data integration capabilities, the company is exceptionally positioned to serve as the backbone for Advana, positioning it to capture a substantial portion of the allocated $15bn budget. Furthermore, Advana represents just one among potentially numerous similar large-scale government projects in the future, further solidifying Palantir’s long-term growth trajectory in the public sector.
The biggest risk for Palantir remains the sustainability of the current momentum in the AI sector. Overall, we remain optimistic about AI's continued growth. However, there is greater uncertainty regarding NVIDIA's ability to consistently deliver growth and positive earnings surprises that prompt analysts to raise their forward projections. If a sudden shock occurs in AI-related spending — for instance, a sharp slowdown or halt in procurement of NVIDIA GPUs, leading to a cyclical revenue decline — this would likely trigger a broad correction across AI stocks. Although the timing and scale of such an event are uncertain, we consider this a non-zero probability risk that will inevitably materialize at some point in the future. The critical unknown is precisely when NVIDIA's revenue growth will peak or trough.
Palantir would be highly vulnerable to such an event, as its current valuation is heavily dependent on long-term growth potential and FCF. Investors should bear in mind that the main risk here is tied to macro developments in the AI industry, rather than idiosyncratic factors within Palantir itself.
Another potential concern is determining precisely when the current wave of AI technological advancements will peak. Before Grok-3, rumours were circulating that major AI labs had finished training their latest frontier models but were reluctant to release them because the performance gains were surprisingly small, which would shake conviction and confidence in AI's future potential and stunt recent momentum. This rumour appears to have some truth to it: OpenAI's recently rushed GPT-4.5 shows that, after two years and a roughly 30x increase in cost, OpenAI achieved only incremental improvements over its existing models. It is possible the industry stalls following GPT-4.5, as it essentially implies that the heavy capex required to build the next cutting-edge frontier model will not yield a satisfactory ROI, which could trigger a significant sell-off in both equity markets and the broader AI sector. However, recent developments such as TTT reasoning models and DeepSeek's efforts to enhance efficiency (increasing value per FLOP) suggest that the current AI wave may sustain another upward trajectory (S-curve) for roughly two more years.
The next critical milestone will likely occur when AI agent systems reach a stage of mainstream viability. Although the concept of intelligent AI agents has been widely promoted, it remains immature and limited mostly to non-mission-critical applications, such as customer support or software development tasks, where human supervision remains necessary. Despite impressive capabilities — such as Grok-3's reasoning model successfully solving 94% of AIME-level (considered among the world's hardest) math problems — current LLMs continue to struggle with sustained, complex tasks. For example, LLMs still fail to consistently answer fundamental mathematical comparisons (e.g., determining if 9.11 is larger than 9.8) and often break down unexpectedly when handling multi-step workflows involving multiple tools.
Currently, GPT-4o provides only moderate consistency, while o1 (with its more costly intermediate tokens) achieves somewhat higher reliability, though still insufficient for deployment in mission-critical enterprise use cases without human oversight. If advancements in agent tool capabilities, retrieval-augmented generation (RAG) accuracy, and other essential enterprise-grade features continue to progress significantly, Palantir should maintain its strong momentum for the foreseeable future.
https://scale.com/leaderboard
More recently, OpenAI also joined the agent stack play with its release of the Responses API, which supports LLM tool use, including three built-in tools: web search, file search, and computer use. Computer use essentially allows the LLM to act like a human operator, interacting with environments such as browsers via mouse and keyboard instructions issued by the model. However, even for OpenAI, whose specialized CUA model achieved a SOTA score for computer use, the scores remain quite low, roughly comparable to GPT-3's performance on general language tasks back in 2020. So we still have a long way to go here.
https://openai.com/index/new-tools-for-building-agents/
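For readers unfamiliar with the Responses API, a minimal sketch of a tool-enabled call is shown below; it is based on OpenAI's March 2025 announcement, and the tool type strings and response fields may have changed since, so treat it as illustrative rather than authoritative.

```python
# Minimal sketch of a Responses API call with a built-in tool, per OpenAI's
# March 2025 announcement; tool names and fields may differ in the current API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],  # file search and computer use are exposed as additional tool types
    input="Summarize today's top enterprise AI headlines in two bullet points.",
)
print(response.output_text)
```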
OpenAI Shifts Focus from Foundational Models to AI Applications and Consumer Interfaces
A key shift in OpenAI’s strategy is its increasing resemblance to a commercial tech company rather than a nonprofit AI research lab. This pivot is likely driven by growing investor concerns over the ROI of foundational model (FM) development, especially as competition intensifies with new entrants like DeepSeek and xAI delivering top-tier results. Investors worry that FM development requires massive capital outlays to build models that rapidly depreciate — often becoming obsolete within a few quarters as newer iterations emerge. As a standalone business, an FM lab faces an unattractive financial profile.
To justify its $150bn valuation and continue scaling ARR beyond $4bn, OpenAI must expand into higher-margin businesses. The most logical path is its ChatGPT app and website, which now boasts over 400 million MAUs and serves as a key pillar of its valuation beyond model research. In fact, OpenAI could eventually shift away from developing its own FMs altogether, instead leveraging models from DeepSeek, xAI, or others. This could even yield higher shareholder returns, as charging for model-serving interfaces — where gross margins can exceed 80% — is far more lucrative than FM development, where even best-case margins hover around 50%, and worst-case scenarios result in deep cash burn. With existing API revenue unlikely to fully recoup model training costs, OpenAI’s move toward AI applications and consumer interfaces represents a more financially sustainable strategy.
Source: The Information
Beyond dominating the consumer AI interface, OpenAI is increasingly building an enterprise AI platform stack to complement its foundational models. Initially, OpenAI relied on Microsoft, leveraging its mature enterprise GTM and Azure infrastructure. However, as Microsoft’s AI Studio failed to scale as quickly as expected, OpenAI has gradually moved to establish its own enterprise presence. This shift began with the Assistant API, announced at OpenAI’s first Dev Day in November 2023, and has since evolved into a broader agent stack designed to make OpenAI the default choice for AI application development.
OpenAI's approach to AI PaaS (Platform-as-a-Service) stems from its FM expertise, gradually adding platform capabilities to streamline AI development. In contrast, hyperscalers, Snowflake, and Databricks have built their AI PaaS offerings from their cloud data platform roots. Meanwhile, PLTR took an entirely different approach—starting from end-use applications and working backward to productize internal tools like SDDI, Ontology, and Foundry into a hardened enterprise AI stack.
For example, PLTR AIP enables multinational enterprises to integrate fragmented data silos and use RAG to feed relevant information to LLM agents. By contrast, OpenAI’s file search currently supports only 100GB of files, imposing constraints that require workarounds. Additionally, using OpenAI’s file search requires transmitting data outside the security perimeter—an issue for regulated industries like finance and healthcare, where strict compliance standards prevent sensitive data from being exposed to third-party AI services.
OpenAI’s AI PaaS strategy mirrors its consumer AI ambitions: securing a lucrative position in a high-value, defensible market while FM development becomes increasingly commoditized. This shift further underscores PLTR’s unique positioning within the AI stack.
OpenAI vs. PLTR AIP: Which Is Better?
Rather than a direct winner-takes-all battle, the competition between OpenAI and PLTR AIP depends on the use case. OpenAI's approach resembles Apple’s iPhone, starting with a refined but minimal set of features before gradually expanding capabilities over time. PLTR, however, is neither Android nor Apple—it is a category of its own. Decades of upfront R&D have made AIP highly polished and enterprise-ready from the start, but its agile development approach continues to introduce new features, some of which may still be maturing. Unlike Android or Databricks, PLTR maintains a rigorous governance and security layer, ensuring that all components are controlled, tested, and secure by default.
PLTR's platform is designed to handle diverse data sources, environments, and deployment scenarios, whereas OpenAI’s enterprise adoption is likely to be more limited to organizations with in-house AI teams that don’t require complex deployment or local model hosting. OpenAI’s solutions also pose challenges for enterprises that need strict security perimeters to prevent fine-tuning data from being shared externally.
The AI space is vast enough to accommodate multiple players, but PLTR’s breadth, maturity, market penetration, performance, and forward-deployed engineering give it a strong position to capture a majority share. One potential exception could be xAI, which is rapidly catching up—not just in FM development but also in consumer AI interfaces and AI stacks. Following the Grok-3 release, xAI’s AI assistant became the most downloaded in its category, and its next step is to develop a competitive AI dev stack to rival OpenAI while accelerating ARR growth.
Notably, xAI is now hiring forward-deployed engineers, borrowing from PLTR’s deployment model to drive adoption and enterprise readiness. xAI operates within the same tight-knit circle of founders, VCs, and talent as PLTR, given Elon Musk’s unofficial ties to the PayPal Mafia and his alignment with PLTR’s broader technology and policy views. It wouldn’t be surprising to see PLTR talent moving to xAI, reinforcing its ability to build credible enterprise AI products.
A Key Variable: Access to Frontier Models
A crucial factor for PLTR’s AIP will be access to frontier models — not necessarily the most powerful, but those that offer the best performance per dollar. In production use cases, a model that is 50% more performant than another but 10x more expensive is often less attractive if both can achieve the required outcomes. The ability to balance performance, cost efficiency, and deployment flexibility will be critical in determining which AI platforms dominate enterprise adoption.
Although building AIP by working backward from end-user applications has been highly successful for PLTR, there remains a risk in its lack of direct ownership in the FM layer. Currently, DeepSeek leads on the perf/$ front, and if this trend continues, it could pose a challenge for PLTR, given its hawkish stance toward China and the Chinese tech industry. If DeepSeek becomes politicized, it could impose restrictions on its open-source models—potentially modifying MIT licenses or introducing new ones to exclude military applications or require approval for commercial use by companies generating over $100 million in revenue.
If the US fails to maintain leadership in perf/$ for LLMs and cedes that ground to vendors PLTR perceives as adversaries, or if the US does lead but with dominance concentrated under META, which could then impose higher pricing, PLTR's future margin potential may come under pressure.
Ultimately, foundational model development will remain capital- and expertise-intensive, controlled by a small number of labs. PLTR's long-term success with AIP depends on the competitive dynamics and commoditization trajectory of the FM industry. If FM development follows the memory industry model, with aggressive competition and open-source initiatives driving commoditization, PLTR can continue to benefit from lower model costs. However, if the FM market consolidates into a few dominant players operating as an oligopoly, they may coordinate to raise prices, strengthening industry-wide profitability but squeezing PLTR’s margins while increasing competitive pressure from OpenAI.
This risk extends beyond language models. Continuous commoditization is also necessary across AI fields such as computer vision, real-world physical agents, video, audio, and computer interaction. If OpenAI or other integrated AI vendors control these models, they will gain a significant advantage in delivering end-to-end AI solutions. In contrast, PLTR—lacking a proprietary FM stack — could face challenges ensuring that foundation models do not become bottlenecks limiting overall system performance.