Results

Airbnb: A look at its AI strategy

Airbnb has launched an AI agent built on 13 different models for customer service in its app and plans to add more AI tools in the quarters to come. The goal: Transform the Airbnb app into one that is AI native.

CEO Brian Chesky said on Airbnb's second quarter conference call that the company's move into complementary services paid off in the second quarter with better-than-expected results. Airbnb has retooled its tech stack and is now rolling out improvements to its app and services at a faster pace.

"We are massively ramping up development of product development pace at Airbnb. We do these typically biannual releases, but we are now iterating very, very quick even between these releases," said Chesky.

Airbnb is navigating multiple shifts. The company is expanding into complementary markets including home services and experiences, shifting its marketing approach to emphasize social media over search and TV, and optimizing everything from pricing to customer service as it expands share globally.

In the second quarter, Airbnb delivered strong metrics as it expanded its AI-powered customer service agent to 100% of US users and scaled new offerings. The company said it saw travel demand accelerate from April to July despite an uncertain economy. Airbnb reported second quarter net income of $642 million on revenue of $3.1 billion, up 13% from a year ago.

As for the outlook, Airbnb said it expects revenue growth of 8% to 10% in the third quarter, with growth in nights and experiences booked stable relative to the second quarter. The company did say it expects lower margins due to investments in new markets including Airbnb Services and Airbnb Experiences.

In addition, Airbnb redesigned its app so users can book across its services in one place. The company has optimized continually since the May launch, and that product cadence will be necessary as Airbnb carries out its AI strategy. Chesky outlined Airbnb's approach to AI and agents. Here's a look:

Start with the hardest problem. Chesky said most AI efforts in travel have revolved around trip planning and inspiration. Airbnb has gone with customer service as an AI use case because it is "the hardest problem because the stakes are high, you need to answer this quickly and the risk of hallucination is very high."

"You cannot have a high hallucination rate. And when people are locked out, they want to cancel reservation, they need help, you need to be accurate. And so what we've done is build a custom agent built on 13 different models that have been tuned off of tens of thousands of conversations. We rolled this out throughout the United States in English. And this has reduced 15% of people needing to contact a human agent when they interact instead with this AI agent," said Chesky.

The plan now is to bring that customer service agent to more languages, he added.
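Airbnb hasn't published its agent architecture, but the pattern Chesky describes (route a query to a specialized model, escalate to a human when confidence is low) can be sketched in a few lines of Python. All model names, keywords and thresholds below are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch only: Airbnb has not published its architecture.
# Route a support query to a specialized model by intent, and escalate
# to a human agent when the answer's confidence is too low to risk a
# hallucination. Model names, keywords and the threshold are invented.

ROUTES = [
    (("cancel",), "cancellation-model"),
    (("locked out", "lockout"), "access-model"),
    (("refund", "charge"), "billing-model"),
]

def route(query: str) -> str:
    """Pick a specialized model via simple keyword intent matching."""
    q = query.lower()
    for keywords, model in ROUTES:
        if any(k in q for k in keywords):
            return model
    return "general-support-model"

@dataclass
class ModelAnswer:
    text: str
    confidence: float  # 0.0-1.0, e.g. produced by a verifier model

def answer_or_escalate(ans: ModelAnswer, threshold: float = 0.85) -> str:
    # High-stakes support: below-threshold answers go to a human rather
    # than risking a hallucinated response.
    return ans.text if ans.confidence >= threshold else "ESCALATE_TO_HUMAN"
```

In production, each route would point at one of the tuned models Chesky mentions, and the confidence signal would come from evaluation models rather than a hand-set threshold.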

Increase personalization and context. Chesky said the customer service AI agent will become "more personalized and more agentic" throughout the next year. "The AI agent will not only tell you how to cancel your reservation, but it will also know which reservation you want to cancel, cancel it for you and it can start to search and help you plan and book your next trip."

Expand AI into travel search and planning from customer service. Chesky said the plan is for Airbnb to leverage AI throughout its use cases and app. Airbnb is looking at multiple expansion areas including hotel bookings, "especially boutiques and bed and breakfasts" and independents in Europe.

Become AI native. Chesky said Airbnb will become an AI-native app. He noted that today most of the top 50 applications on Apple's App Store aren't AI native; ChatGPT is the top app and a few other AI natives make the list, but for the most part the top players are not.

"You've got basically AI apps and kind of non-AI native apps. And Airbnb would be a non-AI native application.

Over the next couple of years, I believe that every one of those top 50 slots will be AI apps--either start-ups or incumbents that transform into being AI native apps. And I think at Airbnb, we are going through that process right now of transitioning from a pre-generative AI app to an AI native app. We're starting with customer service. We're bringing it into travel planning. So it's really setting the stage."

Here's a look at the characteristics of AI natives per the Constellation Research grid. 

Stay focused and strategic. Chesky said it's premature to think of AI chatbots and agents as the Google replacement. The concept of one AI tool to rule all isn't proven. Chesky's bet is that there will be specialized AI models in categories.

Chesky said that ChatGPT is a great product, but the approach isn't exclusive to OpenAI. He said:

"Airbnb can also use the API, and there are other models that we can use. In the coming years, you're going to have a situation where these large AI models can take more and more, and more things will start there, but people won't often go to one chatbot.

You're going to also have start-ups that are going to be custom-built to do a specific application, and you're going to have incumbents that make a shift to AI. It's not enough to just have the best model. You have to be able to tune the model and build a custom interface for the right application.

The key thing is going to be for us to lead and become the first place for people to book travel on Airbnb. As far as whether or not we integrate with AI agents, I think that's something that we're certainly open to. Remember that to book an Airbnb, you need to have an account, you need to have a verified identity. Almost everyone who books uses our messaging platform. So I don't think that we're going to be the kind of thing where you just have an agent or operator book your Airbnb for you because we're not a commodity. But I do think it could potentially be a very interesting lead generation for Airbnb."

Data to Decisions Innovation & Product-led Growth Marketing Transformation Matrix Commerce Next-Generation Customer Experience Chief Information Officer

Coffee Corner: SAP Quarterly Updates with Holger Mueller & Martin Fischer

Join Holger Mueller and Martin Fischer for another #CoffeeCorner ☕ discussion about SAP's Q2 2025 #earnings call and other updates.

Highlights include:

📌 SAP's Business Data Cloud partnership with Databricks
📌 WalkMe acquisition enables #AI integration across platforms
📌 Enterprise architects now supporting Rise customers
📌 ABAP LLM showing promise for legacy code migration

AI is transforming #enterprise software - are you ready? 

Watch the discussion: https://www.youtube.com/embed/YWGfcojknY8

Cohere North generally available

Cohere North, a collaborative agentic AI platform, is generally available following testing by a bevy of large enterprises. Cohere launched Cohere North in January in a move that aims to broaden the company's reach beyond large language models (LLMs).

The availability of Cohere North is part of a larger trend by foundational model companies to surround models with workflows and enterprise use cases. A big selling point of Cohere North is that it can be deployed privately to ensure privacy of enterprise data.

Enterprises that have used Cohere North to deploy AI agents include RBC, Dell, LG CNS, Ensemble Health Partners and Second Front. Cohere recently announced a strategic partnership with Bell Canada to provide full-stack sovereign AI solutions for government and enterprise customers across Canada, and to deploy proprietary, secure AI solutions within Bell. That partnership will improve Cohere North.

In addition, Cohere and RBC developed North for Banking, a configuration designed for financial services. North has also been deployed within LG CNS in South Korea. Dell also includes Cohere North in its Dell AI Factory stack.

Cohere North includes the following:

  • Generative and search models.
  • Customizable agents.
  • Built-in workflow automations.
  • Integration with enterprise data across systems and services.
  • An architecture that enables Cohere North to run privately with as few as two GPUs.

Features of Cohere North include:

  • Chat and search across data repositories and content.
  • Custom tools and integrations with Gmail, Slack, Salesforce, Outlook and SharePoint.
  • The ability to integrate with any Model Context Protocol (MCP) server.
  • Asset creation of documents, financial reports and research.
  • Automated workflows for processes.
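For readers unfamiliar with the Model Context Protocol: it is an open protocol built on JSON-RPC 2.0 that lets an assistant discover and call tools exposed by a server. A minimal sketch of the message shapes involved, assuming the standard tools/list and tools/call methods and a hypothetical search_docs tool:

```python
import json
from itertools import count

# Sketch of the JSON-RPC 2.0 messages an MCP client exchanges with a server.
# This shows the wire shape only; a production integration would use an MCP
# SDK over a transport such as stdio or HTTP. The "search_docs" tool name
# is hypothetical.

_ids = count(1)

def mcp_request(method, params=None):
    """Build a JSON-RPC 2.0 request string with an auto-incremented id."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Discover the tools a server exposes, then invoke one of them.
list_msg = mcp_request("tools/list")
call_msg = mcp_request(
    "tools/call",
    {"name": "search_docs", "arguments": {"query": "Q2 earnings"}},
)
```

Because the protocol is standardized, a platform like North can point the same client at any compliant server and inherit its tools.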

According to Cohere, Cohere North is designed for secure deployments across on-premises infrastructure, hybrid clouds, VPCs and air-gapped environments. Cohere is aiming at regulated industries with Cohere North.

Cohere North includes access controls and permissions, autonomy policies, security testing, system observability, flexible deployment options and compliance with various regulations.


OpenAI's open weight models gain AWS distribution: Why it matters

OpenAI released two open-weight models--gpt-oss-120b and gpt-oss-20b--but the real news revolves around distribution. These two new OpenAI models have a wide distribution including availability on Amazon Web Services for the first time.

The new OpenAI models, which can run locally, on-device and through third-party providers, are not surprisingly available on Microsoft Azure, but also Hugging Face, vLLM, Ollama, llama.cpp, LM Studio, AWS, Fireworks, Together AI, Baseten, Databricks, Vercel, Cloudflare and OpenRouter.

For good measure, OpenAI said it worked with Nvidia, AMD, Cerebras and Groq to optimize the models.

In a blog post, AWS said gpt-oss-120b and gpt-oss-20b will be available in Amazon Bedrock and Amazon SageMaker JumpStart. The models will also be available in frameworks for AI agent workflows such as AWS' Strands Agents.

The models and why they matter

According to OpenAI, the gpt-oss-120b model "achieves near-parity" with OpenAI o4-mini on core reasoning benchmarks running on a single 80 GB GPU. The gpt-oss-20b model delivers similar results to OpenAI o3-mini and can run on devices with 16 GB of memory for local inference.
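The stated hardware budgets are easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming roughly 4-bit quantized weights throughout (OpenAI describes MXFP4 quantization for the MoE layers) and the published parameter counts:

```python
# Rough sanity check of the stated memory budgets, weights only. OpenAI says
# the models use ~4-bit MXFP4 quantization for the MoE weights; treating the
# whole model as 4.25 bits per parameter is a simplifying assumption, and
# activations plus KV cache need additional headroom.

def weight_memory_gb(params_billion: float, bits_per_param: float = 4.25) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# gpt-oss-120b has roughly 117B total parameters, gpt-oss-20b roughly 21B.
print(weight_memory_gb(117))  # ~62 GB, under the 80 GB GPU budget
print(weight_memory_gb(21))   # ~11 GB, under the 16 GB device budget
```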

OpenAI also outlined safety efforts and early development with AI Sweden, Orange and Snowflake. The model card has all the details, but the primary takeaway is that OpenAI's open-weight models perform well.

You'd be forgiven for glazing over a bit with OpenAI's latest models. Alibaba's Qwen just released a new image model, Anthropic released Claude Opus 4.1 and improvements to Opus 4 and Google DeepMind launched Genie 3, which can generate 3D worlds.

Simply put, it's another day, another model improvement complete with various charts of benchmarks.

The rise of good enough LLMs

The big picture for OpenAI revolves around distribution and enterprise throughput. The AWS availability is a win for OpenAI, may open some doors and more importantly adds an option for enterprises.

The LLM game has become one that revolves around spheres of influence. Microsoft Azure has plenty of model choices, but is clearly aligned with OpenAI. Amazon is a big investor in Anthropic. Google Cloud is all about Gemini, but also has a lot of choice including Anthropic. OpenAI models to date have been available either direct or through Microsoft.

Add it up and OpenAI has to be available on multiple clouds so the open weight move makes a lot of sense.

Here's why: OpenAI can't afford to lose the enterprise because that's what'll pay the bills. Sure, OpenAI may upend Google in search. Yes, there are even OpenAI devices on tap for consumers. But the real dough will be in the enterprise.

The problem? Enterprises are going to value price performance and practical applications with real guardrails. That's why I find AWS' practical approach so interesting. It may cause some consternation among analyst types, but the practical appeal is what CxOs want and need if they're going to take AI agents to production.

According to Menlo Ventures, Anthropic has 32% share of enterprise AI followed by OpenAI at 25% and Google at 20%. And Google's Gemini models are all over the enterprise as Google Cloud makes a big play to be the AI cloud layer. In a few months, it's highly likely that enterprise models will revolve around Anthropic and Google's Gemini family. We're in the era of good-enough LLMs and enterprises simply want the best model for the use case.

OpenAI's launch of open-weight models with broader distribution can keep it in the game. Also keep in mind that AI spending is going to revolve around inference in the enterprise.

We can haggle over benchmarks and performance all day, but like all things enterprise, distribution matters.


AMD Q2 on target on strength of data center, PC chips

AMD's second quarter was in line with expectations as its data center unit delivered sales growth of 14%. The company said US restrictions on exports to China hurt sales.

The company reported second quarter earnings of $872 million, or 54 cents a share, on revenue of $7.68 billion, up 32% from a year ago. Non-GAAP earnings were 48 cents a share.

Wall Street was looking for non-GAAP second quarter earnings of 48 cents a share on revenue of $7.43 billion.

AMD said the quarter was hit by US government export controls on its AMD Instinct MI308 data center GPUs. Those restrictions led to an $800 million inventory charge.

Dr. Lisa Su, AMD CEO, said the company saw record server and PC processor sales and "robust demand across our computing and AI product portfolio." Su added that AMD expects more market share gains for its EPYC and Ryzen processors. Those chips are mostly benefiting from Intel's malaise. Intel recently reported a rocky quarter.

Key figures in the second quarter include:

  • Data center revenue was $3.2 billion in the second quarter, up 14%.
  • Client and gaming revenue was $3.6 billion, up 69% from a year ago. PC chips surged due to AMD's latest Ryzen desktop processors. Gaming revenue surged 73% due to semi-custom revenue and AMD Radeon GPU demand.
  • Embedded revenue was $824 million, down 4% from a year ago.

As for the outlook, AMD said third quarter revenue will be about $8.7 billion, give or take $300 million. The outlook doesn't include any revenue from AMD Instinct MI308 shipments to China.

Speaking on an earnings conference call, Su said:

  • "There are now nearly 1,200 EPYC cloud instances available globally and providers continue expanding both the breadth and regional availability of their AMD offerings. EPYC enterprise deployment grew significantly from the prior quarter, supported by new wins with large technology, automotive manufacturing, financing services and public sector customers."
  • "Our sovereign AI engagement accelerated in the quarter as governments around the world adopt AMD technology to build secure AI infrastructure and advance their economy. As one example, we announced a multi billion dollar collaboration with Germany to build AI infrastructure powered entirely on AMD GPUs, CPUs and software."
  • "Demand for AMD powered notebooks was strong with sales growing by a large double digit percentage year over year. We drove a richer mix of higher ASP mobile cards year over year, as we expanded our share in a premium notebook segment where our Ryzen AI 300 CPUs deliver leadership, performance and value for both general purpose and AI workloads. We also closed new enterprise wins with Forbes 2000 pharma, tech, automotive, financial services, aerospace and healthcare companies."

Holger Mueller, an analyst at Constellation Research, said:

"AMD can't catch a break as it can't ignite data center revenue with its AI chips. And while it's fair that they may be affected by tariffs and export controls, the writing was on the wall. That client and gaming are the growth engine to save the quarter is good overall, but doesn't help Su and team play the data center and AI boom. Next quarter will be key for the formation of AMD's personality."


Monday's Musings: Here Come the AI Exponentials


Legacy Companies are Thinking in the Wrong Scale

Whatever gains were expected from digital transformation will be blown to shreds by AI Exponentials at an exponential scale not seen since the advent of the internet. While at first this may sound like more AI hyperbole, early indications show that organizations that begin their journey as AI Natives by design have a tremendous advantage over the AI Enabled, who must first work down their legacy technology, cultural, and financial debt.

Constellation sees a progression for AI maturity that begins with AI Luddites and ends with AI Exponentials. Here are the defining characteristics (see Figure 1).

Source: Constellation Research, Inc.

Inside the Ten Attributes of AI Exponentials

  1. Level of AI investment. Organizations must overcome technical debt and then invest at double or triple their current rates in AI fundamentals such as data strategy, decision automation, and language models.
  2. Stage of AI maturity. Constellation’s AI maturity model progresses through augmentation, acceleration, automation, agentic, and autonomous.
  3. Value of data.  How organizations value their data reflects their AI maturity.  Organizations that take their data for granted differ from those that deliberately source, curate, nurture, and renew it.
  4. Monetization models. Traditional businesses monetize the sale of products and services.  Greater levels of digitization enable experiences, outcomes, and revenue share.
  5. Means of production. This attribute assigns the default assumption of how work gets done and ranges from full human involvement to full digital labor.
  6. Agentic usage. Organizations range from no agents to multiple agents and multiple platforms.  The goal is to create an agentic ecosystem.
  7. Machine scale. This attribute addresses the percentage of human led or machine led output.
  8. Profit per employee.  As AI investments increase, expect digital labor to create massive leverage as profit per employee grows.
  9. Growth expectations.  AI ushers in an era of exponential efficiency and growth. Expectations move from double-digit percentage growth to ten times and hundred times growth.
  10. Partnerships. Partnerships mature from technology providers and enablers, to new data signals and data collectives that share data.  The most advanced AI Exponentials build Data, Inc. business models that monetize data in complex ecosystems.

The Bottom Line: AI Exponentials are Here

AI Natives and AI Exponentials have taken the market by storm.  In three years, Constellation expects the first $1 billion services company to be staffed by 1000 FTEs.  A 100 person software company will take out a $100 billion ARR company in the next three years.  The first single person $1B ARR company will arrive within five years.  Companies that are driving millions of ARR with few employees dominate the new landscape. With AI intelligence doubling every 7 months, the pace of innovation has never been faster.  Constellation Research is tracking these new companies as the Age of AI completely changes the landscape.

Your POV

Have you made the frameshift to exponential thinking? Will you be ready for AI Exponentials?  When will you choose an AI Exponential over a legacy vendor?

Add your comments to the blog or reach me via email: R (at) ConstellationR (dot) com or R (at) SoftwareInsider (dot) org. Please let us know if you need help with your strategy efforts. Here’s how we can assist:

  • Working with your boards to keep them up to date on technology and governance.
  • Connecting with other innovation minded leaders
  • Sharing best practices
  • Vendor selection
  • Implementation partner selection
  • Providing contract negotiations and software licensing support
  • Demystifying software licensing

Reprints can be purchased through Constellation Research, Inc. To request official reprints in PDF format, please contact Sales.

Disclosures

Although we work closely with many mega software vendors, we want you to trust us. For the full disclosure policy, stay tuned for the full client list on the Constellation Research website. * Not responsible for any factual errors or omissions.  However, happy to correct any errors upon email receipt.

Constellation Research recommends that readers consult a stock professional for their investment guidance. Investors should understand the potential conflicts of interest analysts might face. Constellation does not underwrite or own the securities of the companies the analysts cover. Analysts themselves sometimes own stocks in the companies they cover—either directly or indirectly, such as through employee stock-purchase pools in which they and their colleagues participate. As a general matter, investors should not rely solely on an analyst’s recommendation when deciding whether to buy, hold, or sell a stock. Instead, they should also do their own research—such as reading the prospectus for new companies or for public companies, the quarterly and annual reports filed with the SEC—to confirm whether a particular investment is appropriate for them in light of their individual financial circumstances.

Copyright © 2001 – 2025 R Wang and Insider Associates, LLC All rights reserved.

Contact the Sales team to purchase this report on an a la carte basis or join the Constellation Executive Network.


Thomson Reuters brings agentic AI to legal workflows

Thomson Reuters launched CoCounsel Legal with Deep Research and guided workflows, an AI agent designed to answer legal questions, develop research reports, draft legal documents and provide workflows for discovery and depositions.

The launch highlights how agentic AI can be utilized in industries such as legal and evolve from tools that require prompts to agents that can take on tasks.

Thomson Reuters has a bevy of legal products and services including CoCounsel, a generative AI assistant for legal and accounting professionals, Westlaw and Practical Law. Thomson Reuters has more than 20 billion legal documents that were used to train its models.

That Thomson Reuters data and those reasoning models are complemented by domain experts. CoCounsel Legal will be a separate product addressing litigation outcomes and will also be embedded into other services such as Westlaw Advantage.

Key items:

  • CoCounsel Legal includes Deep Research, which is grounded with Thomson Reuters content and tools such as Westlaw Advantage.
  • CoCounsel Legal is built to reason, plan and deliver legal research.
  • The AI agent is designed to understand process, sourcing of answers and argument foundations.
  • CoCounsel Legal can generate multi-step research plans, trace its logic, deliver Westlaw citation-backed reports, draft complaints, discovery requests and responses, and operate with humans in the loop.
  • Thomson Reuters said it tested CoCounsel with Deep Research with more than 1,200 customers and attorneys.

According to Thomson Reuters, more than 12,200 law firms, 4,900 corporate legal departments and the majority of top US Courts and Am Law 100 firms use CoCounsel.

I caught up with David Wong, Chief Product Officer at Thomson Reuters, and Omar Bari, VP of Applied Research at Thomson Reuters Labs, to talk about the approach to CoCounsel Legal with Deep Research and guided workflows. Here are the key points:

A model and cloud agnostic approach. CoCounsel Legal with Deep Research uses multiple models that are best suited for the task at hand. Thomson Reuters contracts with OpenAI, Google and Anthropic for foundational models and the big four cloud providers, said Wong.

Developing multi-agent systems. Bari said the multi-agent system behind CoCounsel Legal with Deep Research features agents for research, planning, discovery and orchestrating workflow. "We needed multiple agents and the ability to launch in parallel," said Bari.

Bari said:

"We knew pretty quickly that we wanted to build a custom agent system for legal deep research that lets agents navigate Westlaw content like an expert researcher would. And that meant taking a lot of the rich content that we have in Westlaw and making it available via tools to agents and then using the best frontier models for the job." 

The importance of process. Wong said Thomson Reuters built CoCounsel Legal with Deep Research based on the process that's used by trained legal researchers. "Legal research is taught in law school as a discipline. Legal research is often different because you're trying to often support or to critique an argument or some type of legal proceeding," said Wong. "We mimicked the process used by trained researchers and the agent is autonomous. Humans trained the models and created the process steps." When the system runs it is fully autonomous, but humans are focused on validation, quality and evaluation.

Build your orchestration layer. Bari and Wong said Thomson Reuters built its own orchestration framework for CoCounsel Legal with Deep Research. "We built the agent scaffolding ourselves," said Bari. The effort required continuous tweaking for high quality function calling, instructions, evaluation, memory management and orchestration, he added.

There are agent orchestration offerings, but most are first generation. "It's pretty easy to get to a prototype or demo, but production requires a lot of work on the details and each piece of orchestration," said Bari.
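The pattern Bari describes (a planner fans tasks out to agents in parallel, with validation gating the merged result) can be sketched with nothing but the standard library. The planner, agents and validator below are simplified stand-ins, not Thomson Reuters code:

```python
from concurrent.futures import ThreadPoolExecutor

# Skeletal sketch of a plan -> parallel agents -> validate orchestration
# loop. Every function here is a stand-in: a real system would use LLM
# calls for planning, tool-using agents for execution, and evaluation
# models plus humans in the loop for validation.

def plan(question: str) -> list[str]:
    # A real planner would produce a multi-step research plan.
    return [f"search: {question}", f"citations: {question}"]

def run_agent(task: str) -> str:
    # Stand-in for a tool-using agent (e.g., content search, drafting).
    return f"result[{task}]"

def validate(results: list[str]) -> bool:
    # Automated evaluation or human review would happen here.
    return all(r.startswith("result[") for r in results)

def orchestrate(question: str) -> list[str]:
    tasks = plan(question)
    with ThreadPoolExecutor() as pool:  # launch agents in parallel
        results = list(pool.map(run_agent, tasks))
    if not validate(results):
        raise RuntimeError("validation failed; escalate to human review")
    return results
```

As Bari notes, the hard production work sits in the details this sketch elides: function-calling quality, memory management, error handling and evaluation at each step.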


Palantir's software revolution: Forget sales people, let value do the talking

Palantir CEO Alex Karp is a bit opinionated and garners his share of haters. But the returns on Palantir are attracting enterprises to the point where word of mouth among customers scales.

In a stellar second quarter, Palantir gave investors a little bit of everything. Palantir delivered revenue growth of 48% in the second quarter as US commercial and government sales surged. The company also raised its outlook.

Palantir reported second quarter earnings of $327 million, or 13 cents a share, on revenue of $1.004 billion, up 48% from a year ago. Wall Street was looking for non-GAAP earnings of 14 cents a share on revenue of $939.5 million.

In a shareholder letter, Karp quoted himself and C.S. Lewis and argued that his company "will become the dominant software company of the future."

Palantir reported second quarter US revenue of $733 million, up 68% from a year ago. US commercial revenue was $306 million, up 93% from a year ago. US government revenue was up 53% from a year ago to $426 million. The company closed 157 deals of at least $1 million and 66 of at least $5 million. Palantir landed 42 deals worth more than $10 million.

As for the outlook, Palantir projected third quarter revenue between $1.083 billion and $1.087 billion with adjusted income from operations of $493 million to $497 million. For 2025, Palantir projected revenue between $4.142 billion and $4.15 billion. US commercial revenue for 2025 will top $1.302 billion, up about 85% from a year ago. Adjusted income from operations will be between $1.912 billion and $1.92 billion.

While the numbers shined, Palantir’s conference call with analysts featured a bevy of insights. Here’s a look:

AIP and enterprise traction

Ryan Taylor, Chief Revenue and Legal Officer, said enterprises are using the company's software to make LLMs work as they should. "LLMs simply don't work in the real world without Palantir. This is the reality fueling our growth," said Taylor.

He cited customers such as Fannie Mae, Citibank, Nebraska Medicine and Lear as customers that are seeing strong returns. Taylor also noted Palantir's AIP is gaining traction at a rapid clip. This commercial momentum started to bubble up in late 2023 with Palantir's boot camps for AIP.


"Lear Corporation recently signed a 5-year extension. Over the past 2.5 years, they have leveraged foundry and AIP to support over 11,000 users and more than 175 use cases, including proactively managing their tariff exposure, automating multiple administrative workflows and dynamically balancing their manufacturing lines," said Taylor.

Ontology and AI FDE

Shyam Sankar outlined enterprises that have replatformed on Palantir, largely referring to moves to the company's data ontology with help from the company's AI FDE (Forward Deployed Engineer). An FDE is an engineer focused on deploying Palantir and moving customers to value quickly. AI FDE, launched last month at DevCon 3, is an autonomous agent that will run Palantir's AIP platform, delegate tasks and optimize as needed.

Sankar said:

"A substantial development over the last couple of quarters is the realization and acceleration of our vision of Ontology web services as an architectural concept for our customers. AIP isn't just software our customers use, it's software, our customers are building their software on. Software companies are re-platforming away from the highly unopinionated services and building blocks of the hyperscaler stack onto AIP with its highly opinionated building blocks that get you to value 10x faster."

Yes, Palantir's argument is that the building blocks in your tech stack need to be opinionated and decisive.

Sankar also noted that Palantir's investment in AI FDE is shortening time to value.

"AI FDE is designed to enable autonomous execution across a wide array of tasks, including creating and editing ontology, building data transforms, creating functions, debugging issues and building applications. With its own closed-loop error handling, AI FDE can identify and correct issues and notify human users if needed, and it's been designed for seamless collaboration with humans in the loop through integration with AIPs branching," said Sankar.
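The closed-loop error handling Sankar describes, attempt a task, self-correct on failure, and escalate to a human when corrections run out, is a common agent control pattern. Here's a generic sketch of that loop; the function names and structure are illustrative, not Palantir APIs:

```python
# A generic closed-loop agent pattern of the kind Sankar describes:
# attempt a task, detect errors, retry with a correction, and notify
# a human after repeated failures. All names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskResult:
    ok: bool
    detail: str = ""

def run_with_error_loop(
    attempt: Callable[[], TaskResult],
    correct: Callable[[str], None],
    notify_human: Callable[[str], None],
    max_retries: int = 3,
) -> bool:
    """Run a task; on failure, apply a correction and retry.

    After max_retries failures, hand off to a human in the loop."""
    for _ in range(max_retries):
        result = attempt()
        if result.ok:
            return True
        correct(result.detail)  # e.g. fix a data transform or function
    notify_human(f"escalating after {max_retries} failed attempts")
    return False
```

The key design point is the escalation path: the agent operates autonomously within a bounded retry budget, and humans only see the cases it cannot resolve.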

The end of software sales?

Karp was asked whether Palantir could continue to grow without a direct sales force.

His answer was clear: Palantir isn't going to load up on sales people. No fancy dinners. No effort to "convince you to buy something."

Karp added:

"Our primary sales force now, and I think likely in the future, are going to be current customers telling other customers."

A sales army would just diminish Palantir's credibility. "Yes, you don't have 10,000 people roaming around selling something they don't understand. But the advantage is we go from once we come in the door, we come in with enormous credibility," said Karp. "The person we're selling to believes we will make them a lot of money, save them expenses or we will make their soldiers safer and more lethal."

Because word of mouth around Palantir's value is strong, the company can open with higher-level discussions with CxOs, said Karp, who argued the game is about value creation more than software. Karp also took indirect jabs at SAP on the earnings call.

While Palantir isn't going all-in on direct salespeople, it is building out its network with systems integrators including Deloitte, Accenture, Booz Allen and a bevy of others.

Time to value and outright cockiness

To say Karp and Palantir are lightning rods would be an understatement.

Karp, never shy about an opinion or two, said the ROI generated with Palantir will do the talking. "I've been cautioned to be a little modest about our bombastic numbers, but honestly, there's no authentic way to be anything, but have enormous pride and gratefulness about these extraordinary numbers," said Karp.

Karp noted that Palantir's ability to push the Rule of 40 score to 94% shows the company is firing on all cylinders.
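The Rule of 40 sums a software company's revenue growth rate and its profit margin; a combined score above 40% is considered healthy. Palantir's 94% pairs the 48% revenue growth reported this quarter with an adjusted operating margin in the mid-40s, which is consistent with the company's full-year guidance ($1.912 billion of adjusted income on roughly $4.142 billion of revenue). A minimal sketch of the arithmetic:

```python
def rule_of_40(revenue_growth_pct: float, margin_pct: float) -> float:
    """Rule of 40 score: revenue growth rate plus profit margin, in percent."""
    return revenue_growth_pct + margin_pct

# 48% Q2 revenue growth; the ~46% adjusted operating margin is implied
# by Palantir's full-year guidance ($1.912B adjusted income / $4.142B revenue).
growth = 48.0
margin = round(100 * 1.912 / 4.142, 1)
print(rule_of_40(growth, margin))  # lands at ~94, matching the score Karp cited
```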

"There are almost no parasitic elements to this company. We have a small sales force. We have very little BS internally. We have a flat hierarchy. We have the most qualified interesting people heterodox on their beliefs," said Karp.

Karp said Palantir has earned the right to say what it wants. The company is teaching its customers how to attain its unit economics and telling CxOs: "If you want to have your first amendment rights to an opinion again, get our unit economics and then you too can say things that are true in public like we do."


Palantir's Q2: Growth in US commercial, government accelerates

Palantir delivered revenue growth of 48% in the second quarter as US commercial and government sales surged.

The company also raised its outlook.

Palantir reported second quarter earnings of $327 million, or 13 cents a share, on revenue of $1.004 billion, up 48% from a year ago.

Wall Street was looking for non-GAAP earnings of 14 cents a share on revenue of $939.5 million.

In a shareholder letter, Palantir CEO Alex Karp quoted himself and C.S. Lewis and argued that his company "will become the dominant software company of the future."

By the numbers for the second quarter:

  • Palantir reported second quarter US revenue of $733 million, up 68% from a year ago. US commercial revenue was $306 million, up 93% from a year ago. US government revenue was up 53% from a year ago to $426 million.
  • The company closed 157 deals of at least $1 million and 66 of at least $5 million.
  • Palantir landed 42 deals worth more than $10 million.
  • In the quarter, Palantir closed $2.27 billion of total contract value, up 140% from a year ago.

As for the outlook, Palantir projected third quarter revenue between $1.083 billion and $1.087 billion with adjusted income from operations of $493 million to $497 million.

For 2025, Palantir projected revenue between $4.142 billion and $4.15 billion. US commercial revenue for 2025 will top $1.302 billion, up about 85% from a year ago. Adjusted income from operations will be between $1.912 billion and $1.92 billion.


Wayfair starts to reap rewards from optimization, tech replatforming efforts

Wayfair has optimized its technology and operations to the point where it can grow both its top and bottom lines.

The home retailer delivered net income of $15 million, or 11 cents a share, on revenue of $3.3 billion, up 5% from a year ago. Non-GAAP earnings for the company were 87 cents a share.

For Wayfair, the results were its best since 2021. The company has been working through a Covid-era boom-and-bust cycle. Niraj Shah, CEO of Wayfair, said "we can and will grow profitably while taking significant share in the market."

A big part of that profitability push has been a technology overhaul centered on a move to Google Cloud. When we last checked in with Wayfair, it was in the middle innings of a replatforming. Now that work is largely complete.

Shah added that Wayfair aims to invest in the future, grow current profitability and maximize free cash flow in the long run. Those goals will require continuous optimization, efficiency, AI and new features.

"Our model allows us to service the products with the best value for our customers, enabling us and our suppliers to gain share and grow revenue," said Shah.

Shah described the furniture and home goods market as "stable-ish." The higher-end market is stronger than the mass market, but overall demand is "bumping along the bottom after a few years of declines."

Here's a look at Wayfair's big initiatives and how they are flowing through to the bottom line.

Supply chain and logistics. Wayfair takes an inventory-light approach to its supply chain, but CastleGate, the company's proprietary logistics network, is performing well. The network covers inbound logistics, storage and outbound fulfillment.

CastleGate Forwarding is an inbound logistics and ocean freight forwarding operation that gives suppliers volume rates with carriers. Wayfair consolidates goods to ship and smaller suppliers have been increasing CastleGate usage.

According to Wayfair, CastleGate Forwarding saw a 40% year-over-year increase in total volume in the second quarter, and long-term inbound commitments are up 30% from a year ago. Wayfair is growing revenue by offering a third-party logistics service tailored to the home category.

Replatforming to Google Cloud is now mostly complete. CFO Kate Gulliver said that Wayfair's second quarter free cash flow of $230 million was its strongest since the third quarter of 2020. Capital expenditures were lower due to a technology restructuring after the replatforming.

Customer and supplier experience improvements. Shah said that Wayfair has a 2,500-person technology organization that had been focused on replatforming core systems to Google Cloud. Now that the migration is complete, that team is focused on increasing product velocity and innovation.

"Now that we're very far into that replatforming effort, a lot of the cycles of the team are now back building features and functions to improve the customer experience and supplier experience," said Shah. "And you see that affect things, whether it's conversion rates, enabling suppliers to do more, launching new genAI-powered features and delivering efficiency gains in our operations."

Surgical ad spending with a focus on ROI. Wayfair in the fourth quarter began investing in influencers on Instagram and TikTok. Shah said that influencer investment has performed well, but the spend is modest.

More importantly, Shah said Wayfair has lowered ad costs through "a lot of testing and enhancements to some of our measurement models." "We've also been able to identify pockets of our spend, which we do not believe were contributing at the economic payback we wanted," said Shah. "Even though they were creating some revenue for us, it was not at a cost level that would make sense to us."
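The kind of spend screening Shah describes can be sketched as a simple payback filter over channels: keep spend whose attributed revenue clears a target multiple, flag the rest. The channel names and figures below are made up purely for illustration:

```python
# Illustrative payback screen of the sort Shah describes: flag ad spend
# pockets whose revenue-per-dollar falls below a target payback multiple.
# Channel names and numbers are hypothetical.
def flag_underperforming(channels: dict[str, tuple[float, float]],
                         target_payback: float = 3.0) -> list[str]:
    """Return channels where attributed revenue / spend < target multiple."""
    return [name for name, (spend, revenue) in channels.items()
            if revenue / spend < target_payback]

channels = {
    "search":     (100.0, 420.0),  # (spend, attributed revenue) in $K
    "display":    (80.0, 180.0),   # generates revenue, but below target
    "influencer": (20.0, 75.0),
}
print(flag_underperforming(channels))  # → ['display']
```

This captures Shah's point exactly: a channel can "create some revenue" and still be cut because the cost per dollar of return doesn't clear the bar.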

AI: Generative and agentic. Shah said there are multiple consumer-facing areas where the experience is being improved by generative AI; search results, product descriptions and imagery, and catalog accuracy are just a few.

Shah said Wayfair in the long run is looking at using AI and agents to guide customers because "there's a lot more product discovery and content around trends." Wayfair is also developing features like Decorify and Muse to give shoppers personalization based on price and style.

The company is also looking at partnerships and ways to work with LLM companies such as OpenAI, Google Gemini and Perplexity, said Shah.

Wayfair has spent recent years rightsizing its organization and teams. That restructuring has been a distraction, but now Wayfair teams can focus on new programs including Wayfair Rewards, a loyalty program; Wayfair Verified, a set of goods hand-selected and inspected by Wayfair; and logistics efforts.

"The recipe keeps getting better, the technology cycles are available to drive the business forward, and we've been launching and growing new programs," said Shah.
