OpenAI CEO Sam Altman said the company's enterprise unit is doing well as businesses continue to invest in large language models and, increasingly, in AI agents.
Speaking at Snowflake Summit 2025, Altman said the enterprises that have learned to iterate quickly are doing best. The challenge for enterprises is that AI is changing so quickly, a dynamic that usually favors the agile, said Altman, who appeared on stage with Snowflake CEO Sridhar Ramaswamy.
"There's still a lot of hesitancy, and the models are changing so fast, and there's always a reason to wait for the next model," said Altman. "But when things are changing quickly, the companies that have the quickest iteration speed, make the cost of mistakes low and have a high learning rate win."
He added that enterprises are clearly making early bets.
Altman said that a year ago, he would have recommended that startups run toward generative AI while enterprises wait for more maturity and opt for pilots over production. Today, generative AI is more mainstream, and OpenAI's enterprise business is seeing strong demand.
"Big companies are now using us for a lot of stuff. What's so different? They say it just took a while to figure it out. That's part of it. But the models just works so much more reliably. It does seem like sometime over the last year we hit a real inflection point for the usability of these models," said Altman.
Altman's comments landed a few days ahead of OpenAI's rollout of connectors to Dropbox and OneDrive for ChatGPT Team, Enterprise and Education users. The company also said Model Context Protocol (MCP) support is coming to Pro, Team and Enterprise accounts.
OpenAI said it has 3 million paying business users, up from 2 million in February.
Altman added:
"I think we'll be at the point next year where you can not only use a system to automate products and services, but the models will be able to figure out things that teams of people on their own can't do. And the companies that have gotten experience with these models are well positioned for a world where they can use an AI system to solve the most critical project. People who are ready for that, I think will have another big step change next year."
According to Altman, LLMs are more like interns today, but at some point soon they will be "more like an experienced software engineer."
"You hear about companies that are building agents to automate most of their customer support, sales and any number of other things. You hear people who say their job is to assign work to agents and look at quality and see how it fits together as they would with a team of relatively junior employees. It's not evenly distributed yet, but that's happening. I would bet next year that in some limited cases, at least in some small ways, we start to see agents that can help us discover new knowledge or can figure out solutions to business problems that are non-trivial. Right now, enterprises are focused on repetitive cognitive work to automate over a short time horizon. As that expands to longer time horizons and higher and higher levels, you get an AI scientist, an AI agent that can discover new science. That will be a significant moment in the world."
Other takeaways from Altman:
The ideal model. Altman said the ideal is "a very tiny model that has superhuman reasoning capabilities." "It can run ridiculously fast, with 1 trillion tokens of context and access to every tool you can possibly imagine. And so it doesn't kind of matter what the problem is. Doesn't matter whether the model has the knowledge or the data in it or not," said Altman, who noted that framework isn't something OpenAI is about to ship.
Altman added that using the models as a database is "sort of ridiculous" and expensive.
Prioritizing compute. Altman said that enterprises using the latest models are seeing real returns, and that unlimited compute could solve hard problems, though that's not realistic. Companies will get to the point where they will "be willing to try a lot more compute for the hardest problems and most valuable things," said Altman.