OpenAI released o3-pro for ChatGPT Pro and Team users, calling it its most capable model yet, as it cut prices for o3 by 80%. The moves come as OpenAI CEO Sam Altman ponders a 2030 where the limitations of energy fall away and AI superintelligence becomes almost free.

That's a mouthful, but Altman and OpenAI are arguing that we're at an event horizon, an inflection point, and probably a lot of revenue growth. For enterprises, the takeaway is that OpenAI expenses may decline in exchange for volume.

Let's recap the headlines:

  • OpenAI dropped o3 pricing by 80%. For developers, o3 may not be the latest and greatest, but it'll be good enough for many use cases.
  • OpenAI is scaling as it breaks away from its Microsoft partnership. Reuters reported that OpenAI is going to use Google Cloud for compute. That move would give OpenAI a multi-cloud approach that should meet its needs better than an exclusive with Microsoft Azure.
  • OpenAI launched o3-pro. The company said: "In expert evaluations, reviewers consistently prefer o3-pro over o3 in every tested category and especially in key domains like science, education, programming, business, and writing help. Reviewers also rated o3-pro consistently higher for clarity, comprehensiveness, instruction-following, and accuracy."

That barrage of headlines, however, is overshadowed by Altman's blog post, which laid out his latest thoughts on AI superintelligence, energy consumption, and how many resources a ChatGPT query consumes today.

The post is worth a read. Here are a few takeaways.

Energy will be plentiful. Altman: "In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else."

But ChatGPT usage isn't killing the environment today. Altman: "As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.)"
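Altman's comparisons check out if you assume typical appliance wattages. A quick sketch of the arithmetic (the ~1,200 W oven and ~10 W LED bulb figures are my assumptions, not from the post):

```python
# Sanity-check the per-query figures Altman cites, using assumed
# appliance wattages: ~1,200 W for an oven, ~10 W for an efficient bulb.
QUERY_WH = 0.34                      # watt-hours per query, per the post
query_joules = QUERY_WH * 3600       # 1 Wh = 3,600 J -> 1,224 J

oven_seconds = query_joules / 1200   # ~1.02 s of oven time
bulb_minutes = query_joules / 10 / 60  # ~2.0 minutes of bulb time

QUERY_GALLONS = 0.000085             # water per query, per the post
teaspoons = QUERY_GALLONS * 768      # 768 US teaspoons per gallon

print(f"{oven_seconds:.2f} s of oven time")       # -> 1.02 s
print(f"{bulb_minutes:.1f} min of bulb time")     # -> 2.0 min
print(f"1/{1/teaspoons:.0f} teaspoon of water")   # -> 1/15 teaspoon
```

So "a little over one second" of oven time, "a couple of minutes" of bulb time, and "one fifteenth of a teaspoon" are all internally consistent with the 0.34 Wh and 0.000085-gallon figures.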

The self-reinforcing loops have already started, and what was novel months ago is now routine. Altman: "The economic value creation has started a flywheel of compounding infrastructure buildout to run these increasingly-powerful AI systems. And robots that can build other robots (and in some sense, datacenters that can build other datacenters) aren’t that far off."

Humans will adapt. Altman: "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before. We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big."

Altman does note the challenges. He said society will have to solve "the alignment problem" so we can guarantee that "AI systems learn and act towards what we collectively want." Altman also said society will have to make sure superintelligence is cheap and not concentrated in the hands of any one person. The world needs to start a conversation about where the boundaries are and get aligned.

My take

  1. Altman's take that cost and scale will be solved is believable, but the timeline is debatable. Can energy grids be revamped in five years?
  2. The idea that society is going to have a reasonable discussion about superintelligence and align on what we collectively want from AI is naive if not batshit crazy. Governments on a global basis barely function now, and consensus is in short supply.
  3. Societal impacts are glossed over throughout the post. Altman's take that humans will adapt may apply to a sliver of the population.
  4. This quote made me chuckle: "In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes."
  5. This quote struck me as blasé: "We will figure out new things to do and new things to want, and assimilate new tools quickly (job change after the industrial revolution is a good recent example). Expectations will go up, but capabilities will go up equally quickly, and we’ll all get better stuff. We will build ever-more-wonderful things for each other. People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don’t care very much about machines."
  6. Either way, Altman's right that potentially wonderful and wrenching change is coming. Both can be true.