IBM TechXchange 2025: Big Blue connects agentic AI, mainframe dots, partners with Anthropic

IBM said its Spyre AI accelerator and Telum II processor are generally available, outlined a partnership with Anthropic and layered agentic AI tools throughout its offerings.

The news outlined at IBM's TechXchange 2025 in Orlando revolved around operationalizing AI in the enterprise. Here's a look at the notable news.

The IBM Spyre Accelerator is now generally available for the IBM z17 and will bring LLMs to the IBM Z mainframe environment. IBM also said its software products, including IBM watsonx Assistant for Z, AI Toolkit for IBM Z and IBM LinuxONE, and Machine Learning for IBM z/OS, will use Spyre for on-prem deployments.

The Spyre Accelerator has 32 AI-optimized processing cores to support LLMs on the mainframe. Paired with IBM's Telum II processor, the Z platform can process up to 450 million inference operations using multiple AI models for credit card fraud detection, the company said.

IBM said it will layer Anthropic's Claude LLMs into its software portfolio starting with its latest integrated development environment (IDE). The two companies are aiming to use Claude throughout the enterprise software development lifecycle. IBM said more than 6,000 early adopters in the company are using the new IDE, which is in preview with IBM customers.

The IDE, called Project Bob, includes tools for application modernization, code generation and review, and security embedded into workflows.

IBM watsonx Orchestrate gets new agentic AI tools, including reusable workflows that sequence multiple AI agents, Langflow integration, a catalog of prebuilt agents for procurement, HR, finance, supply chain and sales, and prebuilt customer service agents.

Watsonx Orchestrate also includes agent observability, governance and production monitoring.

IBM delivered a new release of watsonx Assistant for Z to improve the mainframe user experience with an AI chatbot grounded in Z expertise.

 

Dell Technologies ups revenue outlook due to AI infrastructure

Dell Technologies raised its annual revenue growth target to 7% to 9% as it ramps its AI infrastructure business.

The company outlined its guidance at its securities analyst meeting. Dell maintained the third-quarter guidance delivered when it reported second-quarter earnings.

Dell said its updated annual revenue growth will be in the 7% to 9% range, up from 3% to 4%. The company also said it will raise its dividend 10% or more annually through 2030. Annual non-GAAP earnings growth will be 15% or better.
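For a sense of what those targets compound to, here is a quick sketch. The 7% to 9% revenue range and 10% dividend growth come from Dell's guidance; the 1.0 starting values and the five-year horizon (five annual raises through 2030) are normalized assumptions, not Dell's actual figures.

```python
# Sketch: what Dell's stated growth targets compound to over five years.
# Starting values are normalized to 1.0 (placeholders, not Dell's figures).

def compound(start: float, rate: float, years: int) -> float:
    """Return the value after growing `start` at `rate` for `years` years."""
    return start * (1 + rate) ** years

# A dividend raised 10% annually for five years grows ~61% in total.
dividend_multiple = compound(1.0, 0.10, 5)   # ~1.61x

# Revenue at the low (7%) and high (9%) ends of the guidance range.
rev_low = compound(1.0, 0.07, 5)    # ~1.40x
rev_high = compound(1.0, 0.09, 5)   # ~1.54x

print(f"Dividend multiple: {dividend_multiple:.2f}x")
print(f"Revenue multiple: {rev_low:.2f}x to {rev_high:.2f}x")
```

In other words, if Dell hits the midpoint of its range, revenue grows roughly 40% to 54% over the period while the dividend grows faster than revenue.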

CEO Michael Dell said, "Customers are hungry for AI and the compute, storage and networking we provide to deploy intelligence at scale." He added that the AI "opportunity ahead is massive."

The company argued that it is well positioned for AI data center infrastructure as well as PCs.

In a presentation, Dell laid out the following points:

  • AI is in the early stages of adoption and traditional data centers will be key to AI deployments.

  • AI inference will support growth going forward and enterprises will increasingly go with low-cost inference at the edge and disaggregated architectures. PCs will also play a role in edge AI.

  • Traditional server and storage growth will be driven by AI workloads.

CoreWeave acquires Monolith, eyes industrial AI use cases

CoreWeave said it will acquire Monolith AI Limited, which specializes in AI for engineering in industrial and manufacturing companies.

Terms of the deal weren't disclosed, but the purchase is notable because CoreWeave is looking to take its broader AI infrastructure business into vertical markets, where the customer base is stickier.

Indeed, Monolith has a strong customer base including BMW, Mercedes-Benz, Honda, Nissan and Siemens to name a few. Monolith's platform is used for simulation and testing for physics and engineering solutions. CoreWeave has been rounding out its business with acquisitions including OpenPipe and Weights & Biases.

With CoreWeave, Monolith will get access to an AI stack for its customers. According to CoreWeave, the combined company will give customers the ability to speed up R&D, product development and design. CoreWeave Chief Strategy Officer Brian Venturo said AI can transform manufacturing.

"Monolith was founded to put AI directly into the hands of engineers, enabling them to create breakthrough technologies. Joining CoreWeave will allow us to scale that mission dramatically," said Dr. Richard Ahlfeld, CEO of Monolith, in a statement.

Holger Mueller, an analyst at Constellation Research, said:

"This is CoreWeave's first entry into the AI applications space, and it may highlight a potential future concern on workloads. The higher in-house, organic workloads are for CoreWeave relative to volatile AI provider workloads, the more stable CoreWeave's utilization of data center capacity."

Monolith's platform embeds AI and machine learning directly into engineering workflows and reduces the need for physical testing. Here are a few screenshots of Monolith’s platform.

OpenAI sets SDKs for app integrations, agentic AI building blocks

OpenAI launched the Apps SDK, which connects applications directly to ChatGPT; AgentKit, to speed up AI agent development; ChatKit, to bring OpenAI-powered chat to apps and websites; and the general availability of Codex. The company also said GPT-5 Pro and a smaller voice model are available in the API.

The company also said that it has 800 million weekly users, 4 million developers and 8 billion tokens processed per minute on OpenAI's API.
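Those throughput numbers are easier to grasp scaled up. The conversion below is straight arithmetic on the per-minute figure OpenAI cited; the daily and annual totals are extrapolations, not OpenAI-reported numbers.

```python
# Scale OpenAI's stated API throughput (8 billion tokens per minute) to
# longer horizons. These are unit conversions of the cited figure, not
# OpenAI-reported daily or annual totals.

TOKENS_PER_MINUTE = 8_000_000_000

tokens_per_day = TOKENS_PER_MINUTE * 60 * 24    # minutes -> hours -> days
tokens_per_year = tokens_per_day * 365

print(f"Per day:  {tokens_per_day:,}")    # 11,520,000,000,000 (~11.5 trillion)
print(f"Per year: {tokens_per_year:,}")   # 4,204,800,000,000,000 (~4.2 quadrillion)
```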

Speaking at OpenAI's Dev Day, CEO Sam Altman outlined the company's latest developer releases and placed the company in the middle of the software ecosystem. Demos highlighted integrations with Canva, Figma, Coursera and callouts for enterprise software vendors like HubSpot. The HubSpot mention eased concerns about OpenAI targeting SaaS directly.

Altman's keynote revolved around creating applications without code, enabling voice interfaces and using ChatGPT as an interface to third-party software vendors. Altman addressed developers directly:

"We're going to show you how we're making it possible to build apps inside of ChatGPT and how we can help you get a lot of distribution. We're going to show you how building agents is going to be much faster and better. You'll see how we're making it easier to write software. We think this is the best time in history to be a builder. It has never been faster to go from idea to product."

Key points from OpenAI's Dev Day announcements:

  • Apps SDK is in preview and Model Context Protocol (MCP) is the connective tissue between ChatGPT and applications. "With the Apps SDK, you get the full stack. You can connect your data, trigger actions, render a fully interactive UI and more," said Altman. "You get full control over your back-end logic and front-end UI. We've published the standard so that anyone can integrate the Apps SDK. Your apps can reach hundreds of millions of ChatGPT users."

  • OpenAI will highlight partner apps built via Apps SDK. "We'll also release a directory that users can browse in addition to discovery and conversation," said Altman. For now, the launch applications for the Apps SDK are decidedly consumer.
  • The company is looking to make it easier to create AI agents. "It's hard to know where to start, what frameworks to use, and there's a lot of work. There's orchestration, eval loops, connecting tools, building a good UI, and each of these layers adds a lot of complexity before you know what's really going to work," said Altman.
  • AgentKit will offer a set of building blocks to build, deploy and optimize agentic workflows. OpenAI said it is targeting individual developers as well as enterprises. Altman highlighted Agent Builder, a canvas to build agents, design logic steps and test workflows. Agent Builder is built on top of OpenAI's Responses API. ChatKit will bring an embeddable ChatGPT interface. AgentKit will also include the OpenAI Connector Registry, which will provide trusted access to data and agents. There's also a control panel for administrators.
  • "Almost all new code written at OpenAI today is written by Codex users," said Altman. He said more features will be announced shortly and cited a big Codex deployment at Cisco.
  • GPT-5 Pro will address use cases in industries such as finance, legal and healthcare. The smaller voice model is designed to address voice applications at a lower cost.
HOT TAKE: Magnify.io Adds New Agentic Tools to Optimize Post-Sale Revenue Growth

Post-sales is a critical phase of the customer lifecycle, offering the most opportunities for expansion revenue and for optimizing customer lifetime value. But these signals often get lost in silos or are simply ignored, leading to lost revenue opportunity. To address this issue, Magnify, which bills itself as a customer growth optimization platform focused on post-sales optimization, has launched its new AI Assistant, an agent purpose-built to optimize key elements of post-sales revenue generation. According to Magnify, the new tool is a proactive, intelligent agent designed to find and execute growth opportunities automatically across the customer lifecycle.

A pervasive issue in growth optimization is data silos, which typically prevent teams from identifying both positive and negative customer signals. Most AI tools are still bolted onto a single system, such as a CRM or customer success platform, delivering limited insights. And AI analytics can deliver complex analysis, but typically without a direct path to action. Magnify CEO Josh Crossman acknowledged this industry gap, stating, “Our industry has talked about AI for years, but it’s not delivered revenue growth and real cost savings. With Magnify’s AI Assistant, we’re delivering something entirely different: an agent that doesn’t just analyze data, it acts on it.”

The Assistant monitors granular customer signals, autonomously forecasts outcomes and can execute the right plays across every digital touchpoint. The agent combines GPT-powered personalization with seamless multi-platform orchestration to identify and act on growth opportunities. Crossman likens it to “adding an analyst, data scientist, and growth marketer to the team who never sleeps, scales infinitely, and stays focused on outcomes that matter: retention, expansion, and growth.”

The Magnify AI Assistant achieves this by orchestrating three core, interlocking capabilities:

  • Autonomous Forecasting: Predicts churn, conversion and expansion opportunities quarters in advance, updating in real time as customer behavior shifts. It also surfaces the drivers of churn and expansion, along with recommended next steps for accounts and users.

  • Universal AI Research: Uses AI to search in seconds across product, marketing, sales and CS systems to find hidden insights, such as disengaged users or at-risk accounts. Users can ask any question to analyze entire segments, user cohorts or individual accounts.

  • Unlock Productivity Gains with AI Automation: Triggers actions in any connected system via Magnify, automatically running multi-platform motions across all connected systems. Users can create campaigns in minutes using AI or launch one-off actions, automating email, in-app messages, support and tasks to engage all users and accounts -- all personalized with GPT-quality messaging. The goal is to get rid of tedious, repetitive tasks, unlocking productivity gains for post-sales teams.

The Magnify agentic approach seeks to eradicate tedious, repetitive tasks, allowing post-sales organizations to focus more on revenue and expansion opportunities. By automating engagement with all users and accounts, human talent is freed up to focus on high-touch, strategic engagement, accelerating measurable post-sales revenue growth without the need to hire more staff.

For growth leaders looking to optimize post-sales growth, agentic tools can speed the reinvention of GTM motions. However, many of the AI agents offered by CRM and customer success tools focus only on the data inside their respective systems, and they can significantly increase the cost of ownership of core systems. Tools like Magnify can be a strong “easy button” for pulling together data from disparate GTM apps, adding agentic flows to take action across sales, customer success and other revenue stakeholders. In this new age of AI, growth leaders need to identify the fastest paths to success, mitigating risk while improving outcomes.

 

Verizon names Dan Schulman CEO

Verizon named Dan Schulman CEO effective immediately. Schulman, former CEO of PayPal, is a Verizon board member and a telecommunications veteran.

Schulman replaces Hans Vestberg, who will stay with the company in an advisory role for a year. Verizon also named Mark Bertolini chairman of the board.

The leadership change comes as Verizon is transforming via the acquisition of Frontier Communications. Vestberg led Verizon through the 5G transition.

Schulman brings an interesting mix of experience to Verizon. He was CEO of PayPal, but also had leadership roles at AT&T, Priceline, Virgin Mobile and American Express.

In a statement, Schulman said:

"Verizon is at a critical juncture. We have a clear opportunity to redefine our trajectory, by growing our market share across all segments of the market, while delivering meaningful growth in our key financial metrics."

Verizon reaffirmed its outlook for 2025.

Here's a look at Schulman's experience.

OpenAI, AMD ink big GPU deal: What it means for the rest of us

OpenAI just made AMD a viable counterweight to Nvidia for GPUs. OpenAI said it has inked a 6-gigawatt deal with AMD to provide AMD Instinct GPUs for its AI buildout.

Under the terms of the deal, OpenAI's first gigawatt deployment of AMD Instinct MI450 GPUs starts in the second half of 2026. OpenAI will also get a warrant to acquire up to 160 million shares of AMD that will vest as milestones are reached. The first tranche vests when the first gigawatt is deployed with additional vesting as OpenAI builds to 6 gigawatts.

OpenAI also said there is more vesting tied to AMD stock price targets and to deploying Instinct GPUs at scale.

The OpenAI-AMD deal is different from the OpenAI-Nvidia deal. Nvidia invested in OpenAI to help fund purchases of GPUs and AI infrastructure. The OpenAI-AMD deal doesn't provide cash up front, but it aligns interests.

AMD CEO Dr. Lisa Su said the partnership will create "a true win-win enabling the world’s most ambitious AI buildout and advancing the entire AI ecosystem." OpenAI CEO Sam Altman said the AMD deal gives it the ability to accelerate its plans.

Constellation Research analyst Holger Mueller said:

"OpenAI is signing deals left and right. The AMD deal is different, as it is the first explicit and only inference deal, as well as equity deal the AI vendor has stuck. It's a big win for AMD that could not get to scale in the data center for AI yet, the big question is now which data center vendor will get the workload. The equity aspect is also interesting. AMD is giving up a lot here for an initial deal. It is also clear that the OpenAI leadership is scared from compute capacity challenges and wants to avoid them at all cost. The concern is that it's unclear how will OpenAI be able to pay for all the performance obligations. We'll worry about tomorrow when it's tomorrow."

Mueller isn't kidding about the questions about payment. OpenAI's deals of late via Goldman Sachs include:

That spending is against OpenAI's 2025 revenue target of $13 billion, according to The Information.

Clearly, the OpenAI deal is huge for AMD, which now has solidified itself as a viable second option for GPUs. The OpenAI-AMD deal is also big for anyone procuring AI compute.

Here's why:

  • Nvidia has had a lock on the AI infrastructure market and those nice profit margins are being funded by IT buyers. Enterprises have been waiting for two years to see Nvidia competition.
  • As more AI infrastructure is deployed on-premises and at the edge, AMD will be a natural option for enterprises.
  • AMD will land more tier-1 customers and cloud instances.
  • The constrained GPU market will be less constrained with AMD and cloud hyperscalers' custom chips adding competition.
  • AMD is likely to land more deals with AI-centric cloud providers.
  • AMD's ROCm platform will be more viable against the Nvidia software stack, which is where the lock-in will really occur.
  • Nvidia still has the installed base and dominance, but will arguably face its first real competition in the market.

 

AI Forum Washington, DC 2025: Everything we learned

Constellation Research’s AI Forum in Washington DC featured 19 sessions, AI thought leaders and practitioners, and a community drinking from a firehose.

Here’s a look at the takeaways.

AI as a 1960s-ish moonshot?

As you would expect at an AI Forum held in Washington DC, there was a good bit of talk about AI as a battle between the West and China and the need for more power and less regulation.

Key points from the sessions:

  • Data centers are becoming critical national infrastructure, with AI workloads expected to consume one-third of capacity. The global race for compute capacity requires 300 gigawatts in 4.5 years, with power demand doubling every 100 days. Countries need comprehensive digital transformation strategies to remain competitive.
  • AI is viewed as "a new industrial base for the United States" that will determine whether America maintains global leadership or surrenders it. Speakers emphasized that leadership in AI is "never permanent" and the pace has gone "supersonic," making this a defense perimeter issue rather than just an economic opportunity.
  • The fragmented regulatory approach in the US contrasts with Europe's more restrictive AI Act, which is driving companies to relocate operations. The need for balanced policies that encourage innovation while providing appropriate guardrails remains a central challenge.
  • Multiple panelists agreed that AI was going to affect jobs. These panelists also agreed that no government has an answer for the job losses.

Pragmatic use cases

David Giambruno, CEO of Nucleaus, has seen his share of technology transformations. We covered Giambruno's approach to cutting IT costs last year.

To roll out real AI use cases, you'll have to speak to value first, said Giambruno, who also said you have to figure out how you're going to build and on what platform.

"How you build matters both in cost, speed to value, and how much glass you want to chew," he said. In other words, pick one platform and operating model and run.

Once that platform is picked give developers a safe place to experiment and see what's possible.

Mukund Gopalan, Global Chief Data Officer at Ingram Micro, said every use case for AI needs to have "a clear line of sight to the top line or bottom line."

Gopalan said every use case is different, but the guiding principle is that they need to save costs, drive revenue or save time.

Scott Gnau, Vice President of Data Platforms at InterSystems, cited a use case that did all three: ambient listening AI that plugged into the workflow of electronic health records. "A physician could have a conversation, look a patient in the eye and have everything captured and get a list of recommendations from an AI agent," said Gnau. "This use case takes an existing process and makes it fully optimized yet human."

Use cases that turned up on a panel:

  • Data cleansing and finding out where sensitive data resides.
  • Data engineering.
  • Pull logic out of stored procedures.
  • Transforming legacy applications with AI.
  • Knowledge management applications.

Nilanjan Sengupta, SVP / Industry Market Director, Public Sector and Healthcare, Americas at Thoughtworks, said the software development lifecycle is a clear use case for AI agents. "The main trend we're seeing is legacy modernization across the entire enterprise," he said.

Anand Iyer, Chief AI Officer at Welldoc, said his company has created a large sensor model that takes sensor data and uses it to predict glucose values in the hours ahead. "When we think about where healthcare is headed, a lot of us are trying to get to the prevention piece," said Iyer.

Peter Danenberg, a senior software engineer at Google DeepMind who leads rapid prototyping for Gemini, said enterprises have been expanding the use case roster. Danenberg said there has been a shift in how companies approach foundation models, from reluctance to adoption. Companies are focusing on low-hanging fruit for use cases, but these add up. "Anything where you need to extract structured data from unstructured data is beautiful low hanging fruit you can get started with," he said.

Proofs of concept are widely panned

If there was a punching bag at AI Forum Washington DC it was the proof of concept.

CxOs repeatedly panned POCs because they were an excuse for not doing the work upfront, sucking in funds and creating rabbit holes. "Before you get to the POC we often trip over ourselves with understanding our data," said one federal government AI leader. "Rather than doing POCs, do a discovery sprint for AI and it will quickly unveil where the holes in your data are."

Sunil Karkera, Founder of Soul of the Machine, is leveraging agentic AI to outpace much larger companies. "We solve boring problems and it's exciting," said Karkera. He also doesn't believe in proofs of concept and pilots: prototypes can be created in that first customer meeting and can rapidly go to production. "We are using an entirely end-to-end AI toolchain," said Karkera. "Vibe coding is about 10% to 20% in the prototyping phase. Then it's basically deep architecture. Engineering AI is really hard because most of the work is context engineering, and it's not straightforward."

Are chief AI officers a thing?

Take a room with a few chief AI officers and ask them whether there's staying power in their titles and you're likely to get some interesting answers.

The takeaways from a panel:

  • The chief AI officer role is needed now but will be structured into the organization.
  • CAIOs revolve around a centralized approach, but will go away once AI is decentralized across an org.
  • CIOs will need to work with CAIOs for the foreseeable future on frameworks, tools, platforms and governance.
  • Enterprises with CAIOs need to balance business acumen and technical proficiency.
  • It's not a vanity title...yet.

Build vs. Buy

When it came to building AI applications, CxOs at the AI Forum were split on build vs. buy. AD Al-Ghourabi, a senior technology leader, said the right answer is to build and buy. "Buy for parity and build for differentiation," he said. "A lot of AI capabilities and LLMs are now commodities, but anything between your data and a decision is your differentiator and core."

In recent years, buying from a big vendor offered more predictability over build and best of breed. AI has changed that equation. You can prototype, test, build and deliver in the time a large tech vendor goes through the procurement cycle.

Others argue that enterprises should buy and push their vendors to innovate. There's a huge gap between prototype and production.

Tracey Cesen, Founder & CEO of Forever Human.ai, said the problem with building is that "software is a living, breathing organism so it requires care and feeding." That care and feeding also means you continuously question whether it should have been built.

In the end, the build vs. buy debate boils down to flexibility. Don't get caught in an "ERP data prison," a single cloud delivery model or one AI vendor. Enterprise buyers need to take a portfolio approach and acquire components that enable flexibility and experimentation. They should also tier vendors based on approaches and business priorities.

Nicolai Wadstrom, Partner, Co-Head of Ares Venture Capital and AI Innovation Groups at Ares Management, said the build vs. buy debate really comes down to doing both.

"Build vs buy is wrong because you need to build, partner and buy," said Wadstrom. "You want to buy things commoditized. You want to build things where you can pour in proprietary knowledge and build a moat. And when you need higher skills you partner. The complacent thinking of a traditional CIO approach where I buy someone else's technology roadmap is over. You're not going to be competitive. Understand the drivers and competitors, define your problem, opportunity landscape and drive a technology roadmap."

On-premises still matters

CxOs noted that the conversation around AI often includes an assumption of cloud computing. The reality is that on-premises may drive more returns as inference becomes the main AI workload.

"Don't discard on-premises. A lot of people are doing AI on-premises and focused on it," said one CxO.

Why you shouldn't discount on-premises AI:

  • It's more effective for small models.
  • Cloud costs can add up.
  • On-premises AI may make more sense in terms of operations, privacy and security.
  • Edge use cases are likely to make up more of the AI landscape.
  • There's a continuum of AI deployment models.

Future of work

The future of work was hotly debated. Most CxOs agreed that corporations will use fewer employees and more AI agents and robots. The impact on education, income and society will be large.

A few moving parts to ponder:

  • New management structures will need to emerge to manage humans and digital workers.
  • Governments aren't addressing AI's impact on jobs but will need to shortly--like in the next 12 to 18 months.
  • Education will have to evolve and public institutions aren't prepared. Look for personalized education programs to emerge powered by AI.
  • Professional training will become more important than university education.
  • Some attendees noted that humans have always found new roles amid new technology trends.
  • If AI uplevels the workforce you’ll see two side effects: First, everyone will move to the median in terms of performance. Second, you’ll have a shortage of experts since few workers will actually have the 10,000 hours required to be an expert.

Looming questions

Between the conversations in between panels, there were a few looming questions to ponder. These are worth some rumination.

  • Have LLMs hit the wall? In a few talks, it was noted that LLMs have already ingested all of the publicly available human data. Synthetic data leads to degradation over time as it becomes further removed from the original.
  • Are we in an AI bubble? We covered this one before, but the worries aren’t going anywhere. See: Enterprise AI: It's all about the proprietary data | Watercooler debate: Are we in an AI bubble?
  • Is our brute force compute approach misguided? In the US, the running assumption is that trillions of dollars, data centers the size of Manhattan and millions of GPUs are the way to get to AI nirvana. However, there has to be more elegant engineering and innovation out there. Are we mired in AI factory groupthink?

 

Millennial Samurai, AI Futures, and Why Culture Still Wins | DisrupTV Ep. 413

This week on DisrupTV, we caught up with visionary leaders shaping the future:

  • George J. Chanos, Author, Speaker, and former Attorney General of Nevada
  • Brian Vellmure, Executive, Builder, Advisor, Board Member, and Investor
  • Laura Hamill, PhD, author of The Power of Culture: An Economist Edge Book

In this episode of DisrupTV, we explore the forces shaping our future — from personal empowerment and the “Millennial Samurai” mindset, to AI’s disruptive impact on labor, energy, and business models, to the critical role of culture in organizations. Our guests share visionary perspectives on where humanity is headed, the choices leaders must make, and why culture remains a defining factor for success in the 21st century.

Key Takeaways

From the discussion, here are the top actionable insights:

  • Adapting to Rapid Change: George Chanos emphasizes embracing uncertainty, learning from failure, and finding opportunity in adversity to navigate a fast-changing world.
  • AI and Labor Markets: Brian Vellmure explores how AI will reshape labor dynamics, potentially creating winner-takes-all scenarios. Organizations need to allocate resources strategically to remain competitive.
  • Intentional Culture: Laura Hamill highlights the gap between stated and actual culture, urging organizations to explicitly define values, behaviors, and expectations to create alignment and autonomy.
  • Energy and Investment: The episode also touches on investing in energy and AI sectors to address the constraints of computing power and sustainable growth.
  • Personal Empowerment: Chanos shares lessons from his career, including arguing before the U.S. Supreme Court, and emphasizes emotional intelligence and unity as critical to overcoming existential threats.
  • Future-Focused Strategies: Guests discuss tokenization, hybrid work, and the evolving enterprise software landscape, highlighting the need for adaptability and deliberate strategy.

Final Thoughts

The episode underscores that organizational culture, AI adaptation, and personal empowerment are inseparable pillars of success in the modern enterprise. Leaders must intentionally define cultural expectations, anticipate AI’s impact on labor and markets, and cultivate emotional intelligence to drive sustainable outcomes.

By embracing the Millennial Samurai mindset—strategic, adaptable, and values-driven—individuals and organizations can not only survive but thrive in a rapidly evolving technological landscape.

Related Episodes

If you found Episode 413 valuable, here are a few others that align in theme or extend similar conversations:

 


OpenAI's SaaSageddon fears need perspective

OpenAI is eyeing software as a service as it aims to build applications around ChatGPT, and it has to be a SaaS disruptor to justify at least some part of its valuation. Enterprises should take note of OpenAI's potential role, but keep perspective.

In a blog post on Monday, OpenAI's Giancarlo Lionetti outlined how OpenAI is running on OpenAI. Anyone familiar with enterprise software knows this vendor-running-on-itself marketing pitch. The general theme is that a software vendor is its own first customer. Salesforce will highlight how Agentforce is running the company. ServiceNow has been doing ServiceNow on ServiceNow for years. Pick a vendor and there's some version of the "you are your first customer" narrative going on.

So OpenAI's move to look a bit more SaaS-y isn't surprising. The LLM players have been working on apps to surround quickly commoditizing foundation models for most of 2025. What's changed is that Wall Street has noticed and took DocuSign and HubSpot out to the woodshed this week.

Here's what OpenAI's Lionetti outlined in a post:

  • OpenAI has been able to move from pilot to production and feels your pain. "While our models improve in speed, cost, and capability, adoption rarely moves in a straight line. Deployments often outpace the change needed for organizations to leverage this technology," he said.
  • "Our GTM, product, and engineering teams study their everyday workflows, define what good looks like and deliver changes in weeks instead of quarters. We decided to focus on a few high-leverage systems with outsized impact," said Lionetti.
  • OpenAI highlighted GTM Assistant, DocuGPT, Research Assistant, Support Agent and Inbound Sales Assistant. These were viewed as Salesforce, Box, HubSpot, DocuSign and possibly ServiceNow killers. These tools aren't that different than what has been discussed in enterprise technology for months. In fact, many SaaS vendors have these ChatGPT-ish features already.
  • Enterprises will hear more about OpenAI's SaaS adventures at its developer day on Oct. 6.

What was more notable about OpenAI's SaaS ambitions is that it has been gaining compliance certifications that enterprises will actually care about. OpenAI isn't competing with SaaS vendors for LLM interfaces as much as it is for compliant workflows and trust.

Some perspective

Remember history. Not that long ago in the 1990s and early 2000s, some big whale entered a market and it was widely assumed it would be successful. Who remembers SAP talking about chasing smaller enterprises? How about Microsoft and mobile?

Microsoft may be the best example. Every time Microsoft entered a market there was a storyline that the software giant was going to kill some vendor. The reality is that those smaller vendors survived and thrived most of the time.

Microsoft didn't kill Google and Android, Apple, MacOS and iOS, Amazon Web Services, Sony, Adobe, Oracle, Salesforce, SAP, Linux, Zoom, VMware or even IBM. I could go on but you get the idea.

OpenAI is an enterprise whale in valuation only. The SaaS vendors it is allegedly killing have more than just a fancy enterprise search tool. Many SaaS vendors serve as de facto workflow engines.

Let's roll a few slides:

DocuSign manages the agreement lifecycle. See: Docusign launches AI contract agents

Box manages the content lifecycle. See: Box launches Box Extract, Box Automate, Box Shield Pro

HubSpot is disruptive in its own right as it moves upmarket. See: HubSpot’s strategy: Use AI to deliver work, not software

It's quite possible you'll rip out your SaaS vendor for OpenAI, but it's not clear why you'd lock in without a few more layers on the platform slide. You'd be better off looking at a disruptive force like Soul of the Machine that'll build you something, or ensuring that you're model agnostic. OpenAI could be an ingredient brand for SaaS vendors, but it's just one ingredient of a broader multi-model mix.

The debate isn't about why you'd leave one vendor to lock in with OpenAI. The debate is why you couldn't just build what OpenAI has done internally with a cheaper model.
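The build-it-yourself argument above can be made concrete. Here is a minimal Python sketch of the model-agnostic pattern: hide the vendor behind a small interface so an internal workflow can swap OpenAI, Anthropic, Cohere or a cheaper open model without rewriting the workflow. All class and method names are illustrative assumptions, not any vendor's actual API; the stand-in provider simply echoes its input in place of a real SDK call.

```python
from dataclasses import dataclass
from typing import Protocol


class ChatProvider(Protocol):
    """Minimal provider interface; any LLM vendor SDK can sit behind it."""

    def complete(self, prompt: str) -> str: ...


@dataclass
class EchoProvider:
    """Stand-in provider for testing; a real one would call a vendor SDK."""

    name: str

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"


class SupportAgent:
    """Hypothetical internal workflow, akin to the tools OpenAI described."""

    def __init__(self, provider: ChatProvider) -> None:
        # The provider is injected, so vendors stay interchangeable.
        self.provider = provider

    def answer(self, ticket: str) -> str:
        return self.provider.complete(f"Resolve this support ticket: {ticket}")


# Swapping suppliers is a one-line change at construction time:
agent = SupportAgent(EchoProvider("model-a"))
print(agent.answer("password reset"))
# → [model-a] Resolve this support ticket: password reset
```

The point of the design is leverage: because the workflow depends only on the `ChatProvider` protocol, the second-supplier threat in a negotiation is credible rather than theoretical.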

Your play

Whether you believe OpenAI is your SaaS savior is up to you. But one thing is clear: You can use it.

OpenAI has given you an option for negotiations. OpenAI's timing with its SaaS play is pretty good. Why? Customers are beyond annoyed with their SaaS vendors. We hear it from CxOs all the time.

R "Ray" Wang, CEO of Constellation Research, frequently notes how CxOs tell him all the time that the two most inflationary things enterprises see are healthcare costs and their SaaS bill.

Here's how you can use this OpenAI kerfuffle to your advantage.

  • Threaten to "explore" using OpenAI as a layer to your enterprise operations. You can always use a second supplier.
  • Actually explore what OpenAI outlines in its operations and build it yourself with Anthropic, Cohere or any other LLM provider.
  • Evaluate your SaaS strategy as if you had a clean slate. Would you really bet on these SaaS silos (i.e., wannabe platforms) today?
  • Map your platform strategy. A member of Constellation Research's BT150 recently said that with a greenfield, a vendor like ServiceNow is the choice over a series of SaaS vendors.
  • Evaluate your hyperscaler. In the end, the real impact of LLMs is going to be as a user interface and vehicle to access your data, processes and workflows quickly. AWS, Google Cloud and Microsoft Azure are all in the mix for building agents and models that traverse enterprise apps.

In the end, OpenAI hasn't even earned the benefit of delivering FUD because it hasn't done anything yet. Nevertheless, OpenAI just gave you a bit of leverage. Use it.
