OpenAI’s 2026 focus on practical AI points to enterprise

OpenAI has $20 billion in annual recurring revenue, but perhaps the bigger news is that the company's 2026 focus revolves around "practical adoption" in health, science and enterprise.

Sarah Friar, OpenAI's CFO, outlined a bit of the company's financial picture and infrastructure commitments in a blog post.

"Our focus for 2026: practical adoption. The priority is closing the gap between what AI now makes possible and how people, companies, and countries are using it day to day. The opportunity is large and immediate, especially in health, science, and enterprise, where better intelligence translates directly into better outcomes."

Friar's missive is worth noting for the enterprise, which primarily sees Anthropic as the practical enterprise AI adoption play. Perhaps there's a larger takeaway for software-as-a-service vendors. The theory on Wall Street is that SaaS will be disrupted by AI agents and some of these vendors won't make the cut. For now, investors are voting with their dollars as enterprise software stocks take a hit early in 2026.

OpenAI can become "an operating layer for knowledge work," said Friar.

This disruption will take time to play out, but it's clear OpenAI sees itself as a leader going forward with a focus on AI agents and workflow automation to manage projects, coordinate plans and execute tasks.

Friar said AI's biggest constraint is infrastructure, and OpenAI isn't shy about multi-year commitments. Her argument is that revenue tracks computing power: the more compute OpenAI has, the faster it will grow. Friar also noted that new business models lie ahead.

"The business model closes the loop. We began with subscriptions. Today we operate a multi-tier system that includes consumer and team subscriptions, a free ad- and commerce-supported tier that drives broad adoption, and usage-based APIs tied to production workloads. Where this goes next will extend beyond what we already sell. As intelligence moves into scientific research, drug discovery, energy systems, and financial modeling, new economic models will emerge. Licensing, IP-based agreements, and outcome-based pricing will share in the value created. That is how the internet evolved. Intelligence will follow the same path."

Friar makes an interesting point, but outcome- and value-based pricing usually appeals to vendors. Customers? Not so much.

 

OpenAI's post made the case that the company's spending on infrastructure is disciplined, with capital committed in tranches against "real demand signals."

According to Friar, OpenAI's business model will catch up to infrastructure with multiple revenue streams including commerce, ads, subscriptions and APIs.

Friar said compute grew roughly 3x year over year, from 0.2 GW in 2023 to 0.6 GW in 2024 to about 1.9 GW in 2025. Revenue grew at the same clip: $2 billion in ARR in 2023, $6 billion in 2024 and more than $20 billion in 2025.


DisrupTV Special Edition at Davos 2026: AI, Geopolitics, and the Race to Build Trust at Global Scale

Broadcast live from Davos during the World Economic Forum 2026, this special edition of DisrupTV brought together global leaders, technologists, and strategists to unpack one urgent question: How do we lead, govern, and collaborate in an AI-driven world defined by geopolitical uncertainty?

Co-hosted by Vala Afshar, Chief Digital Evangelist at Salesforce, and R “Ray” Wang, CEO and Founder of Constellation Research, the episode explored AI’s expanding role across healthcare, enterprise operations, national competitiveness, and global cooperation. Guests included Mark Minevich, Christian Lindmark, Dr. Travis Oliphant, Sandy Carter, and Jim Harris, each offering a distinct perspective on how AI is reshaping outcomes—and expectations—across industries.

Davos 2026: From Dialogue to Action

R "Ray" Wang opened the session by framing Davos 2026 around a clear mandate: fostering cooperation, broadening perspectives, and solving shared global challenges. Unlike previous years, this Davos carried a heightened sense of urgency—driven by geopolitical tensions, economic realignment, and the accelerating impact of artificial intelligence on GDP, labor, and national security.

Vala Afshar highlighted that while AI dominated conversations across Davos, the real differentiator was how leaders talked about trust, ethics, and implementation, not just innovation.

Geopolitics, Resilience, and AI as Economic Infrastructure

Mark Minevich, President of Going Global Ventures, emphasized that AI is no longer a future differentiator—it is fast becoming economic infrastructure. Nations that fail to integrate AI responsibly risk falling behind in productivity, resilience, and competitiveness.

Minevich noted that compared to prior years, Davos 2026 reflected sharper geopolitical realities:

  • Rising fragmentation between global power blocs

  • Increased focus on resilience over efficiency

  • The urgent need for public-private collaboration on reskilling and workforce readiness

As AI reshapes industries, governments and enterprises alike must rethink how they prepare workers—not just for new jobs, but for continuous adaptation.

AI in Healthcare: From Experimentation to Clinical Impact

Healthcare emerged as one of the most compelling domains for applied AI.

Christian Lindmark, CTO at Stanford Healthcare and Stanford School of Medicine, shared how Stanford is integrating AI directly into clinical workflows—not as experimental tools, but as decision-support systems embedded in daily practice. Stanford’s AI-powered chat and EHR platforms are helping clinicians:

  • Improve diagnostic decision-making

  • Reduce cognitive load

  • Deliver faster, more consistent patient care

Crucially, Lindmark stressed that AI succeeds in healthcare only when it augments human judgment—not replaces it.

Trust, Open Source, and Data Sovereignty in the Age of AI

Dr. Travis Oliphant, founder and chief architect of major open-source AI frameworks, underscored that trust is the currency of AI adoption. As organizations deploy AI at scale, questions of data sovereignty, transparency, and governance become existential.

Oliphant argued that open-source communities play a critical role in:

  • Enabling sovereign AI systems

  • Preventing over-reliance on opaque black-box models

  • Allowing nations and enterprises to maintain control over data and outcomes

Without trust, even the most powerful AI systems will face resistance—from regulators, workers, and citizens alike.

Ethical AI and the Reality Check Leaders Need

Sandy Carter, bestselling author of AI First and Unstoppable, brought a sobering reality check to the conversation: only 15% of the world’s information has been digitized.

This gap has major implications. Leaders often overestimate AI’s completeness while underestimating bias, data gaps, and ethical risks. Carter emphasized that ethical AI requires:

  • A clear understanding of AI’s limitations

  • Intentional governance and guardrails

  • Diverse voices shaping AI systems

At Davos, she also spotlighted the importance of inclusive leadership, hosting sessions on ethical AI and the “Unstoppable Women of Web3 and AI.”

Reinventing Business Processes with AI

Closing the episode, Jim Harris shared a powerful enterprise example from Ernst & Young, where AI transformed a 46-hour risk management process into a 15-minute workflow for new users.

The key lesson? AI value doesn’t come from automating broken processes—it comes from reimagining them entirely. Harris advocated for a “blank first” mindset:

  • Start with outcomes, not existing workflows

  • Design AI-native processes

  • Measure impact in speed, accuracy, and decision quality

Key Takeaways

  • AI is now geopolitical infrastructure, influencing GDP, competitiveness, and national resilience

  • Trust, transparency, and data sovereignty are foundational to AI adoption

  • Healthcare AI is moving from pilots to production, with real clinical impact

  • Ethical AI requires humility, especially given how little data is truly digitized

  • The biggest AI wins come from reinventing processes, not automating the past

Final Thoughts: Leadership in an Era of Converging Uncertainty

The DisrupTV Davos 2026 Special Edition made one thing clear: the future of AI will be shaped less by algorithms—and more by leadership choices.

As R "Ray" Wang noted, this is not a moment for incremental thinking. Leaders must balance innovation with responsibility, speed with trust, and ambition with cooperation. In a world where AI, geopolitics, and human systems converge, those who lead with intention will define what comes next.

Stay tuned for more DisrupTV insights from the world’s most influential conversations—where technology, leadership, and humanity intersect.

Related Episodes

If you found this Special Edition episode valuable, here are a few others that align in theme or extend similar conversations:

 


From healthcare to geopolitics, DisrupTV goes live from Davos to explore how AI, trust, and cooperation will shape the global economy.


From Boardroom to Operations: Building Human-Machine Partnerships That Balance Speed with Wisdom


In Part 1 of this 2026 Boardroom Decision series, I highlighted why corporate Boards must expand their fiduciary duty to encompass gray zone threats, hardware-level vulnerabilities, and the strategic asymmetries created by fragmented AI policies. I argued that traditional risk management frameworks are insufficient when facing adaptive, intelligent adversaries in rapidly changing environments. The question I left you with was this: how do we build resilient organizations that can thrive in a contested world while preserving human agency and ethical judgment?

The answer lies in operationalizing what I call decision elasticity through AI-augmented defense and human-machine partnerships. This is not about choosing between human judgment and machine speed. It is about architecting systems where both work together, each amplifying the other's strengths while compensating for weaknesses.

 

Why Cybersecurity and AI Budgets Must Rise Together

In a recent DisrupTV episode with friends Ray Wang and Vala Afshar, I joined Andre Pienaar, CEO of C5 Capital, to discuss leadership in the age of AI-driven cyber threats. Andre opened with a stark reality that every Board must internalize, namely that cyberattacks are becoming more sophisticated, faster, and increasingly AI-driven. Threat actors no longer operate with manual tools. They are deploying automation, machine learning, and increasingly autonomous systems to exploit vulnerabilities at scale.

For Boards and executives, this means a fundamental shift in investment strategy. You cannot increase AI adoption without simultaneously increasing cybersecurity investment. Andre emphasized that AI expands the attack surface just as much as AI enhances productivity. Organizations deploying AI without upgrading security architectures are effectively widening the door for adversaries.

This connects directly to the gray zone threats I discussed in Part 1 of this series. Specifically, when nation-state actors spend 15 years systematically positioning themselves in your supply chain, they are not just waiting for you to deploy technological upgrades. They are counting on it.

Every new AI system, every cloud migration, every edge computing deployment creates new entry points if security is not architected from the ground up.

 

AI-Augmented Defense: Humans and Machines, Together

During the DisrupTV conversation, I reinforced a principle that has guided my work from the CDC to the U.S. Intelligence Community to the FCC to my current advisory roles: AI alone is not the solution, but neither are humans operating without it. Cybersecurity success depends on augmented intelligence, where AI detects patterns of life and anomalies at machine speed while humans provide the essential context, judgment, and ethical oversight.

I highlighted a sobering trend: ransom demands are increasing sharply, and AI-enabled attacks are lowering the cost and effort for bad actors. Defenders must respond with equal sophistication. The future of cybersecurity is not humans versus machines. The future is humans with machines.

This humans-with-machines partnership reflects what I call deployment empathy, the recognition that technological transformation is fundamentally about people. Leaders must create environments where teams feel psychologically safe to experiment, learn, and adapt alongside AI systems rather than feeling threatened by them.

Let me be concrete about what this looks like operationally. In my work advising organizations on AI adoption, I see three common failure modes that Boards must help their executive teams avoid.

Failure Mode 1: Treating AI as a Black Box. When security teams deploy AI-driven threat detection but cannot explain how the system reached its conclusions, they create two problems. First, they cannot improve the system because they do not understand its reasoning. Second, they cannot defend their decisions to regulators, customers, or juries when things go wrong. Boards must insist on explainable AI in security-critical applications.

Failure Mode 2: Over-Automating Decision-Making. Some organizations, in their enthusiasm for AI efficiency, automate responses to detected threats without human oversight. This creates catastrophic risks when AI systems misidentify legitimate activity as malicious or when adversaries learn to game the automated responses. Decision elasticity requires keeping humans in the loop for consequential decisions, even if AI provides the initial alert and analysis.

Failure Mode 3: Under-Investing in Human Capability. The most sophisticated AI security tools are useless if your team does not have the skills to interpret their outputs, tune their parameters, or integrate their insights into broader strategic context. Boards must ensure that AI investments are matched by investments in human capability development, not just technical training but also the critical thinking and ethical reasoning skills that machines cannot replicate.

 

Quantum Computing, Geopolitics, and Shortening Supply Chains

The DisrupTV discussion also explored the geopolitical implications of AI and quantum computing. Andre and I both stressed that quantum breakthroughs will eventually render today's encryption obsolete, making post-quantum cryptography a near-term planning requirement, not a distant concern.

This is where the hardware vulnerabilities I discussed in Part 1 of this 2026 Boardroom Decision Series become even more critical. If your supply chain is already compromised at the chip level, the transition to post-quantum cryptography will not save you. Adversaries with hardware-level access can simply intercept data before it is encrypted or after it is decrypted, rendering even quantum-resistant algorithms useless.

My recommendation to Boards is clear: shorten your supply chains to improve cybersecurity. This is not just about reshoring manufacturing, though that may be part of the solution. It is about reducing the number of handoffs, intermediaries, and black-box components between your organization and the foundational hardware and software you depend on. Every link in the supply chain is a potential compromise point. The shorter and more transparent the chain, the easier it is to verify integrity.

At the same time, AI policy and regulations are fragmenting globally.

I posited during the DisrupTV conversation that leaders must understand which geopolitical "technology matrix" they are operating within. To maintain resilience, organizations must be able to pivot as that matrix shifts.

 

Key Strategic Takeaways for Operational Leadership

Building on the Board-level governance principles from Part 1, here are the operational imperatives that CEOs, CISOs, and General Counsel must execute:

Harmonize Governance Internally Before External Mandates Force Your Hand. Do not wait for federal AI policy to resolve the 50-state fragmentation I discussed in Part 1. Establish internal AI governance frameworks now that can adapt to multiple regulatory regimes without requiring complete system redesigns. This means building modularity and flexibility into your AI architecture from the start.

Compartmentalize Experimentation While Maintaining Oversight. Create sandboxes where teams can experiment with AI capabilities without putting production systems or sensitive data at risk. But ensure these sandboxes have clear governance, defined success criteria, and pathways to production that include security reviews and ethical assessments.

Prioritize Pivotability Over Optimization. In a world of rapid technological change and geopolitical uncertainty, the ability to change direction quickly is more valuable than squeezing the last percentage point of efficiency from current systems. This is what I call maximizing pivotability. Avoid decision anchoring, where you double down on a technology or vendor relationship even when market signals suggest a need for change.

Embrace Different AI Flavors with Different Governance. Understand that computer vision AI operates deterministically while Generative AI is creative but unpredictable. Each requires different governance approaches and risk frameworks. Boards and executives must resist the temptation to apply one-size-fits-all policies to fundamentally different technologies.

Lead with Empathy and Courage Through Uncertainty. We need leaders who are not just technically literate, but who also lead with empathy and courage through unprecedented uncertainty. This means being honest about what you do not know, creating space for your teams to voice concerns and propose alternatives, and making decisions even when perfect information is unavailable.

 

The Operational Reality of Decision Elasticity

Decision elasticity is not just a conceptual framework. It is an operational discipline that requires specific organizational capabilities.

First, you need real-time situational awareness. This means AI systems that continuously monitor your environment, detect anomalies, and surface potential threats or opportunities before they become crises. But situational awareness alone is not enough.

Second, you need rapid sense-making capabilities. When AI surfaces an anomaly, your team must be able to quickly assess whether it represents a genuine threat, a false positive, or an emerging opportunity. This requires cross-functional collaboration, access to diverse expertise, and the ability to synthesize information from multiple sources.

Third, you need pre-authorized response options. In a crisis, you cannot afford to wait for Board approval or executive consensus on every decision. But you also cannot give mid-level managers carte blanche to make consequential choices. Decision elasticity means defining in advance what types of responses can be executed immediately, what requires escalation, and what triggers a full crisis response.

Throughout these three capabilities, you need continuous learning and adaptation. After every incident, whether it is a security breach, a compliance failure, or a missed opportunity, your organization must conduct rigorous after-action reviews that feed insights back into your AI systems, your processes, and your training programs.

 

Scaling Resilience Through Ecosystem Partnerships

The governance principles from Part 1 of this essay series, and the operational capabilities I have outlined here, collectively create the foundation for organizational resilience. No company can achieve resilience in isolation. In Part 3 of this series, I will explore how Boards can scale resilience through strategic ecosystem partnerships while maintaining the pivotability needed to adapt as the geopolitical and technological landscape shifts.

The key insight I will develop is this: scaling through partnerships is not just about growth. It is about building a resilient infrastructure that can pivot every three to six months as tech tectonics shift.

This requires moving away from top-down leadership and toward a decentralized, networked model that leverages collective intelligence while avoiding the decision anchoring that causes organizations to double down on failing strategies.

 

An Invitation to Operational Excellence

If your organization is struggling to balance AI adoption with cybersecurity, if your teams feel overwhelmed by the pace of change, or if you recognize that your current operational model is not built for the converged threats I have described, I invite you to engage in a deeper conversation about building human-machine partnerships that work.

My advisory work focuses on helping organizations develop the operational discipline and cultural foundations needed for decision elasticity. This is not about implementing a specific technology or following a compliance checklist. It is about building the organizational muscle memory to respond to ambiguous threats with speed and wisdom.

The stakes are clear. AI strategy is now inseparable from national security and economic competitiveness.

For operational leaders, the question is whether you will build the capabilities to compete in this environment or watch as more agile competitors and adversaries outmaneuver you. Given that fortune favors the brave, I strongly recommend a proactive leadership approach.

 

Dr. David Bray is both Chair of the Accelerator and a Distinguished Fellow at the non-partisan Stimson Center as well as Principal and CEO at LeadDoAdapt Ventures, Inc. He previously served as a non-partisan Senior National Intelligence Service Executive, as Chief Information Officer of the Federal Communications Commission, and as IT Chief for the Bioterrorism Preparedness and Response Program. Business Insider named him one of the top “24 Americans Changing the World,” and he has received both the Joint Civilian Service Commendation Award and the National Intelligence Exceptional Achievement Medal. The U.S. Congress invited him to serve as an expert witness on AI in September 2025. He also advises corporate Boards and CEOs on navigating the convergence of AI, cybersecurity, and geopolitical risk.


OpenAI launches ChatGPT Go, ad tests

OpenAI launched ChatGPT Go in the US, an $8-per-month subscription plan, and plans to bring ads to that plan and the free version.

The two efforts will give a boost to OpenAI's revenue and overall financial picture. After all, OpenAI has billions of dollars in AI infrastructure spending ahead with the latest being a deal with Cerebras.

In a blog post, OpenAI said ChatGPT Go launched in India in August and has rolled out to more than 170 countries. The US entry adds a third consumer tier to OpenAI's subscription lineup.

  • ChatGPT Go is $8 per month.
  • ChatGPT Plus is $20 per month.
  • ChatGPT Pro is $200 per month.
  • ChatGPT Business is $25 per month for smaller companies. ChatGPT Enterprise pricing varies.

What remains to be seen is whether ChatGPT Plus subscribers trade down to ChatGPT Go, which carries ads. My working theory is that AI services may resemble streaming, where customers realize ads aren't so bad if they can save money. ChatGPT Go includes more messages, uploads and image creation tools than the free tier, but fewer than ChatGPT Plus. For many folks, that access may be fine. ChatGPT Plus will include GPT 5.2 Thinking, the ability to use legacy models and Codex. ChatGPT Plus can also retain details from past conversations longer.

More important to OpenAI's financial picture are its plans to roll out ads on the free and ChatGPT Go tiers.

OpenAI laid out its ad principles that include answer independence from advertising, conversation privacy and choice and control over data. Tests will roll out on ChatGPT and ChatGPT Go in the weeks ahead.

Here's a look at the test formats.

 

 


Disrupt Yourself: Personal Growth, Leadership, and Designing Work That Thrives | DisrupTV Ep. 424

Introduction: The Evolution from HR to Employee Experience

In the latest episode of DisrupTV, co-hosts Vala Afshar, Chief Digital Evangelist at Salesforce, and R “Ray” Wang, CEO and Founder of Constellation Research, led a timely discussion on how leadership, culture, and work itself are being redefined in the Age of AI.

The conversation explored a fundamental shift underway in organizations worldwide: the move from traditional Human Resources to employee experience (EX) design. As AI accelerates change, leaders must be far more intentional about how people experience work, growth, and belonging.

Joining the show were Whitney Johnson, CEO of Disruption Advisors, and Dean Carter and Mark Levy, former executives at Airbnb and Patagonia and co-authors of a new book on employee experience design.

Whitney Johnson: Personal Disruption and the S-Curve of Learning

Whitney Johnson reframed disruption not as a technology phenomenon, but as a human one.

She emphasized that real transformation happens when individuals are willing to disrupt themselves—learning new skills, adopting new identities, and stepping into uncertainty. Central to her thinking is the S-curve of learning, which maps growth across three stages:

  • Launch Point: Uncertainty, fear, and steep learning
  • Sweet Spot: Momentum, confidence, and rapid progress
  • Mastery: Competence, comfort—and the risk of stagnation

Johnson stressed that leaders play a critical role in helping employees navigate these transitions by creating psychological safety, encouraging experimentation, and responding constructively to new ideas. Growth, she noted, is humanity’s default setting—fear is simply the signal that learning is happening.

Designing for Agency, Not Compliance

A recurring theme throughout the episode was agency—the ability for employees to act with ownership, creativity, and purpose.

Johnson highlighted that resistance to change often stems from fear, not capability. Leaders who acknowledge this and design experiences that support small, behavior-based wins can help teams move forward with confidence. Immigrants, she observed, often excel at personal disruption precisely because they’ve already navigated profound life transitions.

Dean Carter and Mark Levy: From HR to Employee Experience Design

Drawing from their leadership roles at Airbnb and Patagonia, Dean Carter and Mark Levy described why traditional HR models are no longer sufficient.

Instead of managing policies and processes, organizations must design experiences that reinforce belonging, purpose, and trust—especially as companies scale.

Key principles they shared include:

  • Culture add over culture fit: Hiring people who strengthen and evolve values, not simply mirror them
  • Values-based hiring: Using core values interviews to assess alignment with mission
  • Vulnerable leadership: CEOs who model trust and openness set the tone for the entire organization

Their new book on employee experience design serves as a practical playbook for leaders seeking to preserve culture while growing rapidly.

AI and the Future of Employee Experience

AI was not framed as a threat—but as a design choice.

Carter and Levy emphasized that AI’s impact on work will depend on how intentionally leaders deploy it. While AI can automate tasks and surface insights, it cannot replace belonging, meaning, and human connection.

The challenge for leaders is to ensure AI augments human potential rather than diminishing it. That means investing in people, rewarding good thinking, and designing systems where technology supports—not substitutes—human judgment.

Key Takeaways

  • Employee experience design is replacing traditional HR as a strategic leadership priority
  • Personal disruption and the S-curve of learning provide a roadmap for sustainable growth
  • Psychological safety and agency are prerequisites for innovation
  • Culture add hiring strengthens values while enabling diversity and inclusion
  • AI must be implemented intentionally to preserve human agency and purpose at work

Final Thoughts: Intentional Leadership in a Disrupted World

As AI reshapes how work gets done, this DisrupTV episode made one thing clear: the future of work is not just about technology—it’s about design.

Leaders who succeed will be those who intentionally craft employee experiences that foster growth, trust, and meaning. By combining personal disruption, thoughtful culture design, and human-centered AI adoption, organizations can build workplaces that don’t just survive disruption—but thrive because of it.

Related Episodes

If you found Episode 424 valuable, here are a few others that align in theme or extend similar conversations:

 


Meta cuts Horizon Workrooms: So long metaverse meetings

Meta cuts Horizon Workrooms: So long metaverse meetings

You'll have to remove that Meta Horizon Workrooms meeting from your calendar after Feb. 16, 2026. Oh, you didn't have one scheduled? Apparently no one else did either.

Meta said it was discontinuing its Meta Horizon Workrooms as a standalone application. Meta's post on the topic omitted the obvious: Meetings in the metaverse just didn't happen.

Now if you want to hold a meeting through your Meta Quest headset, you can use apps from Microsoft, Zoom and Arthur to conduct a metaverse meeting.

Meta's latest retreat from the workplace comes as The New York Times reported that the company is laying off workers at its Reality Labs unit and will stop selling Quest headsets and Horizon services to businesses. Meta exited its Workplace business in 2024 and transitioned customers to Zoom's Workvivo platform.

The company is focusing on its AI efforts as it cuts its metaverse spending. Fun fact: Meta changed its entire brand to focus on the metaverse in 2021. At the time, Meta CEO Mark Zuckerberg said the "metaverse will eventually encompass work, entertainment, and everything in between."

 

Future of Work Data to Decisions Innovation & Product-led Growth New C-Suite Marketing Transformation Next-Generation Customer Experience Digital Safety, Privacy & Cybersecurity meta Chief Information Officer

How Should Executives Think of AI in 2026?

How Should Executives Think of AI in 2026?


This post is a very short summary of the last three “Enterprise Technology Intelligence Briefing” books. In that monthly report, we track the enterprise technology topics that matter to executives and boards, and lately, it has been AI. Of course.

Much has been discussed, but succinctly, today's situation can be summarized in four points:

  1. Generative AI is reaching (has reached, it could be argued) its natural limits, both in economic viability and in the potential to address an ever-smaller number of new use cases.
  2. Agentic AI has promise, if only core governance issues can be resolved. The most critical ones are cybersecurity, privacy, autonomy, and availability.
  3. The enterprise infrastructure in place today is not ideal. After decades of technical debt sprawl and patch-up jobs, digital integration is the most critical problem.
  4. There is a bright future for AI in the enterprise, if only the right combinations of data models and technology infrastructure can be quickly scaled and adapted to diverse use cases.

While this quandary will take far longer than 12 months to resolve -- this is, after all, a transformative event similar to the internet, cloud infrastructure, and digital transformation -- there are some things enterprises can do today to prepare for the AI transformation to come:

  1. Digital readiness
    1. Technology executives must understand their data estate and ensure that proper data flows are available across all stored data.
    2. Determine which tools have the necessary permissions and access to the required resources, and how the enterprise, not the host vendors, can retain control.
    3. None of this matters if the outcomes are unsustainable. Monitor and enforce the economics of accessing, using, updating, and storing data and processes.
  2. AI optimization
    1. There is no single AI model or tool that can do it all. Explore Generative and Agentic AI as well as alternative models and solutions for redesigned use cases and processes.
    2. Explore frontier technologies like edge computing to evaluate the likelihood of sustainable AI operations and reduce operational costs.
    3. For cases where Agentic and Generative AI are the right solutions, optimize economics and outcomes by automating while ensuring proper human supervision.
  3. Infrastructure maturation
    1. Audit the technology stack. Update the documentation to focus on how to create a private platform; determine what else is necessary.
    2. Understand current contracts and economics of all cloud providers: vendors, providers, and hyper-scalers. Create a master strategy for private and public clouds.
    3. Document all places where privacy, access, rights, roles, and compliance are necessary, and centralize it into one common policy place. This is the base for the private platform.
  4. Strategic thinking
    1. As with all transformations, the technology is secondary to the strategy. Create one-year, three-year, five-year, and long-term plans for AI and digital needs.
    2. Determine the available and necessary talent to deliver on those strategies. This is the most commonly overlooked aspect of strategy creation: how to deliver.
    3. In a recent study (actually, a few of them), the most striking finding was that executives don't believe their fellow executives and their boards are sufficiently tech- and AI-savvy. Understand your enterprise's situation.

If you are wondering how your organization can do all this over the next 12 months, the good news is that the next 12 months (2026, at the time of this writing) are about preparing for the long haul, not achieving all of these outcomes. Audits, documentation, strategy alignment, and long-term planning are the building blocks the enterprise should begin to lay out in the coming year. And maybe by then, the geopolitical and economic uncertainties holding back heavy enterprise investment will become clearer, revealing their direction and outcome.

The north star for the enterprise's focus on AI in 2026 is adopting a private platform approach to the cloud, the internet, data, AI, and all things technology and processes. A combination of private- and public-cloud components that will enable the enterprise to leverage cloud providers and hyperscalers while retaining control and governance, the private platform is the core model on which AI (and Quantum, Robotics, and other technological evolutions in the next few decades) will be based.

The biggest concern? Creating the right strategy and ensuring executive tech know-how is up to date. Secondary? Resources and talent.

Want to go deeper into these topics? We will continue doing so in this blog and in the ETIB reports throughout the year. Talk to us; we can help.

To a great 2026! May your strategies align.

Board Strategy New C-Suite Digital Safety, Privacy & Cybersecurity Chief Analytics Officer Chief Customer Officer Chief Data Officer Chief Digital Officer Chief Executive Officer Chief Financial Officer Chief Information Officer Chief Information Security Officer Chief Privacy Officer Chief Supply Chain Officer Chief Sustainability Officer Chief Technology Officer

AWS European Sovereign Cloud available with EU AWS Local Zones on deck

AWS European Sovereign Cloud available with EU AWS Local Zones on deck

AWS European Sovereign Cloud is generally available and the cloud provider is planning to expand from Germany to Belgium, the Netherlands and Portugal.

The launch of AWS European Sovereign Cloud is a milestone for Amazon and its plans to invest more than €7.8 billion in the effort.

AWS is focusing on sovereign cloud as various countries adopt data residency regulations. AWS European Sovereign Cloud is physically and logically separate from other AWS regions, yet offers the same services as other AWS clouds.

According to AWS, EU customers will have complete control over the location and movement of data. These customers will also have low latency via a local cloud.

The plan going forward is to connect AWS European Sovereign Cloud to AWS Local Zones to give customers options to deploy workloads with sovereignty and operational independence across all AWS services. AWS Local Zones enable enterprises to store data in a specific geographic location to meet data residency requirements.

AWS noted that AWS European Sovereign Cloud has a dedicated governance structure and is run by EU citizens. Launch partners include Accenture, adesso, Adobe, Arvato Systems, Atos, Capgemini, Dedalus, Deloitte, Genesys, Kyndryl, Mistral AI, msg group, Nvidia, SAP, SoftwareOne and others.

Constellation Research analyst Holger Mueller caught up with Mustafa Isik, Chief Technologist, Sovereignty at AWS. Here's a look at the takeaways on AWS' sovereign cloud plans in Europe and beyond.

AWS European Sovereign Cloud. Isik said AWS has been in Europe for years, but the focus is now on taking infrastructure that has solved problems for customers for years and rebuilding it in the EU. AWS European Sovereign Cloud is based in Brandenburg, and the location is the equivalent of the Northern Virginia region in North America. Isik said AWS will expand its sovereign cloud footprint across Europe in Belgium, the Netherlands and Portugal. "With AWS European Sovereign Cloud, you get a full blown AWS region," said Isik.

A sovereign cloud and local talent. Isik noted that AWS has built AWS European Sovereign Cloud with local talent. "We have hired many new colleagues and enabled them. These are system engineers, administrators, software developers, and across the board in terms of IT jobs," said Isik. "These colleagues that we have hired within the EU, are EU residents, and can only assume their operational role while they're physically present in a member state of the EU. These people are the only ones who will be operating the infrastructure and the services of the European Sovereign Cloud."

AI in Europe. AWS European Sovereign Cloud will have all of the AI and machine learning services found in any AWS cloud including Amazon SageMaker, Amazon Bedrock and the full suite of infrastructure offerings. "It comes with AI services from day one and that's a powerful proposition that customers from highly regulated industries and public sector have been waiting for," said Isik. He added that AWS European Sovereign Cloud has its own legal structures for handling data and customers can opt out of data used for inference, tuning and other use cases.

 

Data to Decisions Tech Optimization amazon Big Data Chief Information Officer Chief Technology Officer Chief Information Security Officer Chief Data Officer

CEOs take control of AI projects: What could go wrong?

CEOs take control of AI projects: What could go wrong?

CEOs are now driving AI strategy at enterprises and half of them think their jobs are on the line if they don't get it right.

That rather stressful reality comes from a Boston Consulting Group survey on enterprise AI plans.

  • 72% of CEOs say they are the main decision-makers on AI.
  • Half of CEOs think their job depends on getting AI right.
  • 94% will continue to invest in AI even if it doesn't deliver returns.
  • 90% of CEOs think AI agents could produce measurable returns in 2026.
  • Leaders' confidence in AI is higher in India and Greater China than in the West.

BCG said in a report:

"About 90% of CEOs believe that by 2028, AI will redefine what success looks like within their industry. Companies will move beyond simply deploying AI in everyday tasks to reshaping critical workflows and, for many, inventing entirely new business models."

Constellation Research analyst Esteban Kolsky noted in his latest boardroom missive that AI has become an initiative that has moved up the C-suite. He said:

"AI is a transformation initiative for the enterprise for the next 20 years and is itself being transformed from the basic 'GenAI using public models' pilots to a more complete organizational redesign strategy. This transformation will take more than a decade to materialize, with quantifiable steps along the way. Its contextual significance is represented in the boundaries and connections boards and executives must place on AI starting now: It is about the outcomes, it is about leveraging the private platform infrastructure IT is building, and it is about optimizing (and redesigning) the processes that are used to run the business via a control plane."

What could go wrong with CEOs making the AI calls? A few ideas:

  • First, CEOs in this BCG report look a bit overconfident. CEOs have stronger conviction on AI than their technology executives.
  • Technology executives who raise concerns may be benched by CEOs, which could create vulnerabilities.
  • Agentic AI is still immature and requires technology expertise and architecture to work well.
  • CEOs may be driven by short-term results and forget to play the long game.

Data to Decisions Chief Executive Officer Chief Financial Officer Chief Information Officer

Wikimedia Enterprise adds more LLM providers

Wikimedia Enterprise adds more LLM providers

Wikimedia, the foundation behind Wikipedia, said it has added Microsoft, Mistral AI, Perplexity and others as Wikimedia Enterprise partners, joining Amazon, Google and Meta. The Wikimedia Enterprise traction highlights how human-driven content carries a premium.

The additional customers of Wikimedia Enterprise highlight the importance of Wikipedia for training large language models. Wikimedia Enterprise provides the datasets of Wikipedia and its sister projects as well as enterprise APIs that include snapshots, on-demand pulls and streaming updates.

Wikimedia Enterprise, along with donations and other fundraising, supports Wikimedia's human contributors as well as its technology infrastructure. The non-profit disclosed the Wikimedia Enterprise customer additions in a blog post marking Wikipedia's 25th birthday.

Human-driven content has also driven the success of Reddit, which has LLM partnerships with Google and OpenAI. Reddit has become the fourth-largest web property.

On Reddit's third quarter earnings call, CEO Steven Huffman said the company's LLM relationships are "healthy and collaborative" and mutually improve each other’s products.

Jennifer Wong, Reddit Chief Operating Officer, summed up the importance of human content. "I think Reddit's corpus of information is clearly incredibly valuable and helpful to LLMs because it's human conversation that's fresh, it's authentic. It's just distinctive. There's nothing like it. And we know that LLMs appreciate Reddit's conversation," said Wong.

She added that Reddit is focused on developing its business on its own property. LLMs are a secondary product. "What the marketers are getting is real value from the Reddit platform itself in terms of converting that engagement to real business outcomes for them. It just so happens that, that environment of that conversation is also appreciated by LLMs," said Wong.

Data to Decisions Chief Information Officer