
Intuit, OpenAI forge integration pact: Why it could be a $100 million win-win

Intuit and OpenAI announced a multi-year $100 million contract that will integrate ChatGPT with Intuit TurboTax, Credit Karma, QuickBooks and Mailchimp. Intuit will also expand its use of OpenAI models in its genAI operating system called GenOS for use with its AI agents. 

The partnership is a win for OpenAI in that it exposes ChatGPT to Intuit's 100 million users. OpenAI has outlined plans to make ChatGPT a front door to multiple applications, with an initial focus on consumer apps but extending to business software. For its part, Intuit can expose its platform to ChatGPT's 800 million consumer and business users.

Under the terms of the deal:

  • ChatGPT users will be able to take financial actions via Intuit applications directly within the ChatGPT experience.
  • ChatGPT users often ask financial questions and now can be connected with Intuit's platform and features.
  • Intuit's AI-driven expert platform will also be available within ChatGPT.
  • With permission, Intuit can execute personalized financial actions within ChatGPT for mortgages, taxes and loans. Customers can also schedule time with Intuit's local tax experts.
  • The Intuit-OpenAI partnership also covers businesses for queries about revenue, profitability and accounting.

Intuit CEO Sasan Goodarzi said the partnership will combine "Intuit's proprietary financial data, credit models and AI platform capabilities with OpenAI's scale and frontier models."

For Fidji Simo, CEO of Applications at OpenAI, the Intuit deal will be a good proof point for the AI company's overall application strategy. OpenAI can also expand its footprint at Intuit, which at last check leveraged at least 15 different models to deliver its experience. Intuit also had 2 million customers on its business platform interacting with AI agents just a few months after a July launch.

Intuit has been scaling its AI efforts and has been ahead of both the data and AI curves.

At Intuit's investor day in September, Goodarzi outlined the company's AI strategy. He said:

"We believe that every SaaS company, anybody that makes software is either going to get disrupted or they're going to be the disruptors. And that's because of what's possible with AI. We believe that SaaS players must become the system of intelligence, which means you have to be great at data, data models, data ingestion and AI capabilities to ultimately architect learning systems that learn from customers and deliver the experiences that they are looking for, which means business logic, workflows and the app layer will completely get disrupted."

The partnership with OpenAI is part of Intuit's plan to be a disruptor and meet customers--current and prospective--where they are. Goodarzi outlined the following:

  • Intuit is combining AI and human intelligence (HI) via its expert network. "Whatever you want to engage with the platform on our data, AI and human capabilities can deliver that experience," said Goodarzi. "That is our system of intelligence. That is our strategy."
  • "The 3 bets are: shifting and accelerating, delivering done-for-you experiences. Marketing is done-for-you. Customer management is done-for-you. Cash flow management is done-for-you. Payroll is done-for-you. Your books are done-for-you. Your accounting is done-for-you. Your taxes are done-for-you, all of which never has a dead end because it's AI plus HI, with the massive amount of data we have and the capabilities to ingest data. That's the done-for-you experiences across the entire platform," he said.
  • Money is everything. "People don't do taxes because they love taxes. They want the money. Businesses follow their passion, but cash flow matters. And so we're accelerating all of our investments around money, helping you get immediate access to your refund, helping you with how to grow your refund, helping you with how to grow your savings," said Goodarzi.
  • Long-tail data. "To win in the era of AI, data matters. And I'm talking about long tail of data. When you see here that we have 625,000 data points per business or over 70,000 data points for consumers," said Goodarzi. "Data is our advantage. We have incredible data, but more importantly, we have built data services that can ingest the data from anywhere."

 


Anthropic, Microsoft Azure, Nvidia ink $30 billion compute pact

Anthropic will continue to diversify its cloud infrastructure for its Claude workloads with an agreement to purchase $30 billion in Azure compute. In addition, Nvidia will invest up to $10 billion in Anthropic and Microsoft will invest up to $5 billion.

The latest deal also expands Claude's distribution via Microsoft Foundry. Microsoft Foundry customers will get access to Anthropic's Claude Sonnet 4.5, Claude Opus 4.1 and Claude Haiku 4.5. For Anthropic, the Azure deal means Claude is the only top model available on all of the big three hyperscale cloud providers.

Microsoft announced the Anthropic and Nvidia deal as it kicked off its Ignite conference, where it launched Agent 365 and a parade of AI agents along with data fabric updates.

Anthropic has expanded its infrastructure of late and now has deals to build its own data centers and agreements with the big three cloud hyperscalers.

This latest agreement among Anthropic, Microsoft and Nvidia highlights the circular economy that dominates AI. Anthropic will buy Nvidia GPUs via Microsoft Azure, which in turn expands its model choices through a broader deal with the LLM provider. In addition, Microsoft and Nvidia will invest in Anthropic. Amazon and Google are already Anthropic investors.

According to the companies, Anthropic and Nvidia will collaborate on design and engineering to optimize price performance. Anthropic has expanded its compute with custom processors from AWS and Google Cloud, deals that could have pushed Nvidia to the back seat over time; now Nvidia is in the mix going forward. Anthropic said its initial commitment covers one gigawatt of capacity based on Nvidia Grace Blackwell and Vera Rubin systems.


Google launches Gemini 3, Google Antigravity, generative UI features

Google launched Gemini 3, its latest model, across its Gemini app, search services, Gemini API, AI Studio and Google Cloud's Vertex AI and Gemini Enterprise. The company also highlighted how Gemini 3 can provide answers with a generative visual user interface.
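For developers, the rollout means Gemini 3 is reachable through the same Gemini API path as earlier models. Below is a minimal sketch using the google-genai Python SDK; the model identifier is an assumption and should be checked against the current model list in AI Studio.

```python
# Minimal sketch of calling Gemini 3 through the Gemini API with the google-genai SDK.
# The model identifier below is an assumption; verify the current name before use.
from google import genai

client = genai.Client()  # reads the API key from the environment (e.g., GEMINI_API_KEY)

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents="Plan a three-day trip to Rome with a daily budget breakdown.",
)
print(response.text)
```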

Gemini 3 has better performance and capabilities compared to Gemini 2.5 Pro. For instance, Gemini 3 scores well ahead of its predecessor on benchmarks such as Humanity's Last Exam, visual reasoning puzzles, scientific knowledge and coding.

In addition, Google launched Google Antigravity, a developer environment for macOS, Windows and Linux. Google Antigravity is an effort to "push the frontiers of how the model and the IDE can work together," said Koray Kavukcuoglu, CTO of Google DeepMind and Chief AI Architect at Google, during a briefing.

Kavukcuoglu said Google Antigravity aims to do the following:

  • Offer a dedicated agent interface that gives developers the ability to use autonomous agents that can operate through the code editor, terminal or browser.
  • It can build an application from a single prompt, create subtasks and then execute them.
  • Antigravity creates progress reports and tasks, and verifies work inside Chrome. It will also present a walkthrough of how the final product works.
  • Antigravity will be in public preview and available on Mac, Windows and Linux.

Combined with Gemini 3, Antigravity lets developers "use these models to learn, build and plan anything they want to do."

While Google Antigravity appeals to enterprises and developers, the big takeaway from the Gemini 3 launch is that Google can deliver new models at scale across its platforms all at once. The Gemini app now has more than 650 million users per month and Gemini has more than 13 million developers.

The main theme from Google isn't just how Gemini 3 performs against benchmarks, but its ability to make its answers accessible. Tulsee Doshi, Senior Director of Product Management, Gemini at Google DeepMind, said "Gemini 3 is responding with a level of depth and nuance we haven't seen before" because it can seamlessly transform information from multiple formats.

Doshi also said Google will launch Gemini 3 Deep Think, its enhanced reasoning model, to safety testers before making it broadly available to Google AI Ultra subscribers. The goal for Google is to ensure Gemini 3 evolves from an answering tool to a partner that can help you "learn a new skill, build a creative project or just plan your life."

Here's what stood out for Gemini 3.

  • Visual Layout. In Google Labs, Gemini 3 can spin up visual layouts with its answers. Type in a question such as how to plan a three-day trip to Rome and you'll get a designed UI instead of just text. Gemini 3 can create interactive widgets, options to explore different scenarios, and images, tables and text.

  • Dynamic View. Gemini 3 can use its coding ability to create interactive experiences on the fly. "When we talk about multi-modality, it's not just about how Gemini 3 can understand input, it's also how it can output things in entirely new ways," said Josh Woodward, VP, Google Labs, Gemini and AI Studio. Dynamic View will also be in Google Labs to collect feedback.

  • Gemini Agent, a feature that takes a question and breaks it down into a plan. For instance, you can ask Gemini Agent to control your inbox, and cluster tasks and messages. "We think it's our first step to a true generalist agent that will be able to work across your different Google products," said Woodward.

Most folks will wind up experiencing Gemini 3 in AI Mode search. Google executives said the goal is to let people ask nuanced, natural-language questions without having to know the right keywords. You'll be able to ask about pictures. Google's AI Mode will be able to route queries to models based on difficulty.
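Google didn't detail how that routing works. Purely as a conceptual sketch, a difficulty-based router might look like the code below, where the scoring heuristic and model tiers are hypothetical placeholders rather than anything Google has described.

```python
# Conceptual sketch of difficulty-based model routing (not Google's implementation).
# The scoring heuristic and model tiers are hypothetical placeholders.

def difficulty_score(query: str) -> float:
    """Crude proxy for difficulty: longer, multi-part questions score higher."""
    words = query.split()
    multi_part = query.count("?") + query.count(" and ")
    return min(1.0, 0.01 * len(words) + 0.15 * multi_part)

def route(query: str) -> str:
    """Pick a model tier based on the difficulty score."""
    score = difficulty_score(query)
    if score < 0.3:
        return "fast-model"       # cheap, low-latency tier
    if score < 0.7:
        return "standard-model"   # default tier
    return "frontier-model"       # most capable, most expensive tier

print(route("What's the capital of Italy?"))                                           # fast-model
print(route("Compare three itineraries for Rome and Florence, and estimate costs?"))   # standard-model
```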

Constellation Research analyst Holger Mueller said:

"Google keeps innovating fast with Gemini 3 shipping with better reasoning and better coding. Google is also making Gemini 3 available across the Google offerings from search to Vertex AI either immediately or in the near future. Google heavily leverages its lead in multimodal AI and Gemini shows that. Users and tasks seamlessly flow between different modalalities. Google also released its agentic framework Antigravity for developers – with the agent running more autonomously than ever before. Google continues its lead as a multimodal reasoning and coding model with Gemini 3."

For enterprises, Google Cloud cited a wide range of Gemini 3 customers including Box, Cursor, Figma, Shopify and Thomson Reuters.


Microsoft launches Agent 365, a parade of AI agents at Ignite 2025

Microsoft launched Agent 365, which is designed to be the control plane for native and third party AI agents, as the company aims to be a horizontal as well as vertical play. The other key theme is that Microsoft is arguing that the best way to incorporate AI agents into your business is to extend the infrastructure you already have.

At Ignite 2025, Microsoft positioned itself as a cure for AI agent sprawl, governance and security. In a blog post, Charles Lamanna, President of Business & Industry Copilot at Microsoft, said the company is trying to solve an enterprise challenge: "How to manage and govern agents responsibly and at scale, without rebuilding the trusted systems they rely on."

Lamanna argued that the best way to manage AI agents is the same way you manage people, using the systems you're already familiar with.

This theme is common among enterprise software vendors, which are adding Model Context Protocol (MCP) servers at a rapid clip and making the case that their platform can manage their native agents as well as third party digital workers. Microsoft's core advantage is that it doesn't have to convince enterprises that it can play the horizontal as well as the vertical and function-specific game.

Agent management and governance have been a big blocker for AI agent deployments at scale. Agent 365 is available through Microsoft's Frontier program, which gives customers early access to AI innovation.

According to Microsoft, Agent 365 does the following:

  • Customers can secure, deploy and control AI agents across Microsoft platforms, open source frameworks and third party tools.
  • Agent 365 will be an enabler across multiple Microsoft applications ranging from Word and Excel to Dynamics 365, Defender, Entra and Purview.
  • Microsoft is providing unified observability across AI agent fleets via telemetry data, dashboards and alerts. IT leaders will be able to track every agent being built, used and bought.
  • Agent 365 includes five core functions: Registry, Access Control, Visualization, Interop and Security.

Microsoft is leveraging its ecosystem to ensure Agent 365 can manage all agents including those from Adobe, Databricks, ServiceNow and SAP to name a few.

Agent 365 can be enabled in the Microsoft 365 Admin Center.

Constellation Research CEO R "Ray" Wang said:

"Customers seek a place to manage their agents and often start with the vendor with the largest footprint in their tech landscape. Agent 365 gives Microsoft users a place to start the process of registering, managing, securing, and orchestrating their agents. As organizations mature in their use of agents across end to end processes, they will then move to a cross-platform, multi-agentic model."

Agent 365 is clearly the headliner at Ignite, but Microsoft's Book of News is packed with database updates including Azure DocumentDB and Azure HorizonDB, a PostgreSQL cloud database now in private preview, Microsoft Dataverse integration across Microsoft Fabric and more. Microsoft Foundry will have a unified catalog of MCP tools to enrich agents. Microsoft also announced Fabric IQ, which extends the semantic layer used in Power BI across its platform. Foundry IQ is a next-gen retrieval augmented generation offering.

Simply put, nearly everything Microsoft announced at Ignite--even secure Edge browser updates, Windows features and .NET app modernization--had a hook into AI agents. OK, Microsoft's launch of its next-gen CPU, Cobalt 200, didn't have a direct tie to AI agents, but you get the idea.

If you were to sum up Ignite in three words though it would be agents, agents, agents (preferably in the voice of Jan Brady). Microsoft's AI agents touch everything from personal productivity to process transformation to workforce augmentation.

With Agent 365 as the control plane for agents, it's not surprising that Microsoft unveiled multiple agents across its platform. Microsoft is using Work IQ, its intelligence layer that powers Microsoft 365 Copilots and agents, to scale agents.

The company said Workforce Insights, People and Learning Agents are generally available. The Workforce Insights Agent provides managers with insights into how teams are doing. People Agent is designed to find people in an organization based on role, function or skill. Learning Agent specializes in microlearning, tips and curated courses.

Microsoft's Sales Development Agent, now available in the Frontier program, indicates where the company's AI agent strategy is headed. Sales Development Agent is autonomous and will search for, qualify and engage leads. The agent will also research prospects, create personalized outreach and follow up with the ability to hand off to a human.

Constellation Research saw a demo of Sales Development Agent and it was clear that it could alleviate multiple sales pain points.

Here's Microsoft's kitchen sink of agents launched at Ignite.

AI business and productivity agents powered by Work IQ

  • Word Agent: Organizes complex information into clear documents such as strategic plans and policy briefs.
  • Excel Agent: Turns data into charts, summaries, and insights using built-in formulas and logic.
  • PowerPoint Agent: Builds presentations with storytelling and visual structure through a conversational interface.
  • Sales Development Agent: A fully autonomous sales agent that will research, qualify, and engage leads to drive revenue growth.
  • Workforce Insights Agent: Gives leaders and managers comprehensive real-time insights into their team across roles, tenure, and location to help make data-driven workforce decisions.
  • People Agent: Helps users find people in their organization based on role, function, or skill and offers suggestions on how to connect with colleagues.
  • Learning Agent: Delivers personalized microlearning experiences, tailored tips, and curated courses to help employees build role-specific and AI skills.

IT, admin and security agents across Microsoft 365

  • Teams Admin Agent: Automates and streamlines administrative tasks in the Microsoft Teams admin center, such as meeting monitoring and user provisioning.
  • SharePoint Admin Agent: Uses AI-driven insights and automation in the SharePoint admin center.
  • Change Review Agent (in Intune): Analyzes change requests in context, checking for risks, conflicts, and compliance before deployment.
  • Policy Configuration Agent (in Intune): Accelerates policy creation by capturing intent from uploaded documents or natural language requirements and mapping them to recommended settings.
  • Device Offboarding Agent (in Intune): Uses activity signals to suggest which devices should be removed and streamlines offboarding.
  • Conditional Access Optimization Agent (in Entra): Ensures the right protections are applied to the right users.
  • Identity Risk Management Agent (in Entra): Investigates and remediates risky users with intelligent insights and recommendations.
  • App Lifecycle Management Agent (in Entra): Automates discovery, onboarding, monitoring, and remediation of apps across the environment.
  • Access Review Agent (in Entra): Streamlines user access and permissions reviews and quickly acts on recommendations with AI-powered insights.
  • Data Security Posture Agent (in Purview): Helps admins proactively manage risk by discovering sensitive content, assessing posture gaps and improving policy hygiene.
  • Data Security Alert Triage Agent (in Purview): Enables analysts to triage, prioritize, remediate critical alerts and automate incident response.
  • Threat Hunting Agent (in Defender): Orchestrates full threat-hunting sessions through natural language, allowing analysts to ask questions and receive summarized answers with underlying KQL queries.

To help build these agents, Microsoft added a series of new tools in Microsoft Copilot Studio including:

  • Agent evaluations, automated tests to measure performance across predefined scenarios and criteria. Admins and builders will have the ability to compare agent performance.
  • Computer use to enable agents to automate tasks across apps and websites.
  • Real-time monitoring during agent runs.
  • Agents built in Copilot Studio will each get an Entra Agent ID.
  • Agents created in Microsoft 365 will be able to create files and access context from Teams meetings, calendar data, directory information, OneNote and shared mailboxes.


Zoho One gets new UX, moves closer to business operating system vision

Zoho One, a suite of 50 applications available for $37 a month, is getting a new user interface that integrates the business software buffet into one platform.

Instead of using Zoho One's applications individually--75,000 customers on average use 22 applications--Zoho is pulling them into one interface leveraging context, data and AI.

The new interface sets up Zoho One to be more of a platform for businesses. Zoho Evangelist Raju Vegesna said the new interface makes it clear to customers that they "are not licensing apps with Zoho One. They are licensing peace of mind."

According to Vegesna, who provided a demo of the new Zoho One interface, users can pull in application functionality and features on one screen without worrying about switching software.

"What if we can take all of these applications and make them behave like a single application, where the context becomes the king?" said Vegesna. "Of course, these are not restricted to Zoho applications. Users can bring in any third-party applications in here, because we also support single sign-on. You can plug in a third-party application and make it behave like a first-party application."

Indeed, the unification of experience, integration and intelligence does get Zoho One closer to Zoho's business operating system vision. From here, Zoho can continue to add intelligence to Zoho One.

Here's what you need to know.

Zoho One UX

Apps in Zoho One are grouped into Spaces across the top toolbar: Personal, which includes apps unique to the user such as productivity software; Organization, which houses company-wide communication; and Department, which covers functions such as HR, Marketing and Finance.

Spaces can be customized with automated actions and workflows.

Zoho also is aiming to provide the user access to everything you need to know on one screen. UX features include:

  • Action Panel, which surfaces upcoming meetings, unread messages and emails and tasks in one view with easy navigation.
  • Dashboards and Boards, which consolidate data from all connected apps and third-party systems in one location via widgets. Customers can build custom dashboards from specific apps.
  • The addition of Vani, a visual-first intelligent virtual space for brainstorming and planning.

Integrations

Zoho One is natively integrated with Zoho apps and third party software. Zoho One also uses the Zoho platform for security features including smart offboarding, device management and encryption key support.

Key integration features include:

  • Monitoring via an integrations panel where users can create integration flows and monitor usage.
  • Unified Portal, a custom space where users can consolidate all of their application-specific portals from Zoho and third parties.
  • Integrations for support tasks such as domain verifications.
  • Outcome-based integrations for workflows that require multiple steps and apps. Zoho One has a Smart Offboarding tool that handles department transfers, manages device data and takes care of user application data from one screen.

Intelligence and context

Zoho One includes Zia, Zoho's AI assistant, available throughout the interface with the ability to aggregate and provide context across platforms such as Google Workspace. The federated intelligence is designed to break down data silos, speed up decisions and improve productivity.

"We are embedding Zia contextually so that user does not even know that they are using AI here," said Vegesna.

Other features include:

  • Zia Hubs, a content management system, has its own space in Zoho One and works across workflows that feature Zoho Sign and recorded Zoho Meetings conversations. Data automatically flows to Zia Hubs so it can be surfaced in Zia Search.
  • Ask Zia is available throughout Zoho One from the bottom toolbar.

Constellation Research's take

Here are my thoughts on Zoho One's UX.

Although crowded in spots, Zoho One's applications are all pulled into tabs and the integration of the platform is a plus. Zoho is likely to see more usage across applications and Zia, which could emerge as the primary navigation feature.

For power users, the new interface for Zoho One is going to be a win. What remains to be seen is whether users who are used to just a few Zoho applications become overwhelmed.

One thing is clear: the new interface for Zoho One is just the start, and the next few installments are going to be worth watching. It's possible that other software vendors follow suit in deconstructing individual apps and pulling them into one interface.

You can expect Zoho to leverage Zia and smart workflows to bring applications, data and AI together into one package. Zoho is likely to deliver more value to users with the combined package of Zoho One, which was already a great deal at $37 a month.

Constellation Research analyst Liz Miller said:

"Businesses of all sizes and scale are struggling to battle the complexity and chaos of fragmented technology that results in inefficiency and subpar experiences. Opinionated, intentional and holistically integrated platforms don’t just transform how technology can be focused on delivering expected outputs that drive growth but increasingly, can be set to the task of optimizing the business outcomes everyone from the individual contributor to the executive strategist expects."

 


Why Dashboards Die and Decision Loops Win

TL;DR

Dashboards were built for a world where humans were the throughput constraint.

Decision loops are built for a world where machines are the throughput constraint.

Decision loops expose decision debt, eliminate inconsistency, scale judgment, and create learning systems, which is why Decision Velocity (speed × accuracy × effectiveness) is quickly becoming the new measure of AI ROI and yardstick of AI initiative success.
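As an illustration of that metric, here is a hedged sketch of how Decision Velocity might be computed in practice. The normalization of speed against a manual baseline, and the example numbers, are my assumptions rather than definitions from the report.

```python
# Illustrative sketch of Decision Velocity (speed x accuracy x effectiveness).
# Normalizing speed against a manual baseline is an assumption, not a prescribed definition.

def decision_velocity(decisions_per_hour: float,
                      baseline_per_hour: float,
                      accuracy: float,
                      effectiveness: float) -> float:
    """Combine relative speed, accuracy and effectiveness into a single score."""
    speed = decisions_per_hour / baseline_per_hour
    return speed * accuracy * effectiveness

# Example: a fraud-triage loop handling 120 decisions/hour against a 40/hour manual baseline,
# with 92% accuracy and 80% of decisions improving the downstream outcome.
print(decision_velocity(120, 40, 0.92, 0.80))  # 3.0 * 0.92 * 0.80 = 2.208
```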

This article launches a larger series on how enterprises operationalize AI built from my latest research report, Decision Velocity in the Agentic Era: Architecting for Decision Automation.

AI Needs a Job Description, Not Another Playground

Enterprises have spent the past 18 months experimenting with AI: copilots piloted, chatbots stitched across siloed tools, and models deployed with no guardrails or owners.

But boards and CFOs now demand exponential efficiency and thinking at the right scale when it comes to leveraging AI, not more playgrounds. AI needs a job description.

What decision does it own? What outcome does it drive? What guardrails apply? What context is required? What measures apply?

Dashboards can’t answer those questions. Decision loops can.

1. Dashboards Describe. Decision Loops Decide.

Dashboards were revolutionary in an era where the human was the throughput constraint. Dashboards provide visibility, KPIs, and a stable understanding of what happened.

But today, the constraint has flipped. Even as AI-augmented analytics deliver the very necessary accelerated insights, every CDAO and CAIO I speak with says the same thing:

“We don’t struggle with insight. We struggle with action.”

Signals arrive too quickly, exceptions scale too widely, and the market demands more.

That's why decision loops become the backbone of Enterprise AI: signal (sense/learn) → context (understand) → decision (recommend) → action → learning (refine)

Dashboards live in one step of that loop. Enterprises need the full cycle.
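As a rough sketch of that cycle, the skeleton below wires the five stages together; the stage implementations are placeholders under my own assumptions, not a reference architecture.

```python
# Minimal skeleton of the loop described above: signal -> context -> decision -> action -> learning.
# Every stage here is a placeholder to show the shape of the cycle, not production logic.

def sense(event):
    return {"signal": event}

def contextualize(signal, history):
    return {**signal, "history_size": len(history)}

def decide(context):
    # Placeholder policy: route to review when there is little history to lean on.
    return "review" if context["history_size"] < 3 else "auto-approve"

def act(decision):
    print(f"executing: {decision}")
    return {"decision": decision, "outcome": "ok"}

def learn(history, result):
    history.append(result)  # feed outcomes back so the next pass has more context

history = []
for event in ["invoice-1", "invoice-2", "invoice-3", "invoice-4"]:
    context = contextualize(sense(event), history)
    result = act(decide(context))
    learn(history, result)
```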

2. The Real Bottleneck Isn’t Data. It’s Decision-Making.

OK, data quality is key, but most enterprises aren’t data-poor. They’re decision-poor.

The operational gap shows up everywhere: fraud signals detected too late, churn alerts ignored, supply chain delays unaddressed, pricing interventions missed, claims reviews stuck in queues, etc.

This doesn’t happen because the data isn’t visible. It happens because the decision logic is unclear:

  • unclear guardrails
  • tribal knowledge and rules in people's heads and "shadow rules"
  • conflicting interpretations of the same metrics or understanding of what context is missing
  • no clarity on what “good” looks like, or even how we measure/evaluate decision quality
  • brittle rules buried in apps

Dashboards hide the problems. Co-pilots only accelerate awareness, not execution, so they don't close the gap. Decision loops surface problems, measure them, and eliminate them.

This is the first major reframe in enterprise AI adoption. You don’t scale models. You scale decisions and the rest follows.

3. The First Wins Will Be Process Decisions Because They're Observable and Owned

Here’s what the industry is already seeing across early adopters and fast followers achieving ROI: The fastest returns come from operational decisions embedded within processes.

Whether it involves invoice matching, replenishment, credit underwriting, claims adjudication, fraud triage, or customer routing, the shift from pilots to process automation has begun.

Why? Because these drive decision automations that are:

  • measurable
  • repeatable
  • easy to instrument
  • governable
  • and importantly, have clear KPIs ownable by someone

This is why “AI for operations” has already gained mindshare across executive teams and industry event coverage, and is dominating the early wins.

4. Governance Isn’t a Tax. It’s Runtime Infrastructure.

Here’s the shift most enterprises haven’t made, but all are discussing: Governance used to be documentation. Now it’s execution. It allows AI to move at machine speed without creating machine-scale risk.

When semantics, constraints, lineage, guardrails, and rules + models + logic are grounded in context and embedded into decisions:

  • overrides become explainable
  • trust scales, allowing greater automation scope
  • straight-through processing increases as errors and exceptions shrink
  • compliance becomes continuous
  • automation becomes safe

This is one of the biggest white spaces for vendors supporting Enterprise AI, and the market is already moving fast to fill the gap.
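One way to picture governance as runtime infrastructure is a guardrail evaluated on every decision, with each outcome logged for explainability. The sketch below uses assumed thresholds and field names purely for illustration.

```python
# Hedged sketch: guardrails checked at decision time, with every outcome logged for audit.
# The thresholds and field names are illustrative assumptions.

GUARDRAILS = {"max_auto_approval_amount": 10_000, "min_confidence": 0.85}
audit_log = []

def governed_decision(claim_amount: float, model_confidence: float) -> str:
    if claim_amount > GUARDRAILS["max_auto_approval_amount"]:
        audit_log.append(("escalated", "amount_over_limit", claim_amount))
        return "escalate_to_human"
    if model_confidence < GUARDRAILS["min_confidence"]:
        audit_log.append(("escalated", "low_confidence", model_confidence))
        return "escalate_to_human"
    audit_log.append(("auto_approved", None, claim_amount))
    return "auto_approve"

print(governed_decision(4_200, 0.93))    # auto_approve
print(governed_decision(18_000, 0.97))   # escalate_to_human
print(audit_log)                         # every decision leaves an explainable trail
```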

5. Learning Becomes the Loop

Dashboards don’t learn. Co-pilots don’t learn. Decision loops do.

Decision loops are necessarily instrumented to measure decision velocity: every override, exception, confidence score, fairness threshold, guardrail breach and instance of model drift, along with downstream outcomes.

Once captured, what's critical is the speed at which the system incorporates corrections into the next decision. Telemetry enables SMEs to refine the “actual” logic in use, improve rules and adjust thresholds, even as data teams tune models and evaluation/drift detection, adjust guardrails, and ultimately rethink and redesign workflows.

The loop improves … and reflects best practices to match the iterative nature of AI-driven/agentic projects.
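As a hedged illustration of that learning step, the sketch below records human overrides as telemetry and tightens an automation threshold when the override rate climbs. The adjustment rule is an assumption for illustration, not a prescribed method.

```python
# Illustrative sketch of telemetry feeding back into decision logic.
# The window size, override-rate limit and adjustment step are assumptions.

telemetry = []               # (decision, human_override) pairs
confidence_threshold = 0.80  # minimum model confidence required to automate

def record(decision: str, overridden: bool):
    telemetry.append((decision, overridden))

def refine_threshold(threshold: float, window: int = 50, max_override_rate: float = 0.10) -> float:
    recent = telemetry[-window:]
    if not recent:
        return threshold
    override_rate = sum(1 for _, overridden in recent if overridden) / len(recent)
    # If humans keep overriding automated decisions, demand more confidence before automating.
    return min(0.99, threshold + 0.05) if override_rate > max_override_rate else threshold

record("auto_approve", overridden=True)
record("auto_approve", overridden=False)
confidence_threshold = refine_threshold(confidence_threshold)
print(confidence_threshold)  # 0.85: the 50% override rate exceeds the 10% limit
```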

This Article Kicks Off a Larger Arc. Follow Along.

This post kicks off a broader series on Decision Velocity describing how leading enterprises are moving up the learning curve from insights → action → governed decision automation.

Here’s some of what’s coming:

  • Where to Start: Identifying Low-Hanging, High-Value Decisions
  • Process Automation as the First Big Win
  • Governance as Runtime Infrastructure
  • Decision-Centric Architecture (DCA) blueprint that sits on top of existing systems.
  • Context, Tribal Knowledge & Guardrails
  • From Data Integration & Orchestration to Decision Orchestration
  • Decision Loops & Observability

If you’re a CDAO, CAIO, or vendor building toward the data-to-decision stack, follow along, comment below, or better yet, connect with me and let’s discuss.

I’ll be updating this page with new content, and each part of this arc will link back here so you can jump straight into the components that matter most to you.


 


Dell brings more automation to Nvidia AI factory deployments

Dell Technologies said it is adding more automation to Nvidia AI Factory deployments using blueprints that automate more than 30 manual steps and can get customers into deployment in as few as 10 clicks.

The news, announced at SC 2025, comes as Dell has landed more than 3,000 customers for its AI factories including Lowe's and Zoho, which uses Dell infrastructure to power its Zia LLMs. Sandisk was also cited as a customer.

Varun Chhabra, Senior Vice President of Infrastructure (ISG) and Telecom Marketing at Dell, said the company has been working closely with Nvidia to automate, reduce latency and add tools that can move enterprises from proof of concept to production quickly.

"We're really driving down latency, reducing the pressure on GPUs and ultimately costs so customers are able to scale from POCs to production," said Chhabra, who said there's a blend of automation, technology and services needed to get customers where they want to go. See: Dell Technologies ups revenue outlook due to AI infrastructure

Here's a breakdown of Dell's AI Factory announcements with Nvidia, AMD, Cohere and others.

  • Dell expanded its automation platform to streamline enterprise AI deployments. The software-driven tools are coupled with services to validate pilots and move workloads to production.
  • Dell Automation Platform is focused on the Dell AI Factory with Nvidia and supports Dell PowerEdge XE7740/XE7745 servers with Nvidia RTX PRO 6000 and H200 Tensor Core GPUs.
  • The company said that its Dell PowerEdge XE8712 server has the highest GPU density with 144 Nvidia B200 GPUs per rack. Dell added that it is also integrating its PowerScale and ObjectScale systems with Nvidia's NIXL library, support for Nvidia's Spectrum-X and Red Hat OpenShift. ObjectScale has support for S3 tables and S3 vector for AI native search.

Dell is also expanding its multi-vendor AI factory approaches with the following:

  • Dell PowerEdge XE9785/XE9785L servers feature AMD Instinct MI355X GPUs. The systems can use air or liquid cooling options.
  • Dell PowerEdge R770AP has Intel Xeon 6 P-core 6900 series processors.
  • Dell PowerSwitch Z9964F-ON/Z9964FL-ON switches are powered by Broadcom Tomahawk-6 for AI fabrics and high-performance data centers.

Deania Davidson, Senior Director of Product Planning and Management for Dell's AI Server and Networking portfolio, said the company is leveraging direct-to-chip cooling for consistent performance and energy efficiency.


Frontline workers get their AI moment

Frontline workers are having a moment as their importance in the age of AI grows across multiple sectors, including manufacturing and industrial enterprises, and organizations flatten. With all the talk of digital workers and AI agents taking jobs, frontline workers may wind up mattering more.

Two industrial conferences last week from QAD | Redzone and IFS highlight the trend, which Constellation Research has analyzed in two recent reports. Constellation Research CEO R "Ray" Wang noted that there's a "once-in-a-generation opportunity to blend autonomous digital labor with human experiences" that will create "a new category of frontline worker productivity."

"Success will require a level of contextual relevancy that is anticipatory. Based on experiences with more than 2,000 global clients, Constellation presents a future framework where services are delivered by hybrid teams of humans and agents. The revolution in frontline workers has begun, and organizations that adapt will thrive," said Wang.


Constellation Research analyst Mike Ni said that the new era revolves around decision velocity. "Winners won’t outmodel rivals or win based on the number of pilots launched but, rather, by the recurring decisions automated and the outcomes improved," said Ni. "Enterprises stuck in pilots will fall behind irreversibly within five years. Decision velocity represents the compounding advantage of turning data investments into governed decision services that continuously learn."

Simply put, data to decision velocity likely means that frontline workers become the new lead singers for enterprises. The band is enterprise data and AI (agentic or otherwise). Simply put, the humans in the AI loop are likely to be frontline workers.

Here's a tour of how vendors are trying to address the new frontline worker AI age.

QAD | Redzone

QAD | Redzone focuses on mid-market manufacturers with a platform consisting of enterprise resource planning (QAD), connected workforce software (Redzone) and Champion AI, a set of AI agents designed to be connective tissue across use cases.

At the company's Champions of Manufacturing Americas conference in Dallas, CEO Sanjay Brahmawar said the plan is to "bring information and data right into the hands of frontline workers."

With the help of automation, process optimization and AI agents, frontline workers can make better calls and call the shots in the automation workflow.

Ken Fisher, President of Redzone, said QAD | Redzone's goal with Redzone is to leverage AI and frontline workers as a tag team to create a feedback loop that drives productivity and faster decisions.

There's a cultural win, too. "We provide culture change at scale where operators have ownership. They know what their targets are and what their losses are and they're empowered to do something about it," said Fisher.

IFS

Speaking at IFS' Industrial X Unleashed conference in New York, IFS CEO Mark Moffat made the case for physical AI as a way to make industrial operations more autonomous.

IFS announced partnerships with Anthropic, Siemens, Boston Dynamics and 1X Technologies to embed AI, digital workers and robotics into industrial use cases. You'd think that this AI-meets-industrial strategy would mean fewer workers.

Instead, Moffat is betting few if any humans will be replaced. Moffat's take is that AI can retool industrial infrastructure while maintaining jobs. Manufacturing already faces severe worker shortages.

According to Moffat, multiple trends point to the power of AI in industries, including aging industrial infrastructure, labor shortages, retiring expertise and the need for automation and faster decision-making.

And who will make the tough calls in industrial AI? Likely frontline workers with an AI assist.

"AI Applied can embedded in real processes built for the real jobs and the work to be done. Industrial AI applied is about putting that AI capability straight into the hands of workforces, people in the front line. That's where the rubber hits the road,” said Moffat. "Applied AI is different. It's in context. It's in day to day operations. It's built for reality. The requirement for this AI capability needs to be offline, fully in context at all times, and mindful of safety. It's built the people doing the actual work on the ground, running a line, inspecting a substation and keeping people safe."

Jason McMullen, President and CIO for offshore drilling company Noble Corp., said the win for frontline workers and AI is documentation for institutional knowledge. McMullen said that there are only so many electricians and mechanics on a rig and some of them rotate. In other words, some frontline workers may be dealing with an issue they've never seen before.

"Some of these workers are in isolation. Having AI feed data to you to make a decision is critical," said McMullen. "Having a human in the loop is critical for us, but it's also about letting the humans know when they need to be in the loop.”

UKG

Perhaps it's easier to connect the dots between AI, automation and human frontline workers in manufacturing and industrial settings, but there's also a play for other verticals such as retail.

UKG at its Aspire conference earlier this month rolled out a series of AI agents and assistants to transform the frontline worker experience.

The company outlined the following:

  • A vision for UKG's Workforce Operating Platform that would use agentic AI to reshape the flow between people, technology and frontline work. The key words for UKG are orchestration, AI and data to enable enterprises to be more proactive.
  • UKG also announced its Workforce Intelligence Hub, which provides end-to-end visibility into frontline operations. The insights in the Workforce Intelligence Hub can better activate AI agents, hone frontline hiring processes and align labor and customer demand via UKG's Dynamic Labor Management.
  • UKG plans to roll out its agentic AI applications throughout 2026.

According to UKG, AI can give frontline workers better experiences by giving them a conversational interface to access schedules, punches, benefits, HR information and payroll. UKG isn't alone. Workday, Salesforce and ServiceNow, which is also a UKG partner, are among multiple SaaS companies aiming to court frontline workers.

In a 2026 megatrends panel at UKG Aspire, frontline worker engagement was flagged as a critical issue. Frontline workers need engagement and career paths. UKG quoted Ty Breland, CHRO at Marriott, saying that "AI isn’t going to replace the human touch — it’s going to bring it forward."

Indeed, Constellation Research's Supernova 2025 awards featured a few finalists including Spacetel, Doctor Care Anywhere and SavATree that delivered ROI by engaging frontline workers.


AI, Biohacking & the End of Competition: Rewiring How We Work and Win | DisrupTV Ep. 418

Inside DisrupTV Episode 418: AI Startups, Biohacking Leadership & the Power of Uncompeting

In this new episode of DisrupTV, hosts Vala Afshar and R "Ray" Wang dive into the rapidly shifting worlds of AI entrepreneurship, leadership science, and the cultural shift from competition to collaboration. Joined by an exceptional lineup—Ed Addison (NC State University), Dr. Scott Hutchison (author of Biohacking Leadership), and Ruchika T. Malhotra (author of Uncompete)—this episode uncovers what it takes to succeed in an AI-driven, human-centered future.

AI Startups, Data Capital & the Evolving Entrepreneurial Landscape

Ed Addison, a professor, entrepreneur, and author, breaks down the seismic shifts happening in startup ecosystems—especially for AI-driven companies. Gone are the days when product alone determined success. Today, intelligence, data capital, and multi-agentic business models increasingly shape the competitive edge.

Addison highlights:

  • AI-first startups face higher barriers to entry and fierce competition.
  • Success rates remain slim: AI startups average ~5% success, and even experienced entrepreneurs face <10%.
  • Data-rich companies will dominate, while traditional VC funding models may fail to fully understand AI-native economics.
  • By 2030, many corporations will manage hybrid workforces of humans + AI agents, requiring new management frameworks.

He also teases his upcoming novel Probability of Doom—a fictional exploration of AI risk, autonomy, and unintended consequences in a hyper-automated world.

AI, Automation & the Future of Corporate Value Creation

The conversation turns to the macro impact of AI on corporations and the workforce.

Key themes include:

  • AI exponentials will soon deliver 80–90% of digital labor, transforming how work gets done.
  • Corporations may see rising profits—but society must confront the resulting employment and opportunity gaps.
  • Businesses will need far deeper knowledge of their customers to deliver differentiated value efficiently.

This segment frames a central question: What is the future of work when AI automates most work?

Biohacking Leadership: The Science of Warmth, Competence & Gravitas

Dr. Scott Hutchison—leadership professor, theater practitioner, and author of Biohacking Leadership—dives into the biological and behavioral science behind how leaders influence others.

Hutchison explains that leadership is not just a skillset; it’s a biological phenomenon shaped by signals our bodies send and receive. Using improv techniques and behavioral science, he identifies 18 key leadership signals, with three standing out:

  • Warmth – emotional accessibility
  • Competence – clear capability and reliability
  • Gravitas – depth, seriousness, and the ability to create shared value

Gravitas, he notes, isn’t natural talent—it can be trained. Leaders who understand and control their biological signals can "read rooms," reduce friction, and create stronger human connection.

His work bridges neuroscience, psychology, and performance to help leaders thrive amid uncertainty and complexity.

Uncompeting: Ruchika Malhotra’s Framework for Collaborative Success

Ruchika T. Malhotra, author of Uncompete, challenges the deeply ingrained cultural norms around competitive hustle.

Instead of fighting for individual advancement, she argues, leaders and organizations should cultivate:

  • Collaboration over isolation
  • Abundance mindsets over scarcity thinking
  • Shared wins instead of individual victories

Malhotra shares:

  • Competition often masks burnout, insecurity, and performative overachievement.
  • We must rethink what “success” means in communities obsessed with productivity.
  • Uncompeting doesn't mean lowering ambition—it means redirecting energy toward collective uplift and healthier outcomes.

Her personal stories—including conflicting parental philosophies, cultural expectations, and the pressure to “perform success”—bring authenticity and relatability to her mission.

Leadership Rewired: What It Takes to Thrive Going Forward

Across the episode, a unifying theme emerges:

Leadership is being rewired on every dimension—biological, technological, emotional, and cultural.

Today’s leaders must:

  • Understand AI deeply enough to guide teams and strategy.
  • Harness behavioral science to build trust and connection.
  • Abandon outdated competition frameworks and embrace inclusion and collaboration.
  • Rethink success as something created with others, not against them.

As R "Ray" Wang notes, these shifts aren’t just trends—they’re becoming prerequisites for organizational survival.

Final Thoughts: Innovation Starts Within

DisrupTV Episode 418 pushes us to rethink how we build companies, lead teams, and define success in an AI-accelerated world. From multi-agentic corporations to biohacked leadership signals to rejecting outdated competition norms, one message stands out:

The future belongs to leaders who combine intelligence (human + AI), emotional depth, and a collaborative mindset.

This episode is a must-watch for founders, executives, analysts, and anyone invested in the intersection of AI and human leadership.


Fortinet: A Security Platform Story Built on Chips, Fabric, and Patience

I recently spent a couple of days at Fortinet’s analyst summit in Sunnyvale. The conversations with Fortinet’s executive leadership team felt refreshingly grounded. No forced big-bang announcements. Instead, the focus was on how 25 years of engineering work shaped the company’s platform and why those choices matter more now as security shifts toward hybrid deployments and AI workloads.

Many vendors today talk about platforms. Most mean a sales bundle rather than a true platform. Fortinet means shared OS, shared agent, shared silicon, and shared telemetry. That is the foundation it has been building toward for more than a decade, and much of the summit was about showing how that foundation is starting to pay off for customers.

DNA rooted in patient engineering

Fortinet’s trajectory begins with a founder-led culture focused on engineering quality and customer trust. Over time, three characteristics became clear differentiators:

Customer-centricity
The company has prioritized practical adoption paths rather than dramatic rip-and-replace moves. Many platform capabilities are accessible through existing deployments, which supports gradual evolution.

An engineering mindset
Building custom ASICs and a unified OS early on required patience and long-term investment. This approach helped Fortinet avoid short-term pivots and stay focused on performance, integration depth, and scale.

A slower but steadier approach
Fortinet avoided growth-at-any-cost and chose to invest in R&D and mostly organic execution. Today, it serves a large global customer base including a strong presence across Fortune 100 customers and critical infrastructure sectors. The company holds more than 1,300 patents, with a significant number focused on AI.

Strong roots, deliberate expansion

A helpful way to understand Fortinet’s strategy and growth is to look at three decisions that shaped its trajectory:

1) Build security around purpose-built silicon

From the beginning, Fortinet chose to design its own ASICs. This took more time and capital than relying on commodity CPUs, but it gave the company meaningful control over performance and efficiency. Today, as encrypted and east-west traffic grows and AI workloads stress networks, those chips allow customers to inspect and secure more traffic without unacceptable performance or power tradeoffs. Current FortiASIC generations support NGFW, IPS, SD-WAN, segmentation, and SSL inspection efficiently, giving customers scale without adding cost or architectural complexity. This investment is likely to matter even more in a post-quantum world.

2) Treat software as a unifying engine

Fortinet’s core operating system, FortiOS, spans firewalls, SD-WAN, SASE, endpoint, and security operations. Early on, what began as a simple VPN agent grew into a unified agent supporting ZTNA, EPP/EDR, and DLP. Because everything runs on a single OS, Fortinet could expand capabilities without introducing new agents, consoles, or deployment paths. This gives customers a practical way to move toward modern access and endpoint controls using the footprint they already have, reducing integration lift and encouraging natural platform adoption.

3) Build a fabric that connects everything

About 10 years ago, Fortinet introduced its Security Fabric to connect products through shared telemetry, shared policy, and shared analytics. FortiGuard Labs feeds this system with roughly seven billion threat signals per day, which strengthens detection and response across the environment. Since the fabric runs on the same OS, agent and silicon, new capabilities plug in cleanly, giving customers more value as they expand. This is platformization through architecture rather than SKU grouping, helping teams lower operational friction while improving context across networks, endpoints, and cloud environments.


Unified SASE: users, branches and factories

Fortinet’s early choice to extend its original VPN agent rather than create new clients over time gives customers a straightforward path to modern access controls. Because the same agent now supports ZTNA, EPP/EDR, and DLP, organizations running SD-WAN and VPN often have the foundations to enable ZTNA with little added effort or cost. This helps them move away from perimeter-based access and toward user, device, and posture-driven policies without a major re-architecture.

That architecture also supports Fortinet’s growing SASE business. With FortiOS and the unified agent working across campus, branch, remote workers, and cloud traffic, customers can use a single approach for secure access. More than 170 global POP locations reinforce this model by placing security inspection and policy closer to users and applications, improving performance and consistency across hybrid deployments.

In operational technology (OT) environments, Fortinet focuses on enforcement through rugged FortiGate appliances, segmentation, and secure remote access. FortiGuard Labs adds OT-specific threat intelligence and protocol coverage, while partners such as Nozomi and Claroty provide deeper domain visibility. This pairing lets customers apply consistent security and policy where they need it, while still benefiting from partners who understand the nuance of industrial networks.

Together, ZTNA, SASE, and OT capabilities extend the same FortiOS + agent + enforcement model from the user edge through factories and field sites. Customers gain coverage without managing separate stacks or fragmented workflows.

One platform, one data lake: modernizing the SOC

Fortinet sees traditional SIEM and SOAR deployments as powerful but difficult for many teams to operate. Most organizations struggle with integration, tuning, and staffing. By anchoring SIEM, SOAR, XDR, and threat intelligence to a single data lake through FortiAnalyzer, Fortinet intends to reduce that burden. Logs and telemetry across the fabric feed a common store, which keeps investigations, reporting, and automation aligned to the same data and context. FortiAI-Assist accelerates investigations and guided response, helping analysts work faster and focus on higher-value decisions.

This approach turns SOC growth into a staged journey rather than a disruptive transition. Customers keep building on the same data foundation as they mature, which helps them get more value from their telemetry while reducing complexity and overhead.

Cloud and AppSec: expanding the surface

Cloud security evolved quickly and created space for companies such as Wiz to gain early traction. Fortinet is now building more depth in this area through a mix of organic development and targeted acquisitions. Lacework adds CNAPP capabilities, and Next DLP strengthens data security. These join Fortinet’s tools such as FortiAppSec and FortiAIGate, which extend protection across web applications, APIs, and AI workloads.

Because these tools operate on the same FortiOS and fabric telemetry, customers can gain cloud and application coverage without managing isolated platforms. This helps them protect critical workloads across hybrid deployments with more consistent control, policy, and context. The opportunity ahead is to deepen visibility from code to cloud to runtime so that teams can follow application, identity, and data signals across environments. Customers want one view of workload posture, API exposure, and data flows, and Fortinet’s architecture gives it a path to grow into that space.

My View: Staying ahead means going deeper

1) Depth in cloud and AppSec needs to accelerate
Fortinet has a strong foundation in network security and enforcement, but customers are shifting toward protecting applications, data, and identities across hybrid multi-cloud environments. Lacework and Next DLP help, and products such as FortiAppSec and FortiAIGate show progress. The opportunity now is to deepen visibility and control from code to cloud to runtime so that security can follow applications wherever they live. Buyers today expect a single context across workloads, APIs, users, and data. This remains a competitive space, and deeper integration here will be important for long-term relevance.

2) Continued ASIC and POP expansion will reinforce the platform
Custom silicon gives Fortinet a real performance and power-efficiency story. As more data and AI workloads move through distributed environments, this advantage can matter even more. The build-out of more than 170 POPs supports hybrid deployments by placing compute and inspection close to users and applications. Continued investment in ASIC capability and POP footprint can strengthen Fortinet’s value in SASE and distributed cloud networking. The combination of silicon and POPs is a differentiator that many software-only security vendors cannot match.

3) The platform value story must become more explicit
The shared OS, agent, and data lake are the core of Fortinet’s platform. The company will benefit from showing where this architecture improves time to value, detection accuracy, and SOC productivity. Buyers continue to debate best-of-breed versus platform. Many still choose a mix of tools because the benefits of consolidation are unclear. Fortinet can win by explaining the practical benefits of shared policy, shared AI, and shared telemetry across endpoints, networks, cloud, and SOC. Customers appreciate simple integrations, consistent workflows, and measurable efficiency gains.

4) Transition to solution-centric and SaaS is healthy but will take discipline
Fortinet is moving from product-centric selling to solution-led outcomes delivered through SaaS and flexible pricing models, including marketplaces. Maintaining the “Rule of 45” during this shift will require careful execution. The company’s founder mentality has historically supported disciplined growth, which helps in transitions like this. The real test will be how quickly customers adopt cloud marketplaces, usage-based pricing, and managed offerings. Trust, quality, and strong economics would be key factors.

5) FedRAMP could open a new growth frontier
Fortinet already sells into state, county, and municipal accounts. Once it achieves FedRAMP, the federal market could unlock meaningful scale. Federal verticals value trusted vendors with performance, power efficiency, and strong OT/critical infrastructure capabilities. Fortinet checks those boxes. The combination of secure networking, SASE, and SOC platform could position the company well when certification arrives. This is likely a multi-year opportunity that could shift the growth mix in a meaningful way.

Final Thoughts

Fortinet has spent years building around a shared OS, agent, silicon and data foundation. That patience is becoming a competitive advantage as organizations move toward hybrid deployments and AI workloads. The company’s platform now spans networking, access, SOC and cloud, with the same architecture running underneath.

There is still meaningful work ahead, especially across cloud and application security. But the direction is clear. Fortinet is evolving beyond a products mindset toward solutions and SaaS models, while staying grounded in engineering fundamentals and financial discipline. If execution keeps pace, the platform’s consistency and breadth will remain a compelling option for customers looking for coherence in a fragmented security landscape.
