
Clorox to go live with new SAP ERP system, eyes margin improvement

Clorox will go live on a new ERP system in July as it wraps up a multi-year transition to SAP S/4HANA Cloud.

Speaking at an investor conference June 4, Clorox CFO Luc Bellet said the ERP upgrade will enable the company to optimize its operations and grow profit margins. The company outlined its plans to upgrade SAP in 2021 and later delayed the transformation following a cyberattack in 2023. Clorox has said the final tab for the SAP upgrade will be about $560 million to $580 million, based on projections executives gave in February.

During Clorox's second quarter earnings call, CEO Linda Rendle defended the SAP project and noted that it will deliver a strong long-term return. Clorox has been operating on a 25-year-old ERP system. When analysts questioned why the ERP transition was so expensive, Rendle said it was the first upgrade in more than two decades and that the greenfield project also included investments in a data lake and AI.

ERP was also a big topic for analysts on Clorox’s third quarter conference call in May.

Speaking June 4, Rendle said the SAP overhaul is more than just an upgrade; it is a process and digital transformation. The company is also overhauling its data infrastructure to be more efficient in the future. The project falls under Clorox's broader transformation effort, called Ignite.

"Upgrading ERP systems is expensive and risky. Courtesy of Clorox we now know that an average of $20M+ needs to be put aside to pay for the upgrade. It likely did not help for Clorox to wait for 25 years - but it is a key data point for any CIO out there looking into moving to SAP S/4HANA," said Constellation Research analyst Holger Mueller.

"We fundamentally have changed and will complete a digital transformation. This is not just an ERP upgrade to the next set of software. This is building a complete data infrastructure across the company, changing the way that we do global finance, changing our ERP, and putting a suite of technologies around that in an effort to create value for our company," said Rendle, who added that Clorox is looking to claw back 900 basis points of gross margins lost due to inflation.

Bellet said the ERP transition has been a "very complex undertaking fraught with risk." However, Clorox has seen many of its peers already go through the ERP transition. "While we're not necessarily proud to be kind of last to the game, that gives us a lot of benefits. And we've been working with very capable consultants and have been working with many of those peers. And we've been embedding learning from their past launches, from their past mistakes in our plans. And we've also been working pretty closely with our retail partners, which had a lot of really good suggestions," said Bellet.

Clorox first piloted the new ERP system in Canada before the US launch. In January, the company moved its global finance reporting to the new SAP system. Clorox also built up inventory at its retailers to mitigate the risk of out-of-stock conditions. Typically, Clorox has an average of four weeks of inventory at retailers, and it plans to add another 1.5 weeks.

"Once we're past implementation and stabilization, we're quite excited about having a new ERP because it's going to fundamentally modernize the backbone of our operations. Just that means a lot more opportunity for productivity in supply chain and working capital and in admin," said Bellet.

The CFO added that Clorox will focus on net revenue management, personalization and other processes that will benefit from a clean data core.

Here's what's next:

  • Clorox will go live in July with order fulfillment and order management.
  • Manufacturing facilities will move to the new ERP system over the next six months.
  • The cadence: a transition period during the first half of fiscal 2026, followed by optimization.
  • Productivity gains will accrue primarily in fiscal 2027 and fiscal 2028.

HPE launches GreenLake Intelligence, adds AI agents throughout hybrid cloud stack

HPE launched new CloudOps Software, layered AI agents throughout its hybrid cloud stack, unveiled GreenLake Intelligence and expanded its AI factory. In addition, Digital Realty said it would standardize its data center footprint on HPE.

At its flagship Discover 2025 conference, HPE showcased its new branding as well as its play for AI workloads. The big picture from HPE is that hybrid IT operations, networking, storage and AI operations can be largely turnkey via AI agents.

Antonio Neri, CEO of HPE, said the new vision for hybrid IT "is fueled by agentic intelligence at every layer of infrastructure." HPE, like Dell Technologies, is seeing increased interest in enterprise on-premises deployments of AI infrastructure.

HPE is looking to play at the intersection of hybrid IT, AIOps and agentic AI.

During his keynote at Discover 2025 at the Sphere in Las Vegas, Neri made the following points:

  • HPE has been reimagined to enable new business models and experiences. 
  • "We are HPE and this is more than just a look. We are architects and engineers pioneering the next generation of computing. We are operators and sellers who know your business," said Neri.
  • "We are focused on three key essential building blocks: Networking to connect your data more securely and efficiently. Hybrid cloud to give you the flexibility to run workloads where it makes the most sense. And AI to help you unlock the full value of your data to accelerate outcomes," he said. 

Neri said it's time for a new approach with GreenLake and agentic AI:

"You are managing costs, you are chasing alerts. You are patching problems and infrastructure across silos. It takes many people to hold everything together, and managing the complexity leaves little time to focus on true innovation. But that is about to change: agentic AI is fundamentally reshaping how we interact with and manage IT. We are moving beyond AI that simply analyzes and recommends toward a new intelligent agentic AI admin workforce. These AI admins will continuously optimize your infrastructure and resolve issues, helping you save both time and cost."

Here's a look at the key news items from Discover 2025:

  • The company launched HPE CloudOps Software that will combine OpsRamp, HPE Morpheus Enterprise Software and HPE Zerto Software. The parts of HPE CloudOps Software can be used individually or as part of a suite.

  • GreenLake hybrid cloud platform becomes an AI-powered system with AI agents in multiple roles via GreenLake Intelligence. GreenLake Intelligence uses agents to cut through silos, manual workflows, and optimization hurdles.
  • HPE said GreenLake Intelligence was built to bring a unified operating model across its stack. GreenLake Intelligence will deploy AI agents for sustainability operations, orchestration, FinOps, resiliency, security, observability, networking, storage and compute. These agents will work through a reasoning agent that will manage workloads and optimize the HPE stack. The company added that it will support FinOps capabilities with GreenLake Intelligence to optimize workloads and capacity, provide consumption analytics and forecast sustainability impacts.

  • With CloudOps offering one control plane, HPE said it will roll out its next-gen HPE Private Cloud AI with Nvidia. HPE said its latest private cloud AI stack will include secure, air-gapped deployment, new HPE ProLiant Gen12 configurations with Nvidia Blackwell support, multi-tenancy and a federated architecture to handle multiple GPU generations. HPE is targeting model builders, service providers and sovereign AI.

  • HPE Aruba Networking will be retooled on GreenLake Intelligence and add an agentic AI mesh and networking copilot. HPE Aruba Networking Central will get a copilot that serves as a front-end for root-cause analysis, automated remediation and security issues.
  • HPE OpsRamp Software expands its operations copilot with agents for product help and IT management. OpsRamp will get AI-based alerts, incident management and other tools via GreenLake Intelligence.
  • HPE Alletra Storage MP X10000 will add AI agent features and support Model Context Protocol (MCP) servers natively. That MCP connection will support GreenLake Intelligence, GreenLake Copilot and natural language interfaces to manage data workflows.
  • The company also continues to go after virtualized workloads and said HPE Morpheus VM Essentials has new integrations with HPE Private Cloud, a growing partner ecosystem and connections to third-party hardware providers. HPE said it added Veeam and Commvault to its backup partnerships. External hardware support branches out to Dell PowerEdge and NetApp gear.

Separately, HPE said Digital Realty, which provides data center capacity, will standardize on HPE Private Cloud Business Edition across more than 300 data centers.

Constellation Research analyst Holger Mueller said HPE's hybrid cloud approach and use of AI agents to manage operations could resonate with enterprises. 

"The AI era comes to all things Ops in IT. And that is a key upgrade to the human operated Ops. When operations run at AI speed inside of the enterprise and outside attacks are powered by AI, then enterprise Ops need to run on AI as well - the sooner the better."


Is Salesforce ripe for Serious Disruption? And if so, from Whom?

Even as it disrupts its own pricing model, Salesforce continues to post strong numbers. But with the macro and micro outlook for tech investment remaining somewhat fuzzy, is there an opportunity for some established vendors and some AI-native startups to severely disrupt Salesforce's dominance in its key categories?

History has told us that when entities fight a two-front battle, things cannot always go their way. In some ways, Salesforce is dealing with a two-front competitive scenario that it hasn't really had to worry about in the past. Startups always lacked the breadth of CRM functionality to be a threat, while legacy enterprise vendors were seen as "not cloudy enough" to be an existential one. But AI and the completion of cloud migration for enterprise platforms have changed all that.

On the enterprise front, Oracle has a compelling CRM (and broader) application story now that Fusion apps have been well integrated with AI, all on (or off) Oracle Cloud Infrastructure. At the same time, as Microsoft continues to build out its agentic AI story, it may be able to bring its applications upmarket more effectively than in years past. And it is important to note that both of these providers have native ERP functionality that can more seamlessly power agentic AI flows.

And of course, the simple fact that ServiceNow has very explicitly stated it is targeting Salesforce should be a concern. ServiceNow has made significant inroads in building out its own enterprise CRM suite. Couple its modern CRM offerings with a strong, process-oriented platform that keeps adding AI, and Salesforce users deciding whether to stay all-in on Salesforce/Agentforce or re-evaluate alternatives have tough decisions ahead.

On the startup front, more and more companies are building single-use or limited-use agentic and generative AI offerings with fast setup and quick time to value. These AI-native startups cover many areas; my AI SDR shortlist, to take one example, includes almost a dozen viable alternatives to Salesforce's out-of-the-box SDR Agentforce agents. Across marketing, sales, and customer support and success, there are dozens and dozens of startups offering a fast track to AI-powered process automation and cost reduction for Salesforce customers who may or may not have their instances Agentforce-ready.

These startups are most likely not going to "become the next Salesforce" by any means. However, every area of value delivery that these purpose-built, AI-native CRM and agentic AI startups disrupt for a company like Salesforce adds up. And while Salesforce's agentic AI story should and will be about playing nice with all kinds of other AI agents, the fact is that in some cases these AI-native startups may provide compelling alternatives for one or more use cases that Salesforce also looks to address.

For these startups to put any real pressure on Salesforce, they need to offer a clear path to product success measured in weeks, not months. They also need to solve the data issues that can be rampant in CRM deployments, including for Salesforce users not yet familiar with, or comfortable with, the Data Cloud price tag. It's all about value creation and disruption of Salesforce's perceived value proposition as it brings Agentforce more and more to the foreground.

For both the enterprise head-to-head competitors and the scrappy startups, AI is the catalyst. How that AI can access, consume, and provide insights on various data types and sources to deliver valuable, actionable intelligence is the key to success. AI is creating multiple inflection points across the go-to-market IT stack, and Salesforce is not alone in potentially seeing longtime customers reevaluate in the age of AI.

The question remains, what is the next phase for enterprises as we move deeper into the age of AI? Do they look to rely on fewer, broader platforms with embedded AI? Or, do they take a best-of-breed type approach and start working with multiple native AI providers for key use cases to prove out foundational AI strategies? Will Salesforce weather this potential two-front battle as it has before, by making key acquisitions and mastering install base expansion?

Right now we have more questions than answers. But something feels a bit different about this phase of tech innovation, where entrenched vendors seem more vulnerable than ever before. AI has the ability to expose just which providers are (or are not) ready for the next phase. And given the speed and alacrity with which AI is helping us code, it is quite possible that the next world-beating CRM platform does not even exist today; built as an AI-native platform, it could be released and grab market share with alarming speed.


Teradata eyes on-premises AI workloads in regulated industries

Teradata launched Teradata AI Factory, an integrated AI system aimed at regulated industries.

According to Teradata, its AI factory will include its AI and machine learning tools as well as analytics.

The offering is another indicator that enterprise system providers are courting companies looking toward on-premises AI workloads. Teradata AI Factory includes pre-packaged Teradata software and various compliance tools. The system also includes Teradata AI Microservices, which can run on Nvidia GPUs, and Teradata's Enterprise Vector Store.

For Teradata, the move targeting regulated industries such as healthcare, finance and government is about carving out a niche for AI workloads. Dell, Cisco, HPE and Supermicro are just a few of the companies looking to scale enterprise AI factories powered mostly by Nvidia, but also AMD.

On-premises AI systems are seen as a way to control infrastructure, secure first party data and manage costs. Teradata AI Factory keeps data on an enterprise's infrastructure. Teradata is looking to return to revenue growth. In the first quarter, Teradata reported net income of $44 million, or 45 cents a share, on revenue of $418 million, down 10% from a year ago.

Key components for Teradata AI Factory, which is available now, include:

  • Teradata's IntelliFlex platform along with Enterprise Vector Store is designed for integration of structured and unstructured data.
  • AI Workbench, a self-service tool with access to analytics libraries and functions, model lifecycle management and LLM deployment options.
  • Analytics that connect to customer GPUs via Teradata AI Microservices with Nvidia.
  • Native RAG processing in Teradata's ecosystem.
  • Data pipeline support for Open Table Formats as well as internal tools such as Teradata's QueryGrid.
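The retrieval-augmented generation (RAG) pattern Teradata is packaging can be illustrated with a minimal, vendor-neutral sketch. This is plain Python with a toy in-memory vector store and hand-made embeddings, not Teradata's API; Enterprise Vector Store and AI Microservices handle the same retrieve-then-ground flow at enterprise scale.

```python
import math

# Toy in-memory vector store standing in for an enterprise vector store.
# Embeddings here are hand-made 3-d vectors; real systems use model embeddings.
DOCS = {
    "Claims must be filed within 30 days.": [0.9, 0.1, 0.0],
    "GPU clusters are refreshed quarterly.": [0.1, 0.9, 0.2],
    "Patient data never leaves on-prem storage.": [0.2, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query embedding, keep the top k.
    ranked = sorted(DOCS, key=lambda d: cosine(DOCS[d], query_vec), reverse=True)
    return ranked[:k]

def build_prompt(question, query_vec):
    # Ground the LLM prompt in the retrieved context (the "augmented" step).
    context = "\n".join(retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Where is patient data stored?", [0.1, 0.0, 1.0]))
```

The point of doing this inside the database ecosystem, as Teradata proposes, is that the documents and their embeddings never leave governed, on-premises storage.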

Constellation Research's take

Constellation Research analyst Michael Ni said:

"Teradata’s historic strength has been as the platform of choice for regulated, high-performance analytics behind the firewall. AI Factory doesn’t invent a new advantage—it reactivates one, just as the market is circling back to demand sovereign, secure, high-performance AI on-prem.

This all comes at a time when cloud cost volatility and regulatory scrutiny are rising, prompting enterprises to rethink where their most sensitive and strategic AI workloads run.

AI Factory lets Teradata solidify its existing base in regulated industries while positioning to win new customers who assumed on-prem AI meant stitching open-source tools together, or may not have considered Teradata in the past.

With AI Factory, Teradata brings together high-performance analytics, vector stores, and GPU acceleration—all integrated into an on-prem system that removes the complexity many CIOs associate with custom AI deployments.

Teradata isn’t playing catch-up—they’re positioning themselves as a leader for regulated AI. While Snowflake and Databricks started as cloud native and chased cloud market share, Teradata was born on-prem and built the AI command center enterprises need for a unified hybrid experience across platforms, with GPUs, governance, and guardrails included.

Cloud-native rivals aren’t on-prem ready: Snowflake and Databricks have limited to no on-prem AI deployment capabilities today. Their architectures were designed for hyperscaler elasticity, not sovereign infrastructure.

Teradata never left on-prem: It remained the data platform of record for regulated industries like healthcare, financial services, and telco—where cloud migrations remain slow or selective.

AI Factory is turnkey, not toolkit: While competitors may support hybrid integrations, Teradata is offering an integrated hardware-software bundle tailored for private AI—a step ahead in execution for this use case. Teradata is ahead of other on-premise vendors like IBM or HPE or even Intersystems or Vast.

AI Factory gives enterprises a way to scale AI with an out-of-the-box packaged solution—with governance, performance, and control baked in. What stands out is how Teradata integrated model ops, vector search, and LLM pipelines into a turnkey platform providing traceability and compliance from experimentation to production."


Salesforce launches Agentforce 3, Command Center for visibility

Salesforce launched Agentforce 3, which features Agentforce Command Center, support for Model Context Protocol (MCP), and updated Atlas architecture to speed up reasoning and performance.

The rollout maintains Salesforce's Agentforce cadence, which includes updates every few months as the company learns from enterprise use cases. In addition, Salesforce has added more than 30 partners to AgentExchange including AWS, Box, Cisco, Google Cloud, IBM and payments players such as PayPal and Stripe.

Salesforce said the Agentforce 3 updates address a big blocker to implementations--visibility into what agents are doing. Agentforce 3 adds an observability layer and tools to optimize agents. Agentforce launched in September 2024; Agentforce 2 followed in December, and developer features arrived in March.

According to Salesforce, 8,000 customers have signed up to deploy Agentforce, and the Agentforce 3 release is based on feedback from thousands of deployments so far.

Here's a look at the Agentforce 3 updates:

Agentforce Command Center is an observability console that features support for MCP and more than 100 prebuilt industry actions. Command Center is built into Agentforce Studio and includes:

  • Optimization tools to tweak agents based on visibility into interactions, usage trends and recommendations.
  • Live analytics on latency, escalation frequency, error rates and unexpected actions.
  • Dashboards on adoption, feedback, success rates, costs and topic performance.
  • Integration with Data Cloud, third party observability tools and Service Cloud, which will get a purpose-built version of Command Center.

A new Atlas architecture that will improve latency, accuracy, resiliency and support for native LLMs from providers such as Anthropic.

Industry actions from partners that include flexible pricing.

Salesforce didn't provide a time frame on when Agentforce 3 will be generally available.

Constellation Research's take

Martin Schneider, analyst at Constellation Research, said:

"The new Agentforce Command Center is a must-have as we continue to develop hybrid human/agent workforces. But it will be interesting to see how well it can leverage and measure the effectiveness of multi-agent flows that utilize agents from other platforms. Salesforce has made all the right partner announcements around helping its customers manage a multi-platform AI strategy but has not explicitly stated how users can access and leverage other products' agents. Perhaps we will hear more on that during Dreamforce, but just like with humans, digital agents need to leverage data and functions from multiple systems, not just the CRM, to do their jobs. This must be addressed sooner rather than later.

It is also good to see more tools for evaluating the effectiveness of Agentforce agents - while it is easy for almost anyone to build and deploy an agent, measuring the efficacy and value these agents are providing is important. Many customers will need to show value now that the pricing and total cost of using Agentforce is becoming more clear. So, if the agents are not pulling their weight, they may be hard to justify especially as a lot of organizations are not ready to downsize the human labor element just yet."

Holger Mueller, an analyst at Constellation Research, added:

"Salesforce keeps moving on Agentforce with never-before-seen speed, releasing Agentforce 3 quickly after Agentforce at TDX. And Salesforce's lead in agentic AI shows in the vendor tackling V3 challenges - better agent control and agent testing, as well as - a first among vendors - vertical agent automation options. As always, what is new, true agentic-era AI innovation and what is "AI washing" will have to be unpacked in the weeks and months to come."


AWS re:Inforce 2025: GenAI, AI agents and common sense security

Securing today's AI agent use cases isn't an issue, since the key tools and techniques are already in place. The grand vision of cross-platform AI agents that hop across data stores and processes, however, is going to require more work on the standards and plumbing side.

That's the big takeaway from AWS at its annual security conference in Philadelphia. The conference was refreshingly free of the AI agent-washing we're so used to; after all, we're accustomed to AI agent fairy tales from most vendors by now.

Here's a look at AWS re:Inforce 2025 and my key takeaways.

The AWS security story isn't easy to tell

AWS is a company that has multiple security offerings, but doesn't try to make money from them. Security isn't a business for AWS as much as it is a base layer for everything it does.

The company has started to roll up security building blocks into suites and services, but is primarily focused on the AWS environment. That reality means that the storyline of AWS vs. CrowdStrike vs. Palo Alto Networks vs. Zscaler doesn't exist.

AWS revenue for cybersecurity? Finding that number is almost impossible since security is more feature than product at AWS.

It's hard to even play buzzword bingo for AWS. The analysts at AWS re:Inforce 2025 were all trying to walk away with a grand plan to secure agentic AI. What we got was that AWS is confident that AI agent use cases today can be secured with existing identity and access management technologies. Why? AI agent architecture rhymes with microservice architecture, which is already secured at multiple points. And AWS already gives every compute resource an ID anyway.

In the future, standards like Model Context Protocol (MCP) need more work on security, but that multi-system, multi-cloud, multi-process vision of agentic AI is still baking.

Simply put, the cybersecurity narrative we're all used to doesn't quite apply to AWS. Microsoft has a security business and a product-focused view. Google Cloud has Mandiant, security products and a pending Wiz acquisition to grow revenue. Cybersecurity vendors talk agentic AI, platformization and expanding total addressable markets.

AWS' narrative is like this: Security is in the design of everything we do so developers have building blocks to use. In many cases, security is just a feature. We're not trying to make money on security. We can do a better job of making security services easier for customers to consume, but the parts are there or soon will be.

AWS Chief Information Security Officer Amy Herzog said, "you can't just separate genAI from the rest of the conversation." "The playbook is the same as always. What are you trying to accomplish?" said Herzog. "There are definitely technical challenges where we are starting to get ahead of where we might be in a few years. But I think that's a different conversation."

AWS is making its security services more consumable

AWS has a sprawling set of security building blocks, but the news drop from AWS re:Inforce 2025 highlights an emerging theme from the company: It is rolling up its services into suites.

The launches of Security Hub and AWS IAM Access Analyzer, as well as GuardDuty and AWS Shield, are examples of making it easier to use various services in one place. "Security Hub combines signals from across AWS security services and then transforms them into actionable insights, helping you respond at scale," said Herzog.

This packaging of disparate yet useful services across AWS picks up on a theme from AWS re:Invent 2024, where the company unified data, analytics and AI under SageMaker. Amazon QuickSight and Amazon Q Business were also combined for easier use.

Simply put, AWS is keeping small teams to innovate, create new products and run and gun while putting them together for easier consumption too. It's an interesting balancing act.

Securing genAI, AI agents: It's all just security

In many ways, analysts at AWS re:Inforce 2025 were on the hunt for a cybersecurity easy button for agentic AI. AWS didn't take the bait and didn't need to, even though analysts weren't pleased. The reality is that the industry can secure today's AI agent use cases with existing tools, but the cross-industry, multi-vendor, multi-cloud, multi-platform army of autonomous agents carrying out work doesn't have open security standards yet.

Eric Brandwine, VP and Distinguished Engineer at Amazon, said: "There are absolutely interesting novel attacks against LLMs, and some of these have been applied to commercially deployed services. But the vast majority of LLM problems that have been reported are just traditional security problems with LLM products. You've got to get the fundamentals right. You've got to pay attention to traditional deterministic security."

Karen Haberkorn, Director of Product Management for AWS Identity, Directory and Access Services, said initial AI agent use cases can be covered by existing identity and security offerings. "An AI agent is a piece of software that needs to authenticate to act on behalf of a user. We need to understand your permissions, the agent's permissions, and ensure the only interactions are at the intersection," said Haberkorn. "It's a paved path."
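The "intersection" model Haberkorn describes can be sketched in a few lines. This is a conceptual illustration, not an AWS API; the permission strings and principal names are invented for the example.

```python
# Conceptual sketch of the permission-intersection model for AI agents:
# an agent acting on behalf of a user may do only what BOTH are allowed to do.
# Names and permission strings are illustrative, not an AWS API.

USER_PERMS = {"alice": {"read:orders", "write:orders", "read:invoices"}}
AGENT_PERMS = {"order-bot": {"read:orders", "write:orders", "delete:orders"}}

def effective_permissions(user, agent):
    # The agent authenticates as itself but acts for the user, so its
    # effective rights are the set intersection of the two grant sets.
    return USER_PERMS.get(user, set()) & AGENT_PERMS.get(agent, set())

def authorize(user, agent, action):
    return action in effective_permissions(user, agent)

assert authorize("alice", "order-bot", "read:orders")        # both allow it
assert not authorize("alice", "order-bot", "delete:orders")  # agent-only right
assert not authorize("alice", "order-bot", "read:invoices")  # user-only right
```

The design choice matters because it prevents both privilege escalation through the agent (the agent can't exceed the user) and confused-deputy problems (the user can't borrow the agent's extra rights).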

That refrain was heard in multiple presentations. Yes, there's securing AI. And there's using AI for security. But for the most part, it's all security. And specifically, it's data security.

"We're seeing a large interest in conversations and adoption around agents. We're seeing at least 15% or so adoption of agents so far, and we see that number continuing to explode as we evolve, but our vision is to deploy the most trusted and performant agents in the world," said Matt Saner, Senior Manager, Security Specialists at AWS. "We're working backwards from what the customers are telling us they want to use, and that's what we're working to enable for them. Everything we build is integrated and empowered by the underpinnings of our native security services."

Quint Van Deman, Senior Principal, Office of the CISO at AWS Security, said agentic AI is certainly evolving, but all the primitives you'd rely on for security are already in place. "A human is delegating to a service or agent and talking to another service with trusted identity," said Van Deman. "The details are being worked out, but building these things feels very familiar. Agents have identities."

Van Deman said AWS gives an identity to every underlying piece of compute and that could be a way forward to credential agent workflows. Current standards can also be leveraged. "This feels like a new iteration of an old problem and doesn't strike me as net new," he said.

Haberkorn did note that AWS can do better at packaging up security for agent builders "so they don't have to go looking for it."

Where security and agentic AI will become tricky is when there's a constellation of agents in multiple places. There will need to be more standards and guardrails to ensure agents can securely connect and collaborate. Model Context Protocol (MCP) will add security standards, and AWS and other vendors are working on the issue individually. These efforts will have to combine if the autonomous AI agent dream is going to play out.

Haberkorn said there's a lot of plumbing work that must happen to bring identity to cross-platform AI agents. For instance, microservices can only do what the code allows them to do. Agents are more creative and will need guardrails.

"The use cases today are just the beginning of the journey," said Haberkorn. Software development use cases for agents, including Q Developer and Q Transformation, will likely inform future efforts.

"Shift left"

At AWS re:Inforce 2025, the term "shift left" was mentioned dozens of times. The phrase was uttered so much I thought we were in one of those "super" moments when every word ever said would have a "super" in front of it for years.

I found shift left to be annoying after a while, especially since the meaning was kind of vague beyond broad developer-speak. And since re:Inforce was in Philly, I found shift left to be as undefined as "Jawn," which I still don't follow even though I'm a native.

Technically, shift left refers to a principle of integrating security, testing and quality assurance earlier in software development. Often, these practices come in at the end of the development process.
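In practice, shifting left often means running a security check at commit time instead of in a late-stage review. A minimal sketch of the idea follows; the patterns and file handling are illustrative only, and real scanners (gitleaks, for example) ship far richer rule sets than this.

```python
import pathlib
import re

# Illustrative rules only -- a stand-in for a real secret-scanning rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS access-key-id shape
    re.compile(r"(?i)password\s*=\s*['\"].+['\"]"),   # hardcoded password assignment
]

def scan_file(path: pathlib.Path) -> list[str]:
    """Return findings for one source file; an empty list means clean."""
    findings = []
    text = path.read_text(errors="ignore")
    for lineno, line in enumerate(text.splitlines(), 1):
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                findings.append(f"{path}:{lineno}: possible hardcoded secret")
    return findings

def scan(paths: list[str]) -> int:
    """Scan a batch of files; non-zero return blocks the commit in a hook."""
    findings = [f for p in paths for f in scan_file(pathlib.Path(p))]
    for f in findings:
        print(f)
    return 1 if findings else 0
```

Wired into a pre-commit hook over the staged file list, a check like this moves a finding from a penetration-test report to a one-line fix before the code ever merges, which is the whole point of the phrase.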

In the context of developers and security, AWS' penchant for shift left makes sense. The term has appeared in other tech keynotes and GitLab's most recent earnings call. The big question now is whether shift left becomes a cultural reference. I'm super curious to see how this phrase turns out and happy to double click on it later. See what I did there?


Uber AI Solutions expands, targets enterprises

Uber is expanding its AI and data services unit, Uber AI Solutions, as it looks to support labs and enterprises looking to build AI models and deploy agents.

The company is offering the data and AI platform it uses internally to enterprises, in a move that rhymes with what Amazon and Google did with cloud computing: build the expertise and platform for internal use, then turn it into a business.

As previously noted, Uber is more of a data company than one focused on mobility. Uber's expertise is in collecting, labeling, testing and localizing data for its operations and then optimizing interfaces to add value. As generative and agentic AI take hold, these data services matter a lot more.

Uber's core pitch for its platform: "As we’ve scaled Uber to power more than 33 million trips across mobility and delivery every day, we have invested in innovation in product, platform, and artificial intelligence (AI) and machine learning (ML). To enable these, we’ve created a world-class technology platform that is designed to meet our evolving requirements across data labeling, testing, and localization. We’re now making this available."

Here's what Uber AI Solutions is rolling out:

  • Global digital task platform, which connects enterprises to experts in coding, finance, law, science and linguistics. Tasks include annotation, translation and editing for multi-modal content. Think Uber gigs expanded broadly.
  • Uber data foundry, a service that provides packaged and custom datasets including audio, video, image and text to train large language models (LLMs).
  • Infrastructure for AI. Uber said it is making its platforms to manage data annotation projects and validate AI outputs available to enterprises.
  • An interface designed to "become the human intelligence layer for AI development worldwide." According to Uber, the interface will allow enterprises to describe data needs in plain language for setup, tasks, workflow optimization and quality management.

Accenture reshuffles exec deck as Q3 new bookings light

Accenture launched Reinvention Services, a business unit that will bring together its AI assets in one integrated unit. Manish Sharma, Accenture's CEO of the Americas, will become chief services officer.

The launch of a new services unit comes after the company reported better-than-expected fiscal third quarter earnings but showed a decline in bookings.

In addition to Sharma's role change, Accenture said John Walsh, chief operating officer, will become CEO of the Americas. Kate Hogan, current chief operating officer of the Americas, will take on that same role for all of Accenture. Karthik Narain, Group Chief Executive and Chief Technology Officer, is leaving to "pursue other opportunities."

Accenture CEO Julie Sweet said Reinvention Services will be able to move faster, deliver AI-enabled assets and platforms and embed data and AI in services delivery. The new organizational structure launches Sept. 1.

Other executives in the Accenture reshuffle include:

  • Jason Dess, current lead of CFO and enterprise value, will become group chief executive of consulting. Dess succeeds Jack Azagury, who is leaving Accenture.
  • Song will be led by Ndidi Oteh, who is the lead exec for Song in the Americas.
  • Rajendra Prasad, currently Accenture’s chief information and asset engineering officer, will succeed Narain.
  • Kate Clifford, currently chief HR officer of the Americas, will become global chief leadership and HR officer and succeed Angela Beatty, who is also leaving the company.

Accenture reported third quarter earnings of $3.49 a share on revenue of $17.7 billion, up 8% from a year ago. Generative AI new bookings were $1.5 billion. However, new bookings of $19.7 billion were down 6% from a year ago. In the second quarter, Accenture noted customers were becoming more cautious about projects.

As for the outlook, Accenture said it now expects fiscal 2025 revenue growth of 6% to 7% with earnings of $12.77 a share to $12.89 a share. Fourth quarter revenue will be between $17 billion and $17.6 billion.

Sweet said Accenture had 30 clients in the quarter with bookings topping $100 million. Accenture saw solid growth across its core industries with financial services revenue up 13%.

On a conference call with analysts, Sweet said:

  • "We continue to see a significantly elevated level of uncertainty in the global economic and geopolitical environment as compared to calendar year 2024. In every boardroom and every industry, our clients are not facing a single challenge. They are facing everything at once: economic volatility, geopolitical complexity, major shifts in customer behavior."
  • "We have leaders who leave Accenture and pursue other opportunities. Our leaders are in demand, as you might imagine. And we have a deep bench of leaders."
  • "The GenAI demand continues to be very, very strong. And now it's getting big enough that it's going to fluctuate a little bit. But you'll see GenAI is just being more and more embedded into everything we do."

Microsoft advances quantum computing error correction, sees on-premise traction

Microsoft said it has developed quantum computing error-correction codes that can create a 1,000-fold reduction in error rates. The company also said it is landing on-premise interest for its Microsoft Quantum compute platform, a collaboration between Microsoft and Atom Computing.

The company said its four-dimensional geometric codes require fewer physical qubits per logical qubit and can check for errors in a single shot. Error correction is a huge topic in quantum computing, with companies using high-fidelity physical qubits and applying error-correction codes to solve problems.

With Atom Computing, Microsoft created and entangled 24 reliable logical qubits. Microsoft used its qubit-virtualization system combined with Atom Computing's neutral atoms. Matt Zanner, Senior Director of Microsoft Quantum, said Atom Computing's neutral atom approach means it can adjust to error correction advances quickly.

Microsoft said that its family of 4D geometric codes are suitable for qubits with neutral atoms, ion traps and photonics. These 4D geometric codes require fewer physical qubits to make each logical qubit, have fast clock speeds and improve the performance of quantum hardware.

The error-correction codes, available in Microsoft Quantum compute platform, will enable the system to deliver 50 logical qubits in the near term and scale to thousands later.

According to Microsoft, its Microsoft Quantum compute platform will include error correction, cloud high performance computing, AI models and the company's science platform, Microsoft Discovery. The system has hardware, software and access to experts to refine quantum computing use cases.

Constellation ShortList™ Quantum Computing Platforms | Quantum Computing Software Platforms | Quantum Full Stack Players

For its part, Atom Computing is offering the hardware in the Microsoft Quantum compute platform. Atom Computing's approach can scale and work in tight spaces. Zanner also said that error correction codes will be tuned to Atom Computing's hardware.

Zanner said the Microsoft Quantum compute platform is a full stack offering with a Copilot interface and it has been seeing interest for on-premises deployments.

"The interest in Microsoft Quantum compute platform ranges from national quantum programs such as countries or groups of countries that want to be local hubs in region," said Zanner. "Academia is also interested and it's about creating quantum jobs. We're also seeing use cases from individual companies or consortiums to align quantum computing around a specific domain."

Zanner said there is still plenty of interest in quantum computing via the cloud, but he has been surprised by the on-premises approach. "We said we were ready to do a commercial offering and we had a bunch of customer conversations to validate it. And now we're in active conversations with several customers that are interested in pursuing it commercially," said Zanner.

He added that there are perks to having an on-premises quantum computer in that you can do tours with dignitaries and advance collaboration with academics and governments. "There's quantifiable value having a physical demonstration of quantum computing," said Zanner.



What genAI, cognitive debt will mean for enterprises and future workforce

Generative AI has been seen as a boon for productivity, but it may not be making the workforce any smarter. In fact, enterprises may want to start thinking about cognitive debt from AI usage and a thin bench of critical thinkers.

A study (abstract) from a team at MIT looked at 54 participants using OpenAI's ChatGPT for essays. The participants were divided into brain-only users, search engine users and large language model (LLM) users. The study then used electroencephalography (EEG) to assess cognitive load during essay writing and scored the essays.

The punchline:

"Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning."

Apply this to the workforce and there are multiple threads to ponder:

  • This study was focused on students and those folks will become your managers and executives in the future. If you hollow out critical thinking with AI then you'll have a bunch of know-nothings in the future. You may be trading productivity today for dumbasses in the future. 
  • Executives are telling employees to get on the AI bandwagon and leverage new ways to work. What happens if you introduce cognitive debt to employees with strong critical thinking and institutional knowledge?
  • Generative AI (and the AI agents that will follow) is going to hollow out the bench of employees. It's already a tough hiring season for university graduates as AI eliminates entry level jobs. How will those employees develop in the future?
  • Tests used for hiring should be AI-free given the ease of spinning up minimum viable products, essays and code.
  • If you're a worker, know how to leverage AI but don't lean on it too much. Using tools is a balancing act. Think about GPS, which has led to a generation (maybe two generations) that can't read a map. Reading a map old school is still a good brain workout. You may have to go out of your way to exercise your brain just like you do for muscles when you go to a gym.
  • Keep context in mind. AI is no different than smartphones or any other technology. You'll have folks on one side saying the end of society is here. And you'll have optimists telling you a new technology will solve all of your problems. The truth is in the middle.