
Salesforce launches Agentforce 3, Command Center for visibility


Salesforce launched Agentforce 3, which features Agentforce Command Center, support for Model Context Protocol (MCP), and updated Atlas architecture to speed up reasoning and performance.

The rollout maintains Salesforce's Agentforce cadence, which includes updates every few months as the company learns from enterprise use cases. In addition, Salesforce has added more than 30 partners to AgentExchange including AWS, Box, Cisco, Google Cloud, IBM and payments players such as PayPal and Stripe.

Salesforce said the Agentforce 3 updates address a big blocker to implementations: visibility into what agents are doing. Agentforce 3 adds an observability layer and tools to optimize agents. Agentforce launched in late September 2024, and Agentforce 2 followed in December, with developer features added in March.

According to Salesforce, 8,000 customers have signed up to deploy Agentforce. The Agentforce 3 release is based on feedback from thousands of those deployments so far.

Here's a look at the Agentforce 3 updates:

Agentforce Command Center, an observability console that features support for MCP and more than 100 prebuilt industry actions. Command Center is built into Agentforce Studio and includes:

  • Optimization tools to tweak agents based on visibility into interactions, usage trends and recommendations.
  • Live analytics on latency, escalation frequency, error rates and unexpected actions.
  • Dashboards on adoption, feedback, success rates, costs and topic performance.
  • Integration with Data Cloud, third-party observability tools and Service Cloud, which will get a purpose-built version of Command Center.
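As a rough illustration of what live analytics like these entail, here is a minimal sketch of rolling interaction records up into latency, escalation and error metrics. The log schema and function names are hypothetical; Salesforce has not published Command Center internals.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One logged agent interaction (hypothetical schema)."""
    latency_ms: float
    escalated: bool   # handed off to a human
    errored: bool     # agent took an unexpected or failing action

def summarize(log: list[Interaction]) -> dict:
    """Roll a log up into the kind of metrics an observability console surfaces."""
    n = len(log)
    return {
        "interactions": n,
        "avg_latency_ms": sum(i.latency_ms for i in log) / n,
        "escalation_rate": sum(i.escalated for i in log) / n,
        "error_rate": sum(i.errored for i in log) / n,
    }

log = [Interaction(120, False, False),
       Interaction(480, True, False),
       Interaction(200, False, True),
       Interaction(160, False, False)]
print(summarize(log))
```

The real product presumably streams these rollups live and ties them to recommendations; the point here is only that the underlying signals are simple aggregates over interaction logs.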

A new Atlas architecture that will improve latency, accuracy, resiliency and support for native LLMs from providers such as Anthropic.

Industry actions from partners that include flexible pricing.

Salesforce didn't provide a time frame on when Agentforce 3 will be generally available.

Constellation Research's take

Martin Schneider, analyst at Constellation Research, said:

"The new Agentforce Command Center is a must-have as we continue to develop hybrid human/agent workforces. But it will be interesting to see how well it can leverage and measure the effectiveness of multi-agent flows that utilize agents from other platforms. Salesforce has made all the right partner announcements around helping their customers manage a multi-platform AI strategy but has not explicitly stated how users can access and leverage other products' agents. Perhaps we will hear more on that during Dreamforce, but just like with humans - digital agents need to leverage data and functions from multiple systems, not just the CRM, to do their jobs. This must be addressed sooner rather than later.

It is also good to see more tools for evaluating the effectiveness of Agentforce agents - while it is easy for almost anyone to build and deploy an agent, measuring the efficacy and value these agents are providing is important. Many customers will need to show value now that the pricing and total cost of using Agentforce is becoming more clear. So, if the agents are not pulling their weight, they may be hard to justify especially as a lot of organizations are not ready to downsize the human labor element just yet."

Holger Mueller, an analyst at Constellation Research, added:

"Salesforce keeps moving on Agentforce with never-before-seen speed, releasing Agentforce 3 quickly after Agentforce TDX. And the lead of Salesforce in agentic AI shows by the vendor tackling V3 challenges - better agent control and agent testing as well as - a first for all vendors - vertical agent automation options. As always - what is new, true agentic-era AI innovation, and what is "AI washing" will have to be unpacked in the weeks and months to come."


AWS re:Inforce 2025: GenAI, AI agents and common sense security


Security isn't a blocker for today's AI agent use cases since the key tools and techniques are already in place. The grand vision of cross-platform AI agents that hop across data stores and processes, however, is going to require more work on the standards and plumbing side.

That's the big takeaway from AWS at its annual security conference in Philadelphia. The conference was refreshingly free of the AI agent-washing we're so used to; after all, we're accustomed to AI agent fairy tales from most vendors by now.

Here's a look at AWS re:Inforce 2025 and my key takeaways.

The AWS security story isn't easy to tell

AWS is a company that has multiple security offerings, but doesn't try to make money from them. Security isn't a business for AWS as much as it is a base layer for everything it does.

The company has started to roll up security building blocks into suites and services, but is primarily focused on the AWS environment. That reality means that the storyline of AWS vs. CrowdStrike vs. Palo Alto Networks vs. Zscaler doesn't exist.

AWS revenue for cybersecurity? Finding that number is almost impossible since security is more feature than product at AWS.

It's hard to even play buzzword bingo for AWS. The analysts at AWS re:Inforce 2025 were all trying to walk away with a grand plan to secure agentic AI. What we got was that AWS is confident that AI agent use cases today can be secured with existing identity and access management technologies. Why? AI agent architecture rhymes with microservice architecture, which is already secured at multiple points. And AWS already gives every compute resource an ID anyway.

In the future, standards like Model Context Protocol (MCP) need more work on security, but that multi-system, multi-cloud, multi-process vision of agentic AI is still being baked.

Simply put, the cybersecurity narrative we're all used to doesn't quite apply to AWS. Microsoft has a security business and a product-focused view. Google Cloud has Mandiant, security products and a pending Wiz acquisition to grow revenue. Cybersecurity vendors talk agentic AI, platformization and expanding total addressable markets.

AWS' narrative is like this: Security is in the design of everything we do so developers have building blocks to use. In many cases, security is just a feature. We're not trying to make money on security. We can do a better job of making security services easier for customers to consume, but the parts are there or soon will be.

AWS Chief Information Security Officer Amy Herzog said: "You can't just separate genAI from the rest of the conversation. The playbook is the same as always. What are you trying to accomplish? There are definitely technical challenges that we are starting to get ahead of for where we might be in a few years. But I think that's a different conversation."

AWS is making its security services more consumable

AWS has a sprawling set of security building blocks, but the news drop from AWS re:Inforce 2025 highlights an emerging theme from the company: It is rolling up its services into suites.

The launch of Security Hub and AWS IAM Access Analyzer, as well as GuardDuty and AWS Shield, are examples of making it easier to use various services in one place. "Security Hub combines signals from across AWS security services and then transforms them into actionable insights, helping you respond at scale," said Herzog.

This packaging of disparate yet useful services across AWS picks up on a theme from AWS re:Invent 2024, where the company unified data, analytics and AI under SageMaker. Amazon QuickSight and Amazon Q Business were also combined for easier use.

Simply put, AWS is keeping small teams to innovate, create new products and run and gun while putting them together for easier consumption too. It's an interesting balancing act.

Securing genAI, AI agents: It's all just security

In many ways, analysts at AWS re:Inforce 2025 were on the hunt for a cybersecurity easy button for agentic AI. AWS didn't take the bait and didn't need to, even though analysts weren't pleased. The reality is the industry can secure today's AI agent use cases with existing tools, but this cross-industry, multi-vendor, multi-cloud, multi-platform, multi-process army of autonomous agents carrying out work doesn't have open security standards yet.

Eric Brandwine, VP and Distinguished Engineer at Amazon, said: "There are absolutely interesting novel attacks against LLMs, and some of these have been applied to commercially deployed services. But the vast majority of LLM problems that have been reported are just traditional security problems with LLM products. You've got to get the fundamentals right. You've got to pay attention to traditional deterministic security."

Karen Haberkorn, Director of Product Management for AWS Identity, Directory and Access Services, said initial AI agent use cases can be handled with existing identity and security offerings. "An AI agent is a piece of software that needs to authenticate to act on behalf of a user. We need to understand your permissions, the agent's permissions and ensure the only interactions allowed are at the intersection," said Haberkorn. "It's a paved path."
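Haberkorn's "intersection" point is easy to picture in code. This is a hedged sketch of the idea, not AWS's implementation; the permission strings are made up for illustration:

```python
def effective_permissions(user_perms: set[str], agent_perms: set[str]) -> set[str]:
    """An agent acting on behalf of a user may only perform actions
    that both the user and the agent are individually permitted to do."""
    return user_perms & agent_perms

# Hypothetical permission strings.
user = {"read:orders", "write:orders", "read:invoices"}
agent = {"read:orders", "read:invoices", "read:customers"}

print(effective_permissions(user, agent))
# The agent may read orders and invoices; it may not write orders
# (the agent lacks it) or read customers (the user lacks it).
```

This is the same delegation model used for service-to-service calls today, which is why AWS calls it "a paved path."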

That refrain was heard in multiple presentations. Yes, there's securing AI. And there's using AI for security. But for the most part, it's all security. And specifically, it's data security.

"We're seeing a large interest in conversations and adoption around agents. We're seeing at least 15% or so adoption of agents so far, and we see that number continuing to explode as we evolve. Our vision is to deploy the most trusted, most performant agents in the world," said Matt Saner, Senior Manager, Security Specialists at AWS. "We're working backwards from what the customers are telling us they want to use, and that's what we're working to enable for them. Everything we build is integrated and empowered by the underpinnings of our native security services."

Quint Van Deman, Senior Principal, Office of the CISO at AWS Security, said agentic AI is certainly evolving, but all the primitives you'd rely on for security are already in place. "A human is delegating to a service or agent and talking to another service with trusted identity," said Van Deman. "The details are being worked out, but building these things feels very familiar. Agents have identities."

Van Deman said AWS gives an identity to every underlying piece of compute and that could be a way forward to credential agent workflows. Current standards can also be leveraged. "This feels like a new iteration of an old problem and doesn't strike me as net new," he said.

Haberkorn did note that AWS can do better packaging up security for agent builders "so they don't have to go looking for it."

Where security and agentic AI will become tricky is when there's a constellation of agents in multiple places. There will need to be more standards and guardrails to ensure agents can securely connect and collaborate. MCP will need security standards added in, and AWS and other vendors are working on the issue individually. These efforts will have to combine if the autonomous AI agent dream is going to play out.

Haberkorn said there's a lot of plumbing work that must happen to bring identity to cross-platform AI agents. For instance, microservices can only do what the code allows them to do. Agents are more creative and will need guardrails.

"The use cases today are just the beginning of the journey," said Haberkorn. Software development use cases for agents, including Q Developer and Q Transformation, will likely inform future efforts.

"Shift left"

At AWS re:Inforce 2025, the term "shift left" was mentioned dozens of times. The phrase was uttered so much I thought we were in one of those "super" moments, when every word ever said would have a "super" in front of it for years.

I found shift left to be annoying after a while--especially since the meaning was kind of vague beyond broad developer-speak. And since re:Inforce was in Philly I found shift left to be as undefined as "Jawn," which I still don't follow even though I'm a native.

Technically, shift left refers to a principle of integrating security, testing and quality assurance earlier in software development. Often, these practices come in at the end of the development process.
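As a concrete (and purely illustrative) example of shifting a security practice left: instead of waiting for a pre-release audit, a check like this toy secret scanner can run in a pre-commit hook or CI step, failing the build the moment a problem enters the codebase. The patterns below are assumptions for demonstration, not a vetted ruleset.

```python
import re

# Patterns a pre-merge secret scan might flag (illustrative, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),  # hardcoded password
]

def scan(text: str) -> list[str]:
    """Return any secret-looking strings found in a file's contents."""
    return [m.group(0) for pattern in SECRET_PATTERNS
            for m in pattern.finditer(text)]

# Run on changed files in CI or a pre-commit hook; a non-empty result
# blocks the change long before it ships.
snippet = 'db_password = "hunter2"  # TODO remove'
print(scan(snippet))
```

The same principle applies to static analysis, dependency checks and infrastructure-as-code linting: the earlier the check runs, the cheaper the fix.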

In the context of developers and security, AWS' penchant for shift left makes sense. The term has appeared in other tech keynotes and GitLab's most recent earnings call. The big question now is whether shift left becomes a cultural reference. I'm super curious to see how this phrase turns out and happy to double click on it later. See what I did there?


Uber AI Solutions expands, targets enterprises


Uber is expanding its AI and data services unit, Uber AI Solutions, as it aims to support labs and enterprises looking to build AI models and deploy agents.

The company is offering the data and AI platform it uses internally to enterprises, in a move that rhymes with what Amazon and Google did with cloud computing: build the expertise and platform for internal use, then turn it into a business.

As previously noted, Uber is more of a data company than one focused on mobility. Uber's expertise is in collecting, labeling, testing and localizing data for its operations and then optimizing interfaces to add value. As generative and agentic AI take hold, these data services matter a lot more.

Uber's core pitch for its platform: "As we’ve scaled Uber to power more than 33 million trips across mobility and delivery every day, we have invested in innovation in product, platform, and artificial intelligence (AI) and machine learning (ML). To enable these, we’ve created a world-class technology platform that is designed to meet our evolving requirements across data labeling, testing, and localization. We’re now making this available."

Here's what Uber AI Solutions is rolling out:

  • Global digital task platform, which connects enterprises to experts in coding, finance, law, science and linguistics. Tasks include annotation, translation and editing for multi-modal content. Think Uber gigs expanded broadly.
  • Uber data foundry, a service that provides packaged and custom datasets including audio, video, image and text to train large language models (LLMs).
  • Infrastructure for AI. Uber said it is making its platforms to manage data annotation projects and validate AI outputs available to enterprises.
  • An interface designed to "become the human intelligence layer for AI development worldwide." According to Uber, the interface will allow enterprises to describe data needs in plain language for setup, tasks, workflow optimization and quality management.

Accenture reshuffles exec deck as Q3 new bookings light


Accenture launched Reinvention Services, a business unit that will bring together its AI assets in one integrated unit. Manish Sharma, Accenture's CEO of the Americas, will become chief services officer.

The launch of a new services unit comes after the company reported better-than-expected fiscal third quarter earnings but showed a decline in bookings.

In addition to Sharma's role change, Accenture said John Walsh, chief operating officer, will become CEO of the Americas. Kate Hogan, current chief operating officer of the Americas, will take on that same role for all of Accenture. Karthik Narain, Group Chief Executive and Chief Technology Officer, is leaving to "pursue other opportunities."

Accenture CEO Julie Sweet said Reinvention Services will be able to move faster, deliver AI-enabled assets and platforms and embed data and AI in services delivery. The new organizational structure launches Sept. 1.

Other executives in the Accenture reshuffle include:

  • Jason Dess, current lead of CFO and enterprise value, will become group chief executive of consulting. Dess succeeds Jack Azagury, who is leaving Accenture.
  • Song will be led by Ndidi Oteh, who is the lead exec for Song in the Americas.
  • Rajendra Prasad, currently Accenture’s chief information and asset engineering officer, will succeed Narain.
  • Kate Clifford, currently chief HR officer of the Americas, will become global chief leadership and HR officer and succeed Angela Beatty, who is also leaving the company.

Accenture reported third quarter earnings of $3.49 a share on revenue of $17.7 billion, up 8% from a year ago. Generative AI new bookings were $1.5 billion. However, new bookings of $19.7 billion were down 6% from a year ago. In the second quarter, Accenture noted customers were becoming more cautious about projects.

As for the outlook, Accenture said it now expects fiscal 2025 revenue growth of 6% to 7% with earnings of $12.77 a share to $12.89 a share. Fourth quarter revenue will be between $17 billion and $17.6 billion.

Sweet said Accenture had 30 clients in the quarter with bookings topping $100 million. Accenture saw solid growth across its core industries with financial services revenue up 13%.

On a conference call with analysts, Sweet said:

  • "We continue to see a significantly elevated level of uncertainty in the global economic and geopolitical environment as compared to calendar year 2024. In every boardroom and every industry, our clients are not facing a single challenge. They are facing everything at once, economic volatility, geopolitical complexity, major shifts in customer behavior."
  • "We have leaders who leave Accenture and pursue other opportunities. Our leaders are in demand, as you might imagine. And we have a deep bench of leaders."
  • "The GenAI demand continues to be very, very strong. And now it's getting big enough that it's going to fluctuate a little bit. But you'll see GenAI is just being more and more embedded into everything we do."
New C-Suite Data to Decisions Next-Generation Customer Experience accenture Chief Information Officer

Microsoft advances quantum computing error correction, sees on-premise traction


Microsoft said it has developed quantum computing error-correction codes that can create a 1,000-fold reduction in error rates. The company also said it is landing on-premise interest for its Microsoft Quantum compute platform, a collaboration between Microsoft and Atom Computing.

The company said its four-dimensional geometric codes require fewer physical qubits per logical qubit and can check for errors in a single shot. Error correction is a huge topic in quantum computing, and companies are using physical qubits with high fidelities and applying error-correction codes to solve problems.

With Atom Computing, Microsoft created and entangled 24 reliable logical qubits. Microsoft used its qubit-virtualization system combined with Atom Computing's neutral atoms. Matt Zanner, Senior Director of Microsoft Quantum, said Atom Computing's neutral atom approach means it can adjust to error correction advances quickly.

Microsoft said that its family of 4D geometric codes are suitable for qubits with neutral atoms, ion traps and photonics. These 4D geometric codes require fewer physical qubits to make each logical qubit, have fast clock speeds and improve the performance of quantum hardware.
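Microsoft's 4D geometric codes are far more sophisticated than any toy model, but the basic intuition for why encoding logical qubits into physical ones suppresses errors can be sketched with a classical repetition code: a logical failure requires more than half of the physical bits to flip at once. Assuming independent errors at rate p (a simplification; real quantum error models differ):

```python
from math import comb

def logical_error_rate(p: float, d: int) -> float:
    """Toy repetition-code model: a logical error occurs only when more
    than half of the d physical bits flip, each independently at rate p."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range(d // 2 + 1, d + 1))

p = 0.01  # illustrative physical error rate
for d in (1, 3, 5, 7):
    print(d, logical_error_rate(p, d))
```

With p at 1%, growing the code from 1 to 5-7 physical bits already drops the logical rate by roughly three orders of magnitude, which conveys the flavor of a "1,000-fold reduction," though Microsoft's actual codes, qubit counts and error models are very different.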

The error-correction codes, available in Microsoft Quantum compute platform, will enable the system to deliver 50 logical qubits in the near term and scale to thousands later.

According to Microsoft, its Microsoft Quantum compute platform will include error correction, cloud high performance computing, AI models and the company's science platform, Microsoft Discovery. The system has hardware, software and access to experts to refine quantum computing use cases.

Constellation ShortList™ Quantum Computing Platforms | Quantum Computing Software Platforms | Quantum Full Stack Players

For its part, Atom Computing is offering the hardware in the Microsoft Quantum compute platform. Atom Computing's approach can scale and work in tight spaces. Zanner also said that error correction codes will be tuned to Atom Computing's hardware.

Zanner said the Microsoft Quantum compute platform is a full stack offering with a Copilot interface and it has been seeing interest for on-premises deployments.

"The interest in Microsoft Quantum compute platform ranges from national quantum programs such as countries or groups of countries that want to be local hubs in region," said Zanner. "Academia is also interested and it's about creating quantum jobs. We're also seeing use cases from individual companies or consortiums to align quantum computing around a specific domain."

Zanner said there is still plenty of interest in quantum computing via the cloud, but he has been surprised by the on-premises approach. "We said we were ready to do a commercial offering and we had a bunch of customer conversations to validate it. And now we're in active conversations with several customers that are interested in pursuing it commercially," said Zanner.

He added that there are perks to having an on-premises quantum computer in that you can do tours with dignitaries and advance collaboration with academics and governments. "There's quantifiable value in having a physical demonstration of quantum computing," said Zanner.



What genAI, cognitive debt will mean for enterprises and future workforce


Generative AI has been seen as a boon for productivity, but it may not be making the workforce any smarter. In fact, enterprises may want to start thinking about cognitive debt from AI usage and a thin bench of critical thinkers.

A study (abstract) from a team at MIT looked at 54 participants using OpenAI's ChatGPT for essays. The participants were divided into brain-only users, search engine users and large language model (LLM) users. The study then used electroencephalography (EEG) to assess cognitive load during essay writing and scored the essays.

The punchline:

"Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning."

Apply this to the workforce and there are multiple threads to ponder:

  • This study was focused on students and those folks will become your managers and executives in the future. If you hollow out critical thinking with AI then you'll have a bunch of know-nothings in the future. You may be trading productivity today for dumbasses in the future. 
  • Executives are telling employees to get on the AI bandwagon and leverage new ways to work. What happens if you introduce cognitive debt to employees with strong critical thinking and institutional knowledge?
  • Generative AI (and the AI agents that will follow) is going to hollow out the bench of employees. It's already a tough hiring season for university graduates as AI eliminates entry level jobs. How will those employees develop in the future?
  • Tests used for hiring should be AI free given the ease of spinning up minimal viable products, essays and code.
  • If you're a worker, know how to leverage AI but don't lean on it too much. Using tools is a balancing act. Think about GPS, which has led to a generation (maybe two generations) that can't read a map. Reading a map old school is still a good brain workout. You may have to go out of your way to exercise your brain just like you do for muscles when you go to a gym.
  • Keep context in mind. AI is no different than smartphones or any other technology. You'll have folks on one side saying the end of society is here. And you'll have optimists telling you a new technology will solve all of your problems. The truth is in the middle.

Data Lakes, AI Agents, and Enterprise Transformation | CRTV Episode 107


In the latest episode of ConstellationTV, co-host analysts Holger Mueller and Liz Miller kick off by covering #enterprise tech news. Their analysis includes the #agenticAI frameworks race heating up, vendors' competition on #data integration and developer velocity, and #Oracle's capex investments signaling tech transformation.

Next, Liz sits down with Pegasystems' product marketing leader Tara DeZao for a CR #CX Convo at PegaWorld 2025. A few key takeaways from their convo include:

- Marketers learning to partner with #AI, not fear it
- AI as a collaborative tool for content creation
- Focusing on customer journey optimization
- Breaking down organizational silos through intelligent workflows

Finally, Holger interviews Miran Badzak and Edward Calvesbert from IBM about the launch of watsonx.data, a hybrid lakehouse supporting structured and unstructured data. They share how IBM's Db2 introduces vector embedding and similarity search capabilities. Other topics include:

- AI-powered database management tools
- #Quantum computing roadmap taking shape
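The vector similarity search capability discussed in the interview boils down to ranking stored embeddings by closeness to a query embedding. A hand-rolled sketch with made-up 3-dimensional vectors (real embedding models emit hundreds of dimensions, and a database like Db2 would index and execute this natively rather than in application code):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical document embeddings.
docs = {
    "invoice": [0.9, 0.1, 0.0],
    "receipt": [0.8, 0.2, 0.1],
    "poem":    [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]

# Rank documents by similarity to the query, most similar first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)
```

Unstructured content (text, images, audio) is embedded into vectors at ingest, and queries like "find documents similar to this one" become nearest-neighbor lookups of exactly this shape.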

Watch the full episode to learn about the future of enterprise #technology! 
__

00:00 - Introduction
00:45 - Enterprise Tech News
17:17 - CX Convo with Tara DeZao, Pegasystems
28:42 - Interview with Miran Badzak and Edward Calvesbert

On ConstellationTV

OpenAI vs. Microsoft: Why a breakup could be good


OpenAI and Microsoft are going to be rivals as the companies increasingly appear to be tilting away from the frenemy-and-partner model that has paid off so well for both.

A year ago, we riffed on the growing debate about whether OpenAI and Microsoft were symbiotic or becoming frenemies. Based on recent news, it appears the two companies may be blowing right past frenemies to become rivals.

Here's a recap of what has transpired in recent days:

  • The Financial Times reported that Microsoft was prepared to walk away from OpenAI talks if they can’t agree on critical issues. Microsoft has access to OpenAI’s technology until 2030.
  • The Wall Street Journal reported that OpenAI and Microsoft tensions are boiling over. OpenAI wants to lessen Microsoft's distribution power over its AI portfolio and get buy-in on a plan to convert from a nonprofit and go public.
  • OpenAI CEO Sam Altman is chasing superintelligence--much to Mark Zuckerberg and Meta's chagrin--and needs more compute. Reuters reported that OpenAI was even in talks with Google Cloud for capacity. OpenAI already leverages Oracle Cloud Infrastructure and Microsoft Azure, which used to exclusively provide infrastructure to the LLM giant.
  • OpenAI's enterprise business is surging as companies buy direct for LLMs and AI agents, said Altman at Snowflake Summit 2025. OpenAI recently landed a deal with Mattel and has launched OpenAI for Government. Those two efforts were just the latest in a long line of enterprise deals with Lowe's, Booking.com and Wayfair.
  • The body language in a video interview between Microsoft CEO Satya Nadella and Altman at Build was uncomfortable. For its part, Microsoft has been developing its own models to lessen its dependence on OpenAI. It wouldn't be the least bit surprising to see Copilot get a model transplant.

Individually, these headlines don't necessarily mean that OpenAI and Microsoft are veering toward a messy divorce. And even if the breakup is messy, both companies have raked in dough and will pocket billions of dollars. The two companies are the best technology partnership ever.

A few thoughts:

  • In the long run, Microsoft diversifying its models available on its platform is critical. Microsoft Azure AI Foundry has more than 1,900 models, but is still associated with OpenAI.
  • From the OpenAI perspective, a glidepath away from Microsoft makes sense. The companies will compete for enterprises, agentic AI dominance and industry services. And lesser businesses such as search will feature OpenAI vs. Microsoft too.
  • Enterprise buyers will benefit from a breakup too. I use both OpenAI ChatGPT and Microsoft Copilot (as well as Grok, Google Gemini and Anthropic Claude), and the Microsoft-OpenAI partnership reminds me of Samsung and Android. The former puts layers on top of the original and gums up the experience.

Holger Mueller, an analyst at Constellation Research, said:

"Nothing lasts forever, and that applies as well to the special relationship between OpenAI and Microsoft. Apparently, Microsoft doesn't want to spend the capital to run OpenAI exclusively. Sam Altman has been looking for alternative sources to pay for the capacity needed for OpenAI to run its ever more hungry models. And it looks like Oracle is going to get a chunk of that business. In the meantime, Microsoft is betting on its new in-house chip architecture. Time will tell if this was a premature breakup or not."

Data to Decisions Future of Work Next-Generation Customer Experience Innovation & Product-led Growth Tech Optimization Digital Safety, Privacy & Cybersecurity openai Microsoft ML Machine Learning LLMs Agentic AI Generative AI Robotics AI Analytics Automation Quantum Computing Cloud Digital Transformation Disruptive Technology Enterprise IT Enterprise Acceleration Enterprise Software Next Gen Apps IoT Blockchain Leadership VR Chief Information Officer Chief Executive Officer Chief Technology Officer Chief AI Officer Chief Data Officer Chief Analytics Officer Chief Information Security Officer Chief Product Officer

Databricks Data & AI Summit Key Takeaways

Constellation Analysts Holger Mueller and Michael Ni share insights, predictions, and more from Databricks' recent Data & AI Summit.

AWS re:Inforce 2025: Takeaways from the Amazon, AWS CISOs

Amazon Web Services is using its Nova models for tailored use cases, including cybersecurity. Other takeaways from a chat with the Amazon and AWS chief information security officers included the combination of physical security and cybersecurity, and how humans and AI write code differently.

AWS recently launched Amazon Nova Premier, its most capable LLM. AWS launched Nova models last year and has been courting developers.

Eric Brandwine, VP and Distinguished Engineer at Amazon, said:

"We are very proud of the work that we've done with Nova, and we are absolutely using it internally. One of the things that we can do because we have this AI organization, is fine tune the model for different use cases, and so we've been able to come up with Nova variants that are tuned to specific security workloads, and that has shown significant dividends."

Brandwine spoke during an analyst Q&A at AWS re:Inforce 2025, on a panel with AWS CISO Amy Herzog and Amazon CISO CJ Moses. Amazon rotates its security chiefs among units; for instance, Moses and Herzog swapped roles.

Herzog said Nova is an example of Amazon building tools and building blocks. Models are no different. "Choice is so deeply ingrained that it might not be top of mind to talk about one versus the other," said Herzog. "You have a job and there are a bunch of different models that you could choose from for that job. You pick the best one."

Other topics:

Physical security. Moses said physical security falls under him as CISO. "We did it for the reasons of making sure that we have the best visibility across all of those areas," said Moses. "A piece of information about a workplace incident will become the information that we need to stack on to other things to determine where we have a scrambled employee that potentially could become an insider."

Non-obvious data connections may also prove out, such as linking cybersecurity and freight intelligence to an incident in a building. "We actually use the data, because the worst thing you can do is have intelligence and not actually act on it," said Moses. "And the whole idea for us is to make sure we're not siloed with that data, and secondarily, that we're able to act on it."

AWS is AWS' largest security customer. Security is required just to run a cloud. "The amount that you invest in security to secure an online retailer is very different from what you invest to secure a cloud. And so we've got all of these smart, clever people. They're operating with different constraints. They have different creative ideas, and we get to go reap them all and apply them across the company," said Moses.

Security's different lens. Herzog said security is a prerequisite and you can't get carried away with new technologies that may hurt your cybersecurity posture.

Herzog said:

"If developer productivity goes up by this amount and we need to keep pace with it, what does that look like without lowering the security bar? What ideas do you have? Recognize the changes that are happening, but then really keep the outcomes that we want to achieve--protecting our customers at speed and at scale."

Solving problems never ends. Brandwine said that internally AWS talks about the security ratchet. "It always gets tighter," he said. "It's a travesty to spend time solving a problem we've already solved before, or relearning an old lesson. So we have this deep investment in automation, automated reasoning, in using existing techniques and new techniques. We reason about our services. We say this will always be true, and then we make the machine make that always true, so we can spend our time on the new things. When you solve a problem, you're not free. You just go work on the next problem."

Why AWS and Amazon don't talk about security more in public. "If you're bringing things up to customers that they can't act on or do anything about directly themselves, you're essentially fear mongering," said Moses. "We don't believe in unnecessarily worrying our customers, especially when those things are things that are within our control. The industry itself does a good enough job on (fearmongering) that we don't need to add to the flames. We'd rather be the ones that are putting the flames out."

AI security is just security. "You can't just separate genAI from the rest of the conversation," said Herzog. "The playbook is the same as always. What are you trying to accomplish? There are definitely technical challenges that we are starting to get ahead of where we might be in a few years. But I think that's a different conversation."

Brandwine said:

"There are absolutely interesting novel attacks against LLMs, and some of these have been applied to commercially deployed services. But the vast majority of LLM problems that have been reported are just traditional security problems with LLM products. You've got to get the fundamentals right. You've got to pay attention to traditional deterministic security."

Secure code and AI vs. human. Brandwine said Amazon has multiple checks on AI-generated code. One thing to watch is AI and humans write code differently. "We're getting significant success internally, but what we're finding is that the way that the human would write the code is not necessarily the way that the model would write the code. And if you want the model to evolve the code, you might want to structure it a bit differently," said Brandwine.
