
AWS CEO Garman on hardware ambitions, Trainium demand, AI and jobs


AWS CEO Matt Garman said early demand for Trainium 3 is strong, the company's hardware ambitions revolve around providing cloud services, and enterprises are seeing strong returns amid AI bubble talk.

Here's a look at what Garman said at an ask-me-anything meeting with analysts.

Hardware ambitions. Garman said:

"Our focus is about cloud services and hardware in support of that. And so we don't sell hardware. We don't necessarily have plans to, although I wouldn't rule it out ever in the future, but it's not currently what we focus on. We're very focused on building the world's best infrastructure for customers to run on, and what we sell, the services and AI factories, is no different than that."

Garman added that Amazon is obviously building hardware with its Graviton and Trainium custom silicon, but that's in support of services.

SaaS-y efforts. Garman was asked about the success of Amazon Connect and AWS' recent moves to compile services into more of a suite. Garman said:

"When we launched Connect, nothing like that existed, and we thought that we could do a better job. We had a lot of learnings from internal use and I think that's resonated with customers. That's why it's a billion dollar plus business and there will be others like that."

Garman said software development is another space where AWS can offer applications and cited Kiro. Healthcare is another possibility. He added:

"We don't have a concerted plan around SaaS, and we wouldn't go into it just because we want to go into it. It's more that where there's an area where we think we have a differentiated idea that can offer some interesting value to customers, we would always consider it. But it's more around that for us; we love leaning into our partners."

More from re:Invent 2025

Useful life of AI infrastructure. Amazon is on a five-year depreciation schedule, noting others are pushing six years. "For AI infrastructure, we do five years because we think that there may be shorter life there," said Garman. "We have 20 years on our core infrastructure to know roughly how long CPUs last, drives last, network gear, data centers, etc."
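
As a back-of-the-envelope illustration of why the five-versus-six-year assumption matters, straight-line depreciation spreads cost evenly over the useful life. The dollar figures below are hypothetical, not Amazon's:

```python
# Straight-line depreciation: annual expense for a given useful life.
# Illustrative numbers only -- not Amazon's actual figures.
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    return cost / useful_life_years

cost = 10_000_000_000  # $10B of AI servers (hypothetical)
print(annual_depreciation(cost, 5))  # five-year life -> $2.0B/year expense
print(annual_depreciation(cost, 6))  # six-year life -> roughly $1.67B/year
```

A shorter assumed life means a larger annual expense hitting the income statement, which is why the five-versus-six-year choice draws analyst attention.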

Garman said AI infrastructure may be different since it's evolving so fast. As AI infrastructure moves from training to inference, it's unclear what the useful life will be. "We're actually using the same training infrastructure. And the benefits that go into that, whether it's larger models or better bandwidth or other things like that, actually benefit inference as well," said Garman.

Using multiple models, including smaller ones, will also impact the useful life of AI infrastructure. "I think that we're kind of trying to figure it out. I think as you think about a mixture of models, you actually are going to be able to send models to the right size of infrastructure to run it, ultimately, and take advantage of that," he said.
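
The right-sizing idea Garman describes can be sketched as a simple router that sends each request to the cheapest infrastructure tier able to handle it. The tier names, complexity scores and prices below are all made up for illustration:

```python
# Hypothetical sketch of routing requests to right-sized model tiers.
# Tier names, limits and prices are illustrative, not AWS services.
TIERS = [
    {"name": "small", "max_complexity": 3, "cost_per_1k_tokens": 0.0002},
    {"name": "medium", "max_complexity": 7, "cost_per_1k_tokens": 0.002},
    {"name": "large", "max_complexity": 10, "cost_per_1k_tokens": 0.02},
]

def route(complexity: int) -> str:
    """Pick the cheapest tier whose capability covers the request."""
    for tier in TIERS:  # ordered cheapest-first
        if complexity <= tier["max_complexity"]:
            return tier["name"]
    raise ValueError("no tier can handle this request")

print(route(2))  # simple request -> "small"
print(route(9))  # hard request -> "large"
```

The economic point is in the ordering: because the list is scanned cheapest-first, expensive hardware is only tied up by requests that actually need it, which extends the useful life of every tier.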

Building AI agents. Garman said AWS is focusing on offering building blocks as well as applications like AgentCore. Large enterprises want building blocks to build agents. Smaller firms will look for a complete package. "AWS has always been giving small customers the capabilities that only the largest companies used to have," said Garman.

AI bubble? It depends. "I don't expect capex to slow down. We'll keep spending and we'll keep growing. It's a capital intensive business and always has been."

He said if you're a VC funding a zero-revenue business, we may be in an AI bubble. Garman noted that on a CIO panel, multiple executives said they were seeing significantly positive ROI. "I've never met an enterprise that was seeing really good positive ROI investments just decide not to do it," said Garman. "That's my signal of how things are going currently. The industry is still supply constrained by something. Chip capacity, power capacity, laser capacity and things like that."

Developers. Garman was asked if AWS was refocusing on the developer; he said the company is always focused on developers.

Garman said:

"It's always been important. But the focus where we are in the world right now is how much developers are driving some of that innovation. It's an area where I think we can add a ton of value. It's a customer segment that's incredibly important for me and the team."

"We can bring a lot of differentiated value. We think that we can turbocharge what developers can do."

AI and jobs. Garman said, "I don't think AI is replacing jobs, but it is changing them."

Garman added that training will be critical. "We want them to understand how to use AI tools. We want them to figure out how they use AI to code. We want them to figure out how they use AI in their jobs," said Garman. "And because those roles are going to change, we'll continue to iterate on our trainings as well."

AI workloads. Garman was asked whether AWS was getting AI workloads. He said the multi-model approach has paid off. "Most of our customers are building their AI production applications on AWS. They want them integrated where their applications are. They want them integrated where their data is. They want a choice of models. They want a platform to build inference that has enterprise controls and gives them the best price performance," said Garman, who added that AWS will embrace multiple models whether they come from Google, OpenAI or anyone else. "I think we have a really differentiated story for customers on how they customize AI for them."

Multi-cloud. Garman said AI workloads will be inherently multi-cloud. The key will be to offer observability across all of the clouds as well as security and network connectivity.

Trainium. Garman said Trainium 2 was oversubscribed. Trainium 3, which just became generally available, is ramping. "I expect to sell those as fast as we land them as well," he said. "The response to Trainium 3 has been much stronger than Trainium 2."

When asked about AWS custom silicon vs. others, Garman said the company is buying plenty of Nvidia and AMD chips. AWS will follow demand.

On Trainium 4, Garman said AWS' custom silicon will link up with Nvidia's NVLink Fusion and others.

Quantum computing. Garman said:

"I think quantum is going to be a super powerful technology. It's a big investment area for us. Our lab is making progress, and there are many different paths on quantum. I like the path we're taking around error correction, and the team has made some really big advancements over the last year.

But no one has made a useful quantum computer yet. The people who should dig into it right now are researchers. There's not really a good business reason right now."

Leo. Garman said Amazon was bullish on Leo and satellite internet services. "I think Leo is going to unlock a number of new use cases. I think there's a big consumer business as well as a big business opportunity. There are a lot of companies that would love to get a gigabit line in lightly connected areas or out in the field," said Garman.

Robotics and physical AI. Garman said he was excited about physical AI models and robotics, but noted that the models aren't ready just yet. "I think physical AI and agents are going to play a big role and be hugely transformative, but there just hasn't been a prevalence of data," said Garman.

He also noted that it's unclear whether startups will sell the brain of the robot or the robot itself. The market is in its infancy, even though Amazon is one of the largest buyers of robots. "It's early, but it's an area that I'm excited about," said Garman.


AWS doubles down on customizing, fine-tuning AI models, agents


Dr. Swami Sivasubramanian, Vice President of Agentic AI, made the case that AWS' suite of AI tools is best suited for wrangling AI agents and customizing models to deliver business outcomes.

Speaking at his AWS re:Invent 2025 keynote, Sivasubramanian said:

"The question isn't whether you should customize your models, but how quickly can you get started?"

The future to Sivasubramanian is custom, quality models that can carry out enterprise-specific tasks efficiently. "As agents become easier to build, the next big question emerges: how do we make them more efficient? Today's off-the-shelf models have broad intelligence. They can handle complex tool use, multi-step reasoning and unexpected situations, but they aren't always the most efficient," said Sivasubramanian. "And this efficiency is not just about cost. It's about latency. How quickly can your agent respond? It's about scale. Can it handle peak demand? It's about agility. Can you iterate and improve quickly?"

Sivasubramanian said the barrage of announcements from re:Invent 2025 was about removing complexity and costs for model customization without an army of PhDs. He followed up on earlier re:Invent announcements revolving around Amazon AgentCore, AWS Marketplace and multiple other products.


Here's a look at the news items from Sivasubramanian's keynote:

  • Amazon Bedrock is getting new model customization tools that feature reinforcement fine-tuning, which can deliver accuracy gains of 66% over base models. Amazon Bedrock automates reinforcement fine-tuning workflows without needing machine learning expertise. Amazon Nova is the first model offered, with other models coming soon.
  • Amazon SageMaker AI is gaining serverless customization for multiple AI models including Amazon Nova, DeepSeek, GPT-OSS, Llama and Qwen. Amazon SageMaker AI will support reinforcement learning via a simple interface. Models can be customized in days instead of months.
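
Neither announcement includes implementation details, but the core reinforcement fine-tuning loop (generate candidates, grade them with a reward function, reinforce the best) can be sketched in miniature. The grader and data here are toy stand-ins, and real reinforcement fine-tuning updates model weights rather than just selecting outputs:

```python
# Toy sketch of the reinforcement fine-tuning idea: score candidate
# outputs with a reward function and keep the best as training signal.
def reward(answer: str, target: str) -> float:
    """Grader: fraction of target tokens present in the answer."""
    hits = sum(t in answer.split() for t in target.split())
    return hits / len(target.split())

def best_of_n(candidates: list[str], target: str) -> str:
    """Pick the highest-reward candidate to reinforce."""
    return max(candidates, key=lambda c: reward(c, target))

cands = ["the invoice total is 42", "total unknown", "invoice 42"]
print(best_of_n(cands, "invoice total is 42"))  # "the invoice total is 42"
```

The services above automate this kind of loop (plus the actual weight updates) so teams without ML expertise can still steer a model toward task-specific accuracy.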

Data to Decisions Future of Work AWS reInvent aws Chief Information Officer

Be Bold or Be Replaced: AI Agents, Human Courage & the New Enterprise Reality | DisrupTV Ep. 419


This week on DisrupTV, hosts Vala Afshar and R "Ray" Wang sat down with two leaders shaping the future of enterprise AI and leadership: Marty Kihn, SVP of Strategy at Salesforce and author of Agent Force, and Ranjay Gulati, author of How to Be Bold: The Surprising Science of Everyday Courage.

From the rise of agentic AI inside the enterprise to the science behind building courageous organizations, this episode delivered a powerful look at the mindset and technology needed to lead through massive transformation.

Inside Salesforce’s Agent Force: AI Agents at Enterprise Scale

Kihn kicked off the discussion with an inside look at Agent Force, Salesforce’s platform designed to help organizations create, test, deploy, and monitor AI agents safely and at scale.

He emphasized that building enterprise-grade AI requires more than great models—it demands:

  • Safety and trust: personal data masking, toxicity detection, and bias mitigation
  • Data grounding: keeping agents connected to accurate, real-time business context
  • Narrowing scope: focusing agents on specific, well-defined tasks instead of vague ambitions
  • Extensive testing: because LLM-driven agents can be non-deterministic and unpredictable

Kihn framed the future as a hybrid human–agent workforce, where AI augments employees, automates workflows, and gives leaders clearer articulation of what tasks need attention.

He also highlighted the three A’s of the agentic enterprise:

  • Automation — Streamlining repetitive work
  • Augmentation — Boosting human capability
  • Articulation — Making business processes and decisions explicit, observable, and improvable

As organizations move toward this future, Kihn argued, they must invest in protocols and standards that ensure AI agents can communicate with each other and with core enterprise systems. He pointed to emerging frameworks like:

  • Model Context Protocol (MCP)
  • Agent2Agent Protocol (A2A)
  • Advertising Context Protocol (AdCP)

These will shape the future of orchestration across the agentic enterprise.

Building Courage in an AI-Transformed World with Ranjay Gulati

If agentic AI is the technology of the future, courage is the human capability that will define who thrives in it. Ranjay Gulati joined the show to discuss the science behind bold leadership, inspired by his new book, which features a foreword from none other than the Dalai Lama.

Gulati’s message was clear:

  • Courage isn’t the absence of fear—it’s action despite it.

He explored how leaders can cultivate courage at scale inside their organizations by focusing on:

  • Self-efficacy — the belief that you can succeed
  • Deliberate action — small moves that build momentum
  • Purpose — a deeper mission that fuels resilience
  • Generosity — creating a “clan” mindset where people support one another

Gulati argued that courage will be essential as companies navigate accelerating advancements like autonomous vehicles, AI-driven automation, and massive shifts in job roles.

  • “Fear is real,” he said. “But courage emerges when people feel capable, connected, and committed to something bigger than themselves.”

Why Courage and AI Must Evolve Together

A unifying theme from both guests:

  • The future belongs to organizations that blend bold leadership with trustworthy AI.
  • AI will automate and augment work, but humans must bring purpose and judgment.
  • Agents will take over complex workflows, but leaders must set the vision and build trust.
  • Uncertainty will grow—but so will opportunity for those willing to act boldly.

As Ray and Vala wrapped the episode, they emphasized that the path forward requires both technical excellence and human bravery.

Key Takeaways

  • Agent Force is Salesforce’s platform for building and managing enterprise AI agents.
  • Enterprises must focus on safety, data grounding, and narrow scope when deploying AI.
  • Emerging protocols like MCP and A2A will standardize how agents operate at scale.
  • Courage is a learnable skill, driven by purpose, generosity, and action.
  • The future of work will be shaped by hybrid human–AI teams, requiring both innovation and bold leadership.

Final Thoughts: Innovation Starts Within

As enterprises step into the agentic era, one truth is becoming increasingly clear: technology alone won’t determine who succeeds—courage will. AI agents like those built on Salesforce’s Agent Force will accelerate work, reshape roles, and unlock new forms of value. But it’s bold, purpose-driven leaders who will guide teams through uncertainty, build trust in these new systems, and inspire a culture willing to experiment, learn, and evolve.

This episode was a reminder that the organizations ready to blend AI innovation with human bravery will be the ones that define the next decade. Whether you're exploring agentic architectures or building a more courageous workforce, now is the time to lean in.

If you're navigating AI transformation—or preparing your teams for the future—this episode is a must-watch.

Related Episodes

If you found Episode 419 valuable, here are a few others that align in theme or extend similar conversations:

 


CrowdStrike reports strong Q3, 22% revenue growth


CrowdStrike reported strong third quarter results as the company continued to gain security wallet share, with revenue growth of 22%.

The cybersecurity company reported a third quarter net loss of $34 million, or 14 cents a share, on revenue of $1.23 billion, up 22% from a year ago. Non-GAAP earnings were 96 cents a share.

Wall Street was expecting CrowdStrike to report non-GAAP earnings of 94 cents a share on revenue of $1.22 billion.

CrowdStrike CEO George Kurtz said the third quarter "was one of our best quarters in company history" as net new annual recurring revenue was up 73% from a year ago. CrowdStrike is dueling with Palo Alto Networks to convince enterprises to consolidate cybersecurity platforms.

Burt Podbere, CrowdStrike's CFO, said AI-related demand was strong as customers consumed more of the company's Falcon platform and Flex subscription plans. The company said that second half fiscal 2026 net new annual recurring revenue growth will remain north of 50%.

For the fourth quarter, CrowdStrike projected revenue of $1.29 billion to $1.3 billion with non-GAAP earnings of $1.09 to $1.11 a share. For fiscal 2026, CrowdStrike projected revenue of $4.797 billion to $4.807 billion with non-GAAP earnings of $3.70 to $3.72 a share.


AWS' Kiro launches autonomous agents for individual developers


Amazon's Kiro, Amazon Web Services' next-gen AI developer platform, is launching in preview for individual users with an autonomous agent for each developer, integration with GitHub and issue assignments.

Kiro, which recently became generally available, has been expanding tools and features on a two-week cadence since being introduced in July as an agentic AI integrated development environment. Amazon is standardizing its developers on Kiro. 

Speaking during a re:Invent 2025 keynote, CEO Matt Garman said:

"We really see the potential for the entire developer experience, and frankly, the way that software is built, to be completely reimagined. We're taking what's exciting about AI and software development and adding structure to it. This is why we launched Kiro, the agentic development environment for structured AI coding."


With the individual accounts and an autonomous agent that learns how you develop, Kiro is moving closer to its goal of taking developers from prompt to prototype faster. Key points about the Kiro autonomous agent, announced at AWS re:Invent 2025, include:

  • The agent is designed for persistent operation beyond individual coding sessions.
  • This AI-powered companion can manage work across multiple repositories and tools like GitHub and Jira.
  • Kiro’s autonomous agent can research an implementation approach to a new feature for an existing code base.
  • It learns from user preferences, orchestrates sub-agents for specialized tasks, and maintains context throughout projects. Early use cases range from bug triage and multi-repo refactoring to maintenance campaigns—tasks often too routine or time-consuming for human developers.

AWS' Kiro autonomous agent is part of a broader plan to offer a unified software development platform that features one interface covering everything from planning to deployment. Nikhil Swaminathan, Kiro's product lead, said the autonomous agent will be able to spin up sub-agents, complete tasks and judge quality to make the move to production.

Garman said the autonomous agent in Kiro is part of what AWS calls a frontier agent. Frontier agents can carry out longer projects autonomously by operating in the background.

The idea is that the autonomous agent in Kiro will move from tasks to being able to give feedback to a developer. “Just being able to have feedback come through and making it very personalized will be a win,” said Swaminathan. “We’re launching with one agent per developer and will have a private beta. We’ll be expanding more from there.”

"This is AWS' first foray into the autonomous AI world when it comes to its largest user population: developers. Good to see the focus; it is likely to increase developer velocity. Leaders of developers and CxOs are now waiting (desperately) for an SDLC-focused autonomous AI offering from AWS," said Holger Mueller, a Constellation Research analyst.

Ultimately, Kiro will launch autonomous agents for teams as well.

Kiro is also getting Powers, extensions that let developers quickly augment a Kiro agent for workflows including design integration, hosting and data handling. Powers give agents only the tools and context they need. Powers is launching with partners such as Figma and Netlify.

“The concept of a power is to give developers the ability to extend the core agent. Often what happens with frameworks and tools and technologies is that people keep shipping improvements and the model itself is not trained to understand, there's a lot of trial and error with figuring out how to configure the agent rules in the right way to get the output that you need,” said Swaminathan.

These re:Invent 2025 announcements join new Kiro features announced in November alongside general availability. Some of those additions include:

  • Kiro has added Anthropic's Opus 4.5 model.
  • The new version of the Kiro IDE can measure whether code is up to specifications with property-based testing. Kiro will go into a project's specs, extract properties that indicate how a system should work and then test against them.
  • The Kiro agent is available in your terminal. Developers can use the command line interface (CLI) to build features, automate workflows, analyze errors and trace bugs in multiple terminals. Kiro CLI works with the same steering files and Model Context Protocol (MCP) settings that are in the Kiro IDE.
  • Kiro Teams. Kiro is available for teams via the AWS IAM Identity Center with support for other identity providers on deck.
  • A startup program for Kiro Pro+. Startups that have raised funds up to Series B can apply for AWS credits for Kiro until Dec. 31.

AWS adds AI agent policy, evaluation tools to Amazon Bedrock AgentCore


Amazon Web Services is adding AI agent policy and evaluation features to Amazon Bedrock AgentCore in a move that aims to solve the big issues that keep enterprises from moving from proof of concept to production.

At AWS re:Invent 2025, the cloud giant updated Amazon Bedrock AgentCore, which recently became generally available.

Amazon Bedrock AgentCore sits in the tools and infrastructure layer for building AI agents and applications along with Amazon Bedrock, Amazon Nova and Strands Agents. That AI building infrastructure also includes Amazon SageMaker and compute including AWS Trainium and Inferentia.


Customers using Amazon Bedrock AgentCore include the PGA Tour, which deployed an automated content system for player coverage; Salesforce's Heroku, which built an app development agent called Heroku Vibes; and Grupo Elfa, which deployed three agents for price quote processing. AWS CEO Matt Garman also cited Nasdaq and Bristol Myers Squibb as AgentCore customers.

Garman said the addition of policy and evaluation can free up innovation. "Most customers feel that they're blocked from being able to deploy agents to their most valuable, critical use cases today," said Garman.

Mark Roy, Tech Lead, Agentic AI at AWS, said CIOs are racing to deploy AI agents, but want insurance in the form of governance and evaluations to scale them.

Here's a look at the policy and evaluation additions to Amazon Bedrock AgentCore.

Policy in Amazon Bedrock AgentCore is designed to ensure AI agents stay within defined boundaries without slowing them down. The policy system is integrated with AgentCore Gateway to intercept every call before execution.

Amazon Bedrock AgentCore Policy does the following:

  • Gives you control over what agents can access, what actions are performed and under what conditions.
  • Processes thousands of requests per second while maintaining operational speed.
  • Creates policies from natural language and aligns with audit rules without custom code.
  • Lets you define clear policies once and apply them across the enterprise.
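
AWS has not published the Policy API itself; the following is a hypothetical sketch of the interception pattern the bullets describe, where every tool call is checked against declared boundaries before it executes:

```python
# Hypothetical sketch of gateway-style policy enforcement for agent
# tool calls -- not the actual AgentCore Policy API.
from dataclasses import dataclass

@dataclass
class Policy:
    allowed_tools: set
    max_amount: float  # e.g., cap on a refund action

def check(policy: Policy, tool: str, args: dict) -> bool:
    """Intercept a call before execution; deny anything out of bounds."""
    if tool not in policy.allowed_tools:
        return False
    if args.get("amount", 0) > policy.max_amount:
        return False
    return True

policy = Policy(allowed_tools={"lookup_order", "issue_refund"},
                max_amount=100.0)
print(check(policy, "issue_refund", {"amount": 50}))    # True: in bounds
print(check(policy, "issue_refund", {"amount": 5000}))  # False: over cap
print(check(policy, "delete_account", {}))              # False: not allowed
```

The design point is that the check runs in the gateway, outside the agent, so an agent that reasons its way into an unsafe action is still stopped before execution.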

"You need to have visibility into each step of the agent action, and also stop unsafe actions before they happen," explained Vivek Singh, AgentCore Senior Product Manager at AWS. "This includes robust observability, so if something goes wrong, you can pinpoint exactly what steps the agent took and how the agent came to that conclusion. You also need the ability to set some of your business policies in real time."

Amazon Bedrock AgentCore Evaluations adds a set of 13 built-in evaluators to assess AI agent behavior for correctness, helpfulness and safety. The evaluators enable developers to deploy reliable agents with real-time quality monitoring and automated risk assessment.

Enterprises can also create custom evaluators for quality assessments using preferred prompts and models. The evaluations are also integrated into AgentCore Observability via Amazon CloudWatch for unified monitoring.

According to AWS, AgentCore Evaluations monitors real-world behavior of AI agents in production. An LLM is used to judge responses for each metric and then write explanations. Evaluations are on-demand, so developers can validate AI agents before production and then ensure smooth upgrades.

Amazon Bedrock AgentCore Evaluations is available in preview.
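
The LLM-as-judge pattern behind Evaluations can be sketched generically. The `judge` function below is a toy stand-in for a model call, and the metric names merely echo the built-in categories mentioned above; none of this is the actual AgentCore API:

```python
# Generic LLM-as-judge evaluation loop -- a sketch of the pattern,
# not the AgentCore Evaluations API. `judge` stands in for a model call.
def judge(metric: str, response: str) -> dict:
    # A real system would call an LLM here; this is a toy heuristic.
    score = 1.0 if response.strip() else 0.0
    return {"metric": metric, "score": score,
            "explanation": f"{metric}: scored {score} (stub judge)"}

METRICS = ["correctness", "helpfulness", "safety"]

def evaluate(response: str) -> list[dict]:
    """Run every metric over one agent response."""
    return [judge(m, response) for m in METRICS]

results = evaluate("Your order ships tomorrow.")
print([r["score"] for r in results])  # [1.0, 1.0, 1.0]
```

Emitting a written explanation alongside each score is what makes judge-style evaluation auditable: a reviewer can see why a response was flagged, not just that it was.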

With the addition of Episodic Functionality to AgentCore Memory, agents can learn from successes and failures, adapt and build knowledge over time.

Along with the AgentCore updates, AWS also built out Strands Agents, an open-source Python software development kit announced in May. Strands Agents is aimed at building agents in a few lines of code, with native integration with Model Context Protocol servers and AWS services. It's designed for rapid development.

AWS announced the following for Strands Agents:

  • Strands Agents SDK for TypeScript, so developers can choose between TypeScript and Python and run agents in client applications.
  • Strands Agents SDK for edge devices, so developers can run agents using local models.
  • AWS also said it is experimenting with steering tools in Strands Agents to make agents more context aware without front-loading all agent instructions into a single prompt. The idea is to use steering handlers to make agents more flexible while reducing token costs.
  • Strands Agents Evaluations, an evaluation framework to test agent quality, interactions and goal completion.

Constellation Research analyst Holger Mueller said:

"AWS is continuing its systematic build out of AgentCore with the new capabilities announced today. And that is key for CxOs because for advanced AI adopters in 2026 it is going to be the battle of the AI frameworks. Who will enable their enterprise to build AI-powered next-generation applications that help automate and lower costs?"

AWS launches Amazon Nova Forge, Nova 2 Omni


Amazon Web Services launched Amazon Nova Forge, a service that gives enterprises the ability to train their own foundational models.

With Amazon Nova Forge, AWS is looking to deliver real enterprise outcomes. The reality has been that off-the-shelf foundational models are inaccurate in many enterprise use cases. When enterprises build on top of open source models, results can degrade as more data is added.

Speaking at AWS re:Invent 2025, AWS CEO Matt Garman said the company is focused on model selection as well as building out Nova. Enterprises will need multiple models as a starter kit to customize with their own enterprise data.

“Your data is unique. It's what differentiates you from the competition," said Garman. "If your models have more specific knowledge about you and your data and your processes, you can do a lot more."


The goal with Amazon Nova Forge is to get better results with proprietary data in a seamless way. Garman said Nova Forge is aimed at enabling enterprises to add their unique expertise to a foundation model, and added that today's techniques for adding expertise to models can only go so far.

Amazon Nova Forge gives enterprises the ability to do the following:

  • Select a starting model and checkpoint.
  • Mix your data with Amazon curated datasets.
  • Leverage multiple checkpoints to ensure performance.
  • Deploy these custom models on Amazon Bedrock for AI agent deployments.
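
Those four steps could be strung together as a data-blending pipeline. Everything below is illustrative; AWS has not published a Nova Forge SDK in this form, and the blending logic is a made-up stand-in for whatever Nova Forge actually does:

```python
# Hypothetical sketch of the Nova Forge data-mixing step: blend
# proprietary data with a curated set so the model gains enterprise
# expertise without forgetting its base knowledge. Not an AWS SDK.
def blend_datasets(own: list, curated: list, own_weight: float) -> list:
    """Mix data so roughly `own_weight` of the result is proprietary."""
    k = int(len(curated) * own_weight / (1 - own_weight))
    return own[:k] + curated

own = ["acct-specific example"] * 100
curated = ["general example"] * 300
mixed = blend_datasets(own, curated, own_weight=0.25)
print(len(mixed))  # 100 proprietary + 300 curated = 400
```

Keeping the curated set in the mix at every stage is what guards against the catastrophic forgetting Garman alludes to: the model keeps seeing general data alongside the proprietary examples.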

Garman said:

“With Nova Forge, you get exclusive access to a variety of Nova training checkpoints, and then you get the ability to blend in your own proprietary data together with an Amazon curated training data set at every stage of the model training. This allows you to produce a model that deeply understands your information, all without forgetting the core information it has been trained on. We call these resulting models novellas, and then we allow you to easily upload your novella and run it in Bedrock.”

Nova Forge is being used by Reddit as well as Sony. Sony said it was putting Nova Forge in the middle of its agentic AI architecture, which is built on AgentCore.

Nova Forge is part of a broader Amazon Nova rollout. The company, which first introduced Nova models a year ago and has updated them since, launched Amazon Nova 2, including the following versions:

  • Nova 2 Lite, a fast, cost-effective reasoning model.
  • Nova 2 Pro, AWS' most intelligent reasoning model.
  • Nova 2 Sonic, a speech-to-speech foundational model for conversational AI that's embedded into Amazon Connect.
  • Nova 2 Omni, a multimodal reasoning and image generation model.

“Over the last year, we've actually extended the Nova family to support more use cases and deliver more possibilities for you that deliver real value,” said Garman, who added that AWS will continue to add Nova models.

Amazon Bedrock is adding new models from Google, Nvidia, Mistral (including Mistral Large 3 and Ministral 3), Alibaba's Qwen and Amazon.

“We think model choice is so critical. We've never believed that there was going to be one model to rule them all, but rather that there will be a ton of great models out there, and it's why we've continued to rapidly build upon an already wide selection of models,” said Garman. “We have open weights models and proprietary models, general purpose, specialized ones. We have really large ones and small models, and we've nearly doubled the number of models that we offer in Bedrock over the last year.”


AWS launches AI factory service, Trainium 3 with Trainium 4 on deck

Amazon Web Services launched AWS AI Factories, said Trainium 3 was generally available and outlined plans for Trainium 4.

The focus on custom silicon lands as AWS emphasizes that it is still a strong partner to Nvidia, and the company outlined new instances for the latest Nvidia GPUs.

During a keynote at re:Invent 2025, AWS CEO Matt Garman followed a string of big infrastructure announcements, including plans to invest $50 billion in high performance computing and AI data centers for the US government, the launch of Project Rainier for Anthropic and a deal with OpenAI.

Garman said AI infrastructure will require new building blocks and processes to create agents. "AI assistants are starting to give way to AI agents that can perform tasks and automate on your behalf. This is where we're starting to see material business returns from your AI investments," said Garman. "I believe that the advent of AI agents has brought us to an inflection point in AI's trajectory. It's turning from a technical wonder into something that delivers us real value. This change is going to have as much impact on your business as the internet or the cloud."

AWS has added 3.8 gigawatts of capacity in the last year, with Trainium growth of more than 150%, and has increased its network backbone by 50%. “In the last year alone, we've added 3.8 gigawatts of data center capacity, more than anyone in the world. And we have the world's largest private network, which has increased 50% over the last 12 months to now be more than 9 million kilometers of terrestrial and subsea cable,” said Garman during his keynote.

More from re:Invent 2025

While the launch of Trainium 3 was telegraphed on Amazon’s earnings call, Trainium 4 was also previewed. There was also a messaging twist in that AWS noted that Trainium, which was originally launched as an AI model training chip, is also being used heavily for inference.

Garman said AWS has already deployed more than 1 million Trainium processors and is selling them as fast as they can be produced.

Among the details:

  • Trainium 3 and UltraServers will offer the best price-performance for large-scale AI training and inference. Compared to Trainium 2, AWS Trainium 3 and UltraServers will have 4.4x more compute, 3.9x higher memory bandwidth and 3.5x more tokens per megawatt.
  • Garman said AWS has seen big performance gains by installing Trainium 3 in its UltraServers.

  • Trainium 4 will build on Trainium 3 with 6x the performance (fp4), 4x the memory bandwidth and 2x the memory capacity.
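Since Trainium 4's gains are stated relative to Trainium 3, the generational multipliers above can be chained to express everything against a Trainium 2 baseline. A minimal sketch (the normalized baseline of 1.0 is an assumption for illustration, not a published spec):

```python
# Illustrative comparison of Trainium generations using the relative
# multipliers cited above. Values are normalized to Trainium 2 = 1.0;
# they are ratios, not absolute hardware specs.

GENERATIONS = {
    "Trainium 2": {"compute": 1.0, "mem_bw": 1.0, "tokens_per_mw": 1.0},
    # Trainium 3 vs. Trainium 2, per the figures above.
    "Trainium 3": {"compute": 4.4, "mem_bw": 3.9, "tokens_per_mw": 3.5},
}

# Trainium 4 is described relative to Trainium 3: 6x performance (fp4)
# and 4x memory bandwidth. Chain the multipliers to get it vs. Trainium 2.
t3 = GENERATIONS["Trainium 3"]
GENERATIONS["Trainium 4"] = {
    "compute": t3["compute"] * 6,  # 26.4x vs. Trainium 2
    "mem_bw": t3["mem_bw"] * 4,    # 15.6x vs. Trainium 2
}

for name, specs in GENERATIONS.items():
    print(name, {k: round(v, 1) for k, v in specs.items()})
```

The chained numbers are a rough ceiling; real workload gains depend on memory capacity, interconnect and software maturity, none of which the multipliers capture.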

AWS' custom silicon will in part power AWS AI Factories, which also launched at re:Invent. AWS AI Factories are customer-specific AI infrastructure built, scaled and managed by AWS.

The general idea behind AWS AI Factories is that the cloud provider can take the expertise from the projects behind the Anthropic, OpenAI and HUMAIN deals and democratize AI factories for large enterprises and the public sector.

“We're enabling customers to deploy dedicated AI infrastructure for AWS in their own data centers for exclusive use for them,” said Garman. “AWS AI factories operate like a private AWS region, letting customers leverage their own data center space and power capacity that they've already acquired. We also give them access to leading AWS AI infrastructure and services.”

Garman was also careful to keep Nvidia instances front and center. The pitch: AWS is the best place to run Nvidia GPUs for reliability, uptime and availability.

AWS launched P6e instances based on Nvidia's GB200 and GB300 AI accelerators. These instances are an upgrade over P6 instances based on B200 and B300.


ServiceNow acquires Veza, will integrate into AI Control Tower

ServiceNow said it will acquire Veza in a move that will bring identity tools to its security and risk portfolio.

Terms of the deal weren't disclosed.

ServiceNow said in a statement that Veza specializes in identity security and enables enterprises to understand and control who and what has access to data, applications, systems and AI artifacts.

Veza's main technology is its Access Graph, which maps and analyzes access relationships across human, machine and AI identities. The latter part is critical as vendors add identity access technologies for AI agents.
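The core idea of an access graph is modeling identities (human, machine, AI) and resources as nodes, with permissions as edges, so the question "who and what has access?" becomes a graph query. A toy sketch of that idea — the class, names and data are hypothetical, not Veza's actual API or data model:

```python
# Toy access graph: identities -> (permission, resource) edges.
# Purely illustrative; structure and names are hypothetical.
from collections import defaultdict

class AccessGraph:
    def __init__(self):
        # edges[identity] = set of (permission, resource) pairs
        self.edges = defaultdict(set)

    def grant(self, identity, permission, resource):
        self.edges[identity].add((permission, resource))

    def who_can(self, permission, resource):
        """The core question: which identities hold this access?"""
        return sorted(i for i, grants in self.edges.items()
                      if (permission, resource) in grants)

g = AccessGraph()
g.grant("alice@example.com", "read", "customer_db")   # human identity
g.grant("etl-service", "read", "customer_db")         # machine identity
g.grant("support-agent-llm", "read", "customer_db")   # AI agent identity
g.grant("alice@example.com", "write", "customer_db")

print(g.who_can("read", "customer_db"))
# -> ['alice@example.com', 'etl-service', 'support-agent-llm']
```

The point of the AI-agent rows is the one the article makes: once agents hold credentials, they need to show up in the same access inventory as people and service accounts.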

The plan for ServiceNow is to add Veza to its AI Control Tower, which governs and orchestrates AI agents. ServiceNow will also add Veza to its security and risk portfolio including vulnerability response, incident response and integrated risk management. ServiceNow's security and risk applications have more than $1 billion in annual contract value.

Veza, founded in 2020, has more than 150 global enterprise customers.

Constellation Research analyst Holger Mueller said:

"While ServiceNow keeps declaring AI platform readiness, it keeps making key architecture decisions and Veza is no exception. While the acquisition makes sense and maybe also differentiating, CxOs should expect ripple effects across the architecture from a runtime and implementation perspective."


MongoDB Q3 surges on Atlas demand

MongoDB revenue surged in the third quarter courtesy of 30% revenue growth in its Atlas platform.

The company, which recently named CJ Desai as CEO, reported a third quarter net loss of $2 million, or 2 cents a share, on revenue of $628.3 million, up 19% from a year ago. Non-GAAP earnings were $1.32 a share.

Wall Street was expecting MongoDB to report non-GAAP earnings of 79 cents a share on revenue of $593.44 million.

MongoDB said its Atlas revenue was up 30% from a year ago. Atlas now represents 75% of revenue. The company added 2,600 customers in the third quarter and as of Oct. 31 had 62,500 total customers.

Desai said the third quarter was driven by "continued strength in Atlas" and the company "delivered meaningful margin outperformance."

As for the outlook, MongoDB raised its guidance for fiscal 2026 and the fourth quarter. For the fourth quarter, MongoDB said revenue will be between $665 million and $670 million with non-GAAP earnings of $1.44 a share to $1.48 a share.

For fiscal 2026, MongoDB is projecting revenue of $2.434 billion to $2.439 billion with non-GAAP earnings of $4.76 a share to $4.80 a share.
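The reported figures imply a rough Atlas run rate. A quick back-of-envelope check (the derived values are approximations computed from the reported numbers, not company-reported figures):

```python
# Back-of-envelope arithmetic on MongoDB's reported Q3 figures.
# Derived values are approximations, not company-reported numbers.

q3_revenue = 628.3      # $M total, up 19% y/y
atlas_share = 0.75      # Atlas = 75% of revenue
atlas_growth = 0.30     # Atlas revenue up 30% y/y

atlas_revenue = q3_revenue * atlas_share               # ~471.2
atlas_prior_year = atlas_revenue / (1 + atlas_growth)  # ~362.5
total_prior_year = q3_revenue / 1.19                   # ~528.0

print(f"Atlas Q3 revenue: ~${atlas_revenue:.1f}M")
print(f"Atlas a year ago: ~${atlas_prior_year:.1f}M")
print(f"Total a year ago: ~${total_prior_year:.1f}M")
# Non-Atlas revenue (~$157M now vs. ~$166M then) shrank slightly,
# consistent with Atlas driving the quarter.
```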

On a conference call, Desai said:

  • "MongoDB has the potential to become the generational modern data platform of this evolving era, an opportunity that comes once in a lifetime. I am a truly customer-obsessed leader. So during my diligence, I spoke with multiple customers. Across these conversations, the message was clear. MongoDB already powers core, mission-critical workloads for enterprises that are modernizing their technology stack. At the same time, MongoDB is uniquely positioned at the center of the AI platform shift."
  • "There is still significant room to broaden our footprint within the enterprise. A strong example of this expansion opportunity is a major global insurance provider that has adopted MongoDB broadly across its enterprise. The company selected MongoDB Atlas to modernize several mission-critical systems, including its next-generation policy administration platform, analytics rating engine, unstructured data repositories and hundreds of supporting services. Since moving its policy platform to Atlas, the insurer has expanded from just a small set of regions to nationwide and significantly accelerated the rollout of new products and distribution channels."
  • "As AI adoption accelerates, MongoDB's positioned not just to participate in the wave, but to help define it. We are already beginning to see this play out with AI-native customers."
  • "We are also seeing meaningful traction among large enterprises that are starting to build AI applications that have a material impact on their business." 