CEO Shifts, Sports Innovation, AI in Customer Experience | ConstellationTV Episode 94

📺 ConstellationTV episode 94 is here! Co-hosts Holger Mueller and Liz Miller kick things off by analyzing the latest CEO moves in #tech, including #Workday's new hire Rob Ansel and #AWS's new CMO Julia White. 

Next, catch a fascinating discussion with Holger and Jonathan Becher of the San Jose Sharks about the innovative use of #AI technology in professional sports. 

Round out the episode with a CR #CX convo between Liz and Nick Delis of Five9 about organizations moving from the "year of failure" in 2023 to the "year of execution" in 2025 when it comes to AI implementation in customer experience. 

00:00 - Meet the Hosts
01:11 - Enterprise tech news
15:30 - Interview with Jonathan Becher, San Jose Sharks
33:11 - Interview with Nick Delis, Five9
58:31 - Bloopers! 

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. Tune in live at 9:00 a.m. PT / 12:00 p.m. ET every other Wednesday!

On ConstellationTV <iframe width="560" height="315" src="https://www.youtube.com/embed/HYE5gLN6kow?si=QrpAoM8DSsMz3Zxc" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>


AWS CEO Garman Q&A: Model choices, competition and AI's future

AWS CEO Matt Garman at re:Invent 2024 elaborated on the company's strategy to serve up foundational building blocks, Intel's future, model choices, sustainability and why storylines about Trainium competing with Nvidia are misplaced.

Most of Garman's comments were follow-ups on the news out of re:Invent. In a Q&A with analysts, he covered the following points:

Customers leveraging foundation models. Garman said customers will build, buy and fine tune a wide selection of models. Garman said:

"Customers are going to use a wide range of models, and they'll fine tune some of the models in Bedrock, and they will build their own. The answer is evolving. You'll find it easier to build a model from scratch with your own proprietary set of data. There’s going to be a lot of customers who are continuing to do that on SageMaker. We're seeing no amount of slowing down of customers doing that."

Nova models. AWS launched its new Nova models to replace Titan. "Nova will be replacing the Titan model as it's such a leap forward from where we were and wanted a whole new brand around them."

Intel. Garman said that manufacturing in the US is critical and he's hopeful that the company can get to being a leading foundry. Garman said:

"They're incredibly important to the country, and so I think I'm hopeful that they get to a good place. Not sure that I would like to fund them, necessarily, but I think it's super important. I'm hopeful that that Intel can get back to being a leading foundry.

"I think having all of the leading edge foundries in one location in Taiwan is probably not the best for just the global supply chain."

Power and sustainability. Garman said AI will need a lot more electricity and power, but hyperscalers will have to become much more efficient. Garman said:

"The compute is getting much more efficient. Every cycle of compute today is getting more and more efficient. I would love for the computers get as efficient as the human brain. We haven't invented that technology yet. In the meantime, we're planning for the electricity needs for the next decade that we project. It's not just us. There's much more demand for power.

"It's important for us to keep pushing towards carbon zero power. And so, we continue to make really large investments in renewable energy. We're making investments in nuclear. It's just a part of the portfolio of power that we're going to need."

Garman noted that Amazon has commissioned multiple renewable energy projects.

The balance between abstraction and primitives. Garman said AWS will aim to improve compute, storage and database as well as the abstraction layer. Garman said:

"I don't think that it's either or. I think we're absolutely focused on the primitives and improving things like storage and database. We think there's tons of innovation and custom silicon and compute and networking. I think inference is a core building block. We're investing a lot in services that are more abstract and help customers be more efficient in their jobs. We'll do both of those things. And I think there's enough room for innovation across all those different levels."

Cybersecurity approach. Garman was asked why AWS doesn't try to monetize cybersecurity. He said AWS spends billions of dollars on cybersecurity that's hopefully invisible to most customers. "There's a lot of great partners out there and they do a fantastic job," said Garman. "We're happy to partner with them."

Product approach. Garman was asked about AWS' approach between product teams to create different building blocks. He said that approach has driven innovation because teams aren't interdependent on each other. "If you have 50 teams that all have to be in lockstep to deliver something you're going to move slow," said Garman. "Our strategy has been to let those teams invent and move fast. I appreciate that approach introduces some complexity for customers and we're moving to cover that."

He noted that SageMaker Studio is an example of an effort to bring those building blocks together seamlessly. "SageMaker is an elegant example of making it easier to operate with a whole set of tools on common data sets," said Garman. "We can do that because we have all these core components behind the scenes. We will keep innovating."

Simply put, don't expect AWS to ever offer fewer services.

GenAI model choice. Garman said choice is important to AWS, but the company will have "to keep getting better about helping customers choose the right thing."

"We'll have to figure out new ways to help and thinking about model routing, thinking about how you do AB testing and which model is giving customers better outcomes," he said.
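Garman's point about A/B testing models can be illustrated with a toy traffic split that compares average outcome scores. This is only a sketch of the idea, not any AWS API; the model names and quality scores below are made up.

```python
import random

def ab_test(models, score_fn, n_trials=1000, seed=42):
    """Randomly split traffic between candidate models and compare
    average outcome scores (a stand-in for 'better outcomes')."""
    rng = random.Random(seed)
    totals = {name: [0.0, 0] for name in models}  # sum of scores, count
    for _ in range(n_trials):
        name = rng.choice(list(models))
        totals[name][0] += score_fn(name)
        totals[name][1] += 1
    return {name: s / c for name, (s, c) in totals.items()}

# Hypothetical outcome scores: "model-b" performs slightly better on average.
quality = {"model-a": 0.70, "model-b": 0.78}
results = ab_test(["model-a", "model-b"], lambda m: quality[m])
best = max(results, key=results.get)
```

In practice the score would come from user feedback or task success rather than a fixed table, but the routing decision reduces to the same comparison.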

Garman was asked about Amazon's Anthropic investment and he noted that the companies are close partners that learn a lot from each other.

Amazon Q. Garman said that Amazon Q can have a wide reach from a more neutral position. "The real power with Q Business is this Q Index that can index data across your different SaaS providers. AWS has recognition as a trusted source there and bringing data together," he said.

Garman said that Amazon Q can democratize data and analytics for business roles as Q Apps serve as an abstraction layer.

The false narrative of Trainium vs. Nvidia. Garman was asked about competition with Nvidia and he noted that there wasn't any story there. He said:

"It's about more options for customers. If we can lower costs and more inference is done it's not going to be at Nvidia's expense. Nvidia is an incredibly important partner. I think the press wants to make it us vs. them but it's just not true."

Garman also noted that Graviton didn't take workloads from Intel or AMD.

On-prem AI workloads. Garman said the scale of genAI requires the cloud because the systems become so complicated quickly. He said:

"I think the scale required means AI is a cloud workload. You can take these smaller models and run them on premise today, and I do think that there's maybe some interesting things there. If you think about distilling smaller models or running inference at the edge, I do think that that is an interesting idea. But kind of training big models and things like that is a cloud centric thing. It's just not practical."

 


AWS adds capacity sharing, training plans to SageMaker HyperPod, marketplace for Bedrock

Amazon Web Services added capacity sharing and training plans to Amazon SageMaker HyperPod and added Luma AI and Poolside models to Amazon Bedrock's selection of third-party models. AWS also launched Amazon Bedrock Marketplace.

The news was announced at a re:Invent 2024 keynote by Dr. Swami Sivasubramanian, VP of AI and Data at AWS. The announcements build on previous updates to SageMaker and Bedrock.

With Amazon SageMaker HyperPod, AWS added the following:

  • Capacity sharing, allocation and governance tools so enterprises can share large pools of compute, set priorities for teams, projects and tasks, and schedule work based on those priorities.
  • Training plans so customers can optimize genAI models based on hardware, budget, timelines and region constraints. Training plans will automatically move work across availability zones.
  • Recipes for data scientists and engineers to start training and fine-tuning popular foundation models in hours. These recipes are curated for training and ready to use for popular models. These recipes can be set up to swap hardware to optimize performance and lower costs.
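The capacity-sharing idea above can be sketched as a toy priority scheduler over a shared accelerator pool. The team names, GPU counts and priorities are illustrative only, not HyperPod APIs.

```python
from dataclasses import dataclass

@dataclass
class Task:
    team: str
    gpus: int
    priority: int  # lower number = higher priority

def schedule(tasks, pool_size):
    """Grant GPUs to tasks in priority order until the shared pool is
    exhausted; the rest wait for capacity (a toy version of
    priority-based capacity sharing)."""
    granted, waiting, free = [], [], pool_size
    for task in sorted(tasks, key=lambda t: t.priority):
        if task.gpus <= free:
            granted.append(task)
            free -= task.gpus
        else:
            waiting.append(task)
    return granted, waiting, free

tasks = [Task("research", 8, 1), Task("prod-inference", 4, 0), Task("experiments", 16, 2)]
granted, waiting, free = schedule(tasks, pool_size=16)
```

The real service layers governance and quotas on top, but the core trade-off is the same: high-priority work runs first, and lower-priority work queues when the pool is full.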

For Bedrock, AWS added new models from Luma AI, a specialist in creating video clips from text and images, and Poolside, which specializes in models for software engineering. Amazon Bedrock has also expanded models from its current providers such as AI21 Labs, Anthropic, Cohere, Meta, Mistral, Stability.ai and Amazon.

The addition of Amazon Bedrock Marketplace will give customers the ability to try models and balance cost and performance. Amazon Bedrock Marketplace has access to more than 100 emerging and specialized foundation models.

Sivasubramanian said:

"With Bedrock, we are committed to giving you access to the best model for all your use cases. However, while model choice is critical, it's really just the first step when we are building for inference. Developers also spend lot of time evaluating models for their needs, especially factors like cost and latency that require a delicate balance."

Bedrock also was updated with intelligent prompt routing, which will automatically route requests among foundation models in the same family. The aim is to provide high-quality responses with low cost and latency. The routing will be based on the predicted performance of each request. Customers can also provide ground truth data to improve predictions.
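Conceptually, that kind of prompt routing picks the cheapest model in a family whose predicted quality clears a bar. Here is a minimal sketch under invented assumptions: the cost table, the quality predictor and the model names are all made up, whereas the real Bedrock router predicts performance per request.

```python
# Hypothetical per-model cost; the real router learns per-request predictions.
MODELS = [
    {"name": "family-lite", "cost": 1.0},
    {"name": "family-pro",  "cost": 5.0},
]

def predict_quality(model_name, prompt):
    # Stand-in predictor: the larger model is assumed to handle long prompts better.
    base = 0.9 if model_name == "family-pro" else 0.8
    penalty = 0.2 if len(prompt) > 100 and model_name == "family-lite" else 0.0
    return base - penalty

def route(prompt, quality_bar=0.75):
    """Send the request to the cheapest model in the family whose predicted
    quality clears the bar, falling back to the strongest model."""
    for model in sorted(MODELS, key=lambda m: m["cost"]):
        if predict_quality(model["name"], prompt) >= quality_bar:
            return model["name"]
    return MODELS[-1]["name"]

short = route("Summarize this sentence.")
hard = route("x" * 500)  # a long, "harder" prompt goes to the stronger model
```

Ground truth data, as the article notes, would be used to improve the predictor rather than the routing rule itself.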


Amazon Bedrock vs. DIY approaches benchmarked

Amazon Bedrock handily outperforms do-it-yourself approaches for common generative AI use cases as platform-as-a-service simplifies enterprise adoption, according to a Constellation Research report by Holger Mueller.

The report landed as Amazon Web Services outlined Bedrock updates with AI orchestration and revamped SageMaker. The storyline for SageMaker and Bedrock is that they are better together. Bedrock is a serverless genAI platform that makes it easy for enterprises to build out from curated models. SageMaker is a platform that's designed for AI, data and machine learning workflows for more advanced and customized deployments.

Mueller's report looks at four use cases for Bedrock including agent creation and operation, retrieval-augmented generation (RAG), guardrails, and AI workflows. Mueller also outlines enterprise AI challenges and offers best practices.

The upshot is that platform-as-a-service offerings for genAI are going to become critical to enterprise AI adoption. Enterprises are struggling with lack of skills, multiple models, a pressure to pick winners and a breakneck innovation cadence. Enterprises are also finding traditional innovation best practices don't hold up with genAI.

Here are a few takeaways that stick out from Mueller's report on Amazon Bedrock benchmarking.

  • Agent creation with Amazon Bedrock takes 11 hours, compared to a minimum of 123 hours for a DIY approach.
  • RAG with AWS knowledge bases takes 9 to 11 hours on Bedrock, compared to a minimum of 84 hours for DIY approaches.
  • "The results of this report are clear for CxOs: Do not go down the DIY path, but instead use PaaS tools such as Amazon Bedrock to achieve the outcomes your enterprise requires to be a winner in the AI era," said Mueller.
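The time savings implied by the report's figures are easy to check directly:

```python
# Hours from the report: Bedrock vs. minimum DIY effort.
agent_bedrock, agent_diy = 11, 123
rag_bedrock_max, rag_diy = 11, 84

agent_savings = 1 - agent_bedrock / agent_diy    # roughly a 91% reduction
rag_savings = 1 - rag_bedrock_max / rag_diy      # roughly 87%, using the 11-hour upper bound
```

Even taking the DIY minimums and the Bedrock upper bounds, the platform approach cuts effort by nearly an order of magnitude in both use cases.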


Salesforce Q3 results, Q4 outlook mixed as Agentforce optimism abounds

Salesforce posted a mixed third quarter and fourth quarter outlook as revenue was up 8% from a year ago. The company saw revenue growth decelerate sequentially across multiple categories, but executives were bullish on Agentforce prospects.

The company reported third quarter earnings of $1.58 a share on revenue of $9.44 billion. Non-GAAP earnings in the third quarter were $2.41 a share. Salesforce said it took a hit of 17 cents a share due to investment losses.

Wall Street was looking for third quarter non-GAAP earnings of $2.44 a share on revenue of $9.34 billion.

As for the outlook, Salesforce projected fourth quarter sales between $9.9 billion to $10.10 billion compared to estimates of $10.05 billion. Salesforce projected non-GAAP earnings of $2.57 a share to $2.62 a share compared to estimates of $2.65 a share.

For fiscal 2025, Salesforce projected revenue of $37.8 billion to $38 billion.

CEO Marc Benioff said the company is seeing strong interest in its Agentforce effort.

Salesforce saw sales growth deceleration in integration and analytics (MuleSoft and Tableau) with third quarter revenue growth of 5%. Platform and other (think Slack) saw third quarter revenue growth of 8%, down from 10% in the second quarter.


Constellation Research analyst Holger Mueller said:

"Cost for subscription and support is down - $70 million or 4.8%. Not sure if Salesforce let go of support people here -- but it may be an indicator that old on premise instances are more expensive as Salesforce customers have been moving to public cloud. But running all these agents should be a bump up in cost."

On the earnings conference call, Benioff outlined how Agentforce is being deployed within Salesforce. Although Agentforce isn't turning up in Salesforce's remaining performance obligations executives were bullish. Here's a look at some of the key comments:

  • "We're seeing this demand for Agentforce, which just became available on October 24th, and we're already seeing this incredible velocity, more than 200 Agentforce deals just in Q3. It doesn't mean anything because the pipeline is in the thousands for potential transactions that are coming up in future quarters," said Benioff.
  • The company has deployed Agentforce on help.salesforce.com so enterprises can see agents in action.
  • Salesforce is trying to hire about 1,000 to 2,000 more salespeople.
  • "We expect that our own transformation with Agentforce on help.salesforce.com and in many other areas of our company is going to deflect between a quarter and a half of our annual case volume and in optimistic cases, probably much, much more of that," said Benioff.
  • Salesforce is customer zero for Agentforce. "We're deploying Agentforce to engage our prospects on Salesforce.com, answering their questions 24x7 as well as handing them off to our SDR team," said Brian Millham, President of Salesforce. "We'll use our new Agentforce SDR agent to further automate top-of-funnel activities from gathering leads, lead data for providing education and qualifying prospects and booking meetings."
  • Agentforce deals are usually part of Service Cloud offerings. "Service Cloud is our largest cloud and our initial Agentforce opportunity is with our Service Cloud customers right now and we saw a ton of add-ons happening in our customer base with Service Cloud. But what our customers also recognize is that this is a platform," said Millham, who added Sales Cloud, Marketing Cloud and Data Cloud will also see Agentforce add-ons.

AWS unveils next-gen Amazon SageMaker in bid to unify data, analytics, AI

Amazon Web Services outlined its next-generation Amazon SageMaker platform that will combine data, analytics and AI.

The move has multiple components, but in a nutshell AWS is tightly integrating data prep, integration, big data, SQL analytics, machine learning and generative AI. The headliner was SageMaker Lakehouse, which unifies data lakes, data warehouses, databases and enterprise applications and makes them available for queries.

Constellation Research analyst Doug Henschen said the SageMaker effort is notable.

"I was very impressed by AWS's SageMaker announcements at AWS re:Invent 2024. The new, unified SageMaker consolidates all data workloads and puts AI at center, where it belongs today. It builds on Databricks' original, single-platform vision and goes further to consolidate and unify data work and workloads than Microsoft's moves with Fabric and Google Cloud's moves with BigQuery."

Here's a look at what was announced in addition to SageMaker Lakehouse:

  • SageMaker Unified Studio gives enterprises the ability to find and access data and combine it with AWS analytics, machine learning and AI tools. Amazon Q Developer is also integrated.
  • SageMaker Catalog has built-in governance.
  • SageMaker Lakehouse will enable data to be queried in SageMaker Unified Studio or query engines compatible with Apache Iceberg.
  • Zero-ETL integrations with various SaaS applications so data is available in SageMaker Lakehouse and Amazon Redshift without complex data pipelines.
  • SageMaker Unified Studio offers one interface to combine a bevy of AWS services currently in SageMaker.
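The lakehouse idea of one query interface over different underlying stores can be sketched with a toy catalog. The dataset names, stores and rows below are invented for illustration; the real SageMaker Lakehouse resolves data through Iceberg-compatible engines.

```python
# Toy catalog: datasets registered from different underlying stores,
# all queried through a single interface.
CATALOG = {
    "orders": {"store": "data-warehouse",
               "rows": [{"id": 1, "total": 40}, {"id": 2, "total": 60}]},
    "clicks": {"store": "data-lake",
               "rows": [{"id": 1, "page": "home"}]},
}

def query(dataset, predicate=lambda row: True):
    """Resolve a dataset through the catalog and filter its rows,
    regardless of which store actually holds the data."""
    entry = CATALOG[dataset]
    return entry["store"], [row for row in entry["rows"] if predicate(row)]

store, big_orders = query("orders", lambda r: r["total"] > 50)
```

The point of the unification is exactly this: the caller asks for a dataset, not a storage system, and governance sits at the catalog layer.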

AWS CEO Matt Garman said:

"Over the next year, we're going to be adding a ton new capabilities to the new SageMaker--capabilities like AutoML, new low code experiences, specialized AI service integration, stream processing and search and access to more services and data in a single unified UI."

Constellation Research analyst Holger Mueller said:

"It is not long ago when former AWS CEO, now Amazon CEO, would say that large product suites and offerings would slow down innovation and this hurt customers. The upside though is that it is a reduction of complexity for enterprises for data and AI. AWS decided to merge the data and AI services into a single platform, rightfully picking the higher level offering with SageMaker as the new brand. The big news apart from the bundling is the new Lakehouse underpinning the new Amazon SageMaker Studio."


Amazon Q Business gets a story at AWS re:Invent 2024

Amazon Q Developer has had a straightforward story in that it makes software development easier, generates code and now is aimed at legacy infrastructure: .NET migrations, VMware workloads and mainframe transformations. In comparison, Amazon Q Business typically generated blank stares. At AWS re:Invent 2024, that reality may be changing a bit.

Here's how AWS filled out the Amazon Q Business narrative at re:Invent.

  • Amazon Q Business can be directly embedded into applications. Customers can also create a cross-application index that can enhance experiences across applications. Users can use Q embedded to take actions across multiple applications.
  • Q Business can create complex automation workflows from natural language, and operating procedure documents and videos.
  • Amazon Q Business is being combined with QuickSight in a move that'll provide step-by-step instructions to drive decision-making.

AWS CEO Matt Garman said:

"What Q Business does is it connects all your different business systems, your sources of enterprise data, whether those come from AWS, third party apps, and internal sources."

What Q Business really becomes is an index that can serve as an automation base. "The power of Q business is that it creates this index of all of your enterprise data. It indexes data from Adobe, from Atlassian, from Microsoft Office, from SharePoint, from Gmail, from Salesforce, from ServiceNow and more," said Garman.
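The cross-application index Garman describes can be illustrated with a toy inverted index spanning several sources. The documents and source names below are invented, and this is not the Q Business API; it just shows why one index over many applications is powerful.

```python
from collections import defaultdict

# Toy documents from different SaaS sources (names illustrative only).
docs = [
    {"source": "Salesforce", "id": "opp-1", "text": "renewal deal for acme corp"},
    {"source": "SharePoint", "id": "doc-7", "text": "acme onboarding checklist"},
    {"source": "ServiceNow", "id": "inc-3", "text": "outage ticket for billing service"},
]

def build_index(documents):
    """Build a single inverted index across every source, so one query
    can surface matches regardless of which application holds the data."""
    index = defaultdict(list)
    for doc in documents:
        for word in set(doc["text"].split()):
            index[word].append((doc["source"], doc["id"]))
    return index

index = build_index(docs)
hits = sorted(index["acme"])  # matches span two different applications
```

Once that index exists, agents and automations can act on it without caring which system of record a fact came from, which is the automation base the article describes.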

And by combining Q Business with QuickSight, AWS provided a solid analytics hook for enterprises and a customer base. Garman said:

"We're bringing together QuickSight Q and the Q Business data together. We'll use all of that data to show you one view inside of QuickSight making it much more powerful as a BI tool."

Should AWS' Q Business plan work out, the service will be a horizontal enabler of AI agents and workflow automation. Simply put, Q Business has a cleaner story today as an enterprise data index, analytics enabler and automation engine.

Constellation Research analyst Holger Mueller said:

"Amazon's outsized ambition for Q got a little more tangible today, with Amazon explaining how data layers and integration will work for third party applications. AWS has already shown it can capture the data foundation for all data of an enterprise, now it will have to show the merits that Q can unleash with GenAI."


PagerDuty integrates with Amazon Bedrock, Q Business: Will it boost large enterprise traction?

PagerDuty's increased integration with Amazon Web Services, Amazon Bedrock and Q Business is likely to give the company's strategy to target larger enterprises a lift.

At AWS re:Invent 2024, PagerDuty CEO Jennifer Tejada joined AWS CEO Matt Garman on stage to tout the company's new collaboration. PagerDuty Advance will be integrated into Amazon Q Business, Amazon Bedrock and Amazon Bedrock Guardrails.

PagerDuty provides observability and incident management tools. PagerDuty and AWS already have nearly 6,000 joint customers. PagerDuty's Operations Cloud detects and diagnoses disruptive events, coordinates response and streamlines workflows.

The company has turned up during multiple customer presentations at re:Invent. For instance, Goldman Sachs outlined a mainframe migration and had PagerDuty in an architecture slide.

According to AWS and PagerDuty, new integrations include:

  • PagerDuty Advance will be integrated into Amazon Bedrock to provide situational awareness through chat interactions. PagerDuty Advance is the company's genAI offering that features an assistant that leverages the company's data model.
  • PagerDuty Advance also will be embedded into Amazon Bedrock Guardrails to ensure accuracy of query responses from models.
  • In Amazon Q Business, PagerDuty is the first incident management platform to integrate. PagerDuty Advance customers will use one interface via Amazon Q Business plugins. PagerDuty said that early adopters said they saved an average of 30 minutes per incident with the integration.

The AWS integration comes a week after PagerDuty reported solid third quarter results and traction targeting larger enterprises.

PagerDuty reported a third quarter net loss of 7 cents a share on revenue of $118.9 million, up 9% from a year ago. Non-GAAP third quarter earnings were 25 cents a share to top estimates.

The company projected fourth quarter revenue of $118.5 million to $120.5 million, up 7% to 8% from a year ago. For fiscal 2025, PagerDuty is projecting revenue of $464.5 million to $466.5 million, up 8%.

Constellation ShortList™ Incident Management

Speaking on an earnings conference call, PagerDuty CEO Tejada said:

"We were pleased to see stabilization across all segments in the quarter, with retention improving across the board. That said, we remain focused on growth reacceleration and there is room for improvement, particularly on large deal conversions. We had an unusual number of large Q3 opportunities defer, and while they are not lost, these will delay ARR acceleration to FY '26. Nonetheless, we are encouraged by improvements in several key indicators, including dollar-based net retention, multi-product adoption, enterprise contract duration, and total pipeline growth."

Tejada noted that PagerDuty is seeing strength among technology, financial services and telecom customers. The plan for PagerDuty is to land and expand with large enterprises.


AWS launches Amazon Nova foundation models in commoditization play

Amazon Web Services launched Amazon Nova, a series of foundation models available in Bedrock, in a move that aims to provide large language model choice and commoditize the market.

Think of Amazon Nova as the Trainium and Inferentia strategy applied to genAI models. AWS is betting that enterprises will follow the money and opt for Amazon Nova on Trainium with the Bedrock stack. 

The models include Amazon Nova Micro, Nova Lite, Nova Pro and Nova Premier with additional models on deck. While that's the news, it's worth thinking through the big picture of how AWS is approaching models.

AWS' bet is that LLMs will be a commodity that will be mixed and matched depending on the task at hand. Speaking during the AWS re:Invent 2024 keynote, Amazon CEO Andy Jassy said that Nova will be tightly integrated with AWS services to deliver lower latency and better price performance.

Jassy said the company is focused on choice and noted that applications will use multiple models, including the Alexa rebuild. Jassy said:

"We are learning the same lesson over and over and over again, which is that there is never going to be one tool to rule the world. It's not the case in databases. It's not the case in analytics. We were talking about how everybody thought the TensorFlow was going to be the one AI framework. There were a lot of them and Pytorch ended up being the most popular one. The same is going to be true for models. Our internal builders have been asking for all sorts of things from our teams that are building models. They want better latency. They want lower cost. They want the ability to fine tuning. They want the ability to better orchestrate across their different knowledge bases, to be able to ground their data. They want to take lots of automated, orchestrated actions, or what people call agentic behavior. They want a bunch."

Key points:

  • Amazon Nova will soon add speech-to-speech and any-to-any models.
  • Amazon Nova Canvas will focus on image generation and Reels will generate video.
  • Nova models aim to be 75% more cost-effective than comparable models.
  • Nova models are integrated into Bedrock, support fine-tuning and are optimized for agentic AI.

Jassy concluded:

"We always provide you selection everything we do, which is that we are going to give you the broadest and best functionality you can find anywhere. It's going to mean choice. You are going to use different models for different reasons at different times, which is the way the real world works. Human beings don't go to one human being for expertise in every single area. You have different human beings who are great at different things."

Constellation Research analyst Holger Mueller said:

"AWS reverts its position on LLMs and gets in the market with its Nova models. It's a sign that Amazon / AWS realize they need LLM offerings for both in-house and customer use cases. This will temporarily affect its 'Switzerland' of AI position. Its strong appeal for LLM vendors was to partner with Bedrock whilst there was no in-house LLM competition. But AWS knows how to partner."
