IBM, Meta form AI Alliance: What can it accomplish?

IBM and Meta have formed the AI Alliance with 50 founding members aiming to provide "open and transparent innovation" in artificial intelligence. The catch is that many of the big names driving the AI advances so far, including Google, Nvidia and Microsoft, aren't among the founding members.

The concept behind the AI Alliance is notable in that the group wants to bring international players, academics, researchers, governments and developers together to provide an open science and technology ecosystem. The tension around technologies like generative AI revolves around for-profit motives vs. doing what's right for humanity. Security expert Bruce Schneier just penned a long missive about AI and trust worth a read.

IBM and Meta also have a history of contributing to open source as well as initiatives such as the Open Compute Project. However, it remains to be seen what the AI Alliance ultimately accomplishes without some key AI leaders.

Also see: How Generative AI Has Supercharged the Future of Work | Generative AI articles | Why you need a Chief AI Officer | Software development becomes generative AI's flagship use case | Enterprises seeing savings, productivity gains from generative AI | Work in a generative AI world will need critical, creative thinking

The founding members and collaborators include the following: AMD, Anyscale, CERN, Cerebras, Cleveland Clinic, Cornell University, Dartmouth College, Dell Technologies, EPFL, ETH, Hugging Face, Imperial College London, Intel, INSAIT, Linux Foundation, MLCommons, MOC Alliance operated by Boston University and Harvard University, NASA, NSF, Oracle, Partnership on AI, Red Hat, Roadzen, ServiceNow, Sony Group, Stability AI, University of California Berkeley, University of Illinois, University of Notre Dame, The University of Tokyo, Yale University and others.

Constellation Research analyst Andy Thurai was quick to note who was missing. He said:

"Major A list players such as AWS, Databricks, Snowflake, Dataiku, Data Robot, Domino Data Lab, Scale.AI, NVIDIA, Microsoft, OpenAI, Google, Anthropic, Cohere, AI 21 Labs and many others are missing from the list."

Like any alliance, federation or task force by any other name, the jury is out until something actually happens. The AI Alliance indicated that it will be action oriented and focus on "fostering an open community and enabling developers and researchers to accelerate responsible innovation in AI while ensuring scientific rigor, trust, safety, security, diversity and economic competitiveness."

Focus areas for AI Alliance include:

  • Developing benchmarks and evaluation standards for responsible development of AI systems as well as tools for safety, security and trust.
  • Advancing an ecosystem of open foundation models with diversity on many fronts.
  • Creating an AI hardware accelerator ecosystem.
  • Supporting AI skills and research.
  • Developing educational content and resources.

Meta and IBM's experience in open source as well as open hardware could come in handy for kicking off AI Alliance. Simply put, the group is worth watching pending further developments.

Thurai said it's unclear what AI Alliance will ultimately accomplish. He said:

"There are many tools and software components, both open source and commercial, already available focused on AI Alliance initiatives. Depending on what is measured there are standards out there. For example, tools like GAIA, MLPerf, and other tools to measure F1 score, model performance, accuracy, and drift are available. I am not sure what the alliance is offering at this point. There are some promises made which are yet to be seen. Tools like Fairlearn, AI fairness 360, Accenture Fairness Tool, etc. to measure model fairness are already available. Some popular benchmarking tools include DAWNBench, GLUE, and SQuAD."
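To ground one of the metrics Thurai names: the F1 score is simply the harmonic mean of precision and recall. Here's a minimal, self-contained sketch of the calculation; it's illustrative only, and a real evaluation pipeline would typically reach for a library such as scikit-learn.

```python
def f1_score(y_true, y_pred):
    """F1 for binary labels: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: 3 true positives, 1 false positive, 1 false negative
truth = [1, 1, 0, 1, 0, 0, 1, 0]
preds = [1, 0, 0, 1, 0, 1, 1, 0]
score = f1_score(truth, preds)  # precision = recall = 0.75, so F1 = 0.75
```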


New Quantum Computing Road Map Towards Utility | Interview with IBM Partner Heather Higgins

It's safe to say #quantumcomputing is hard. That's why IBM has expanded its latest quantum road map to include a holistic view of scale, quality, and speed, shifting from an era of DISCOVERY to an era of UTILITY...

IBM's newly launched road map offers:

📌 Optimal balance point between scale, quality, and speed, including error mitigation run-time to unlock #business utility.
📌 Acceleration in error-correction techniques, offering a faster path to #modularity and quantum-centric #supercomputing.
📌 Broader accessibility of quantum systems and problems, through abstracting and simplifying techniques.

In an interview with Constellation analyst Holger Mueller, IBM Quantum Partner Heather Higgins advises C-level decision-makers to start preparing now for quantum #technology through a purposeful investment approach and strategy that fits organizational needs. Holger agrees that paying attention to the rapid acceleration of quantum technology could solve key problems for #enterprises.

On CR Conversations: https://www.youtube.com/embed/2OUosUHDNwc

IBM launches latest Heron quantum processor, IBM Quantum System Two

IBM launched its IBM Quantum System Two as well as its next-generation quantum processor as the company touts what's possible today with quantum computing.

At the company's IBM Quantum Summit in New York, Big Blue launched IBM Quantum Heron, a 133-qubit processor designed to deliver higher performance and lower error rates than its predecessor.

Three Heron processors will power the IBM Quantum System Two, which is the company's first modular quantum computer. Modular quantum processors are being pursued by a bevy of industry players as quantum computing is likely to blend with supercomputing at first. Quantum System Two is located in Yorktown Heights, NY.

IBM also outlined its development roadmap to 2033 with a focus on the quality of gate operations, quantum circuits and running at scale.

Constellation Research CEO Ray Wang said:

“Quantum computers need quantum chips and IBM Quantum System Two addresses the need for faster and more precise error mitigation and correction before we achieve mass adoption. The path to the Blue Jay system by 2033 has been one of IBM's most consistent achievements in computing.”

IBM demonstrated how its 127-qubit IBM Quantum Eagle processor can power systems used as scientific tools to solve chemistry, physics and materials problems.

With IBM Quantum Heron's 5x performance boost and improved error rates, IBM plans to build out a fleet of systems. These systems will mostly be available to customers via cloud platforms. In many respects, quantum computing is entering a more accessible era.

Meanwhile, IBM outlined a Quantum Development Roadmap with future processor plans and systems.

IBM also detailed a software stack that will use generative AI and watsonx to make quantum software programming easier. IBM is taking Qiskit 1.0 and adding Qiskit Patterns, which enables developers to create code more easily. Combined with Qiskit Runtime and Quantum Serverless, IBM is looking to create an entire quantum computing stack for multiple scenarios.


MongoDB Atlas Vector Search, Atlas Search Nodes generally available

MongoDB announced the general availability of MongoDB Atlas Vector Search and MongoDB Atlas Search Nodes, the latest in a series of announcements aimed at enabling developers to more easily build generative AI applications.

The general availability of MongoDB Atlas Vector Search and Atlas Search Nodes follows a June launch in which the company set out to enable AI and large language model (LLM) workloads.

Customers using MongoDB Atlas Vector Search and Atlas Search Nodes include AT&T Cybersecurity, Pathfinder Labs and UKG.

Since that launch, MongoDB has had a steady cadence of announcements. For instance, at AWS re:Invent, the company said it would integrate MongoDB Atlas Vector Search with Amazon Bedrock and announced it would optimize Amazon CodeWhisperer suggestions for MongoDB developers.

MongoDB steps up generative AI rollout across platform

Those horizontal services are also being complemented by MongoDB's focus on industry specific use cases for healthcare, public sector, manufacturing and automotive as Atlas ramps.

MongoDB, along with companies such as Databricks and Snowflake, is aiming to help developers and enterprises leverage real-time data to build generative AI applications. The argument is that AI applications can't be built efficiently without strong data strategies. Atlas Vector Search and Atlas Search Nodes are designed to give enterprises the capability to search real-time data efficiently.

Key points about MongoDB's two additions:

MongoDB Atlas Vector Search is an integrated vector database for MongoDB. Developers can use one API to build generative AI applications across AWS, Microsoft Azure and Google Cloud without duplicating and synchronizing data. MongoDB Atlas Vector Search also enables enterprises to use retrieval-augmented generation (RAG) with pre-trained foundation models.

Atlas Vector Search integrates with models from LangChain, Cohere, OpenAI, LlamaIndex, Hugging Face and Nomic.

MongoDB Atlas Search Nodes provide dedicated infrastructure to manage workloads using MongoDB Atlas Vector Search and Atlas Search independent of operational nodes of the database. This workload isolation performs better at scale and optimizes costs.
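As a rough illustration of how a developer might use Atlas Vector Search for the RAG pattern described above, here is a sketch of MongoDB's documented `$vectorSearch` aggregation stage. The index name, field name and vector values are hypothetical, and the pipeline is only constructed rather than executed, since running it would require a live Atlas cluster with a vector search index defined.

```python
def build_vector_search_pipeline(query_vector, limit=5):
    """Return an aggregation pipeline retrieving the `limit` documents
    whose stored embedding is closest to `query_vector`."""
    return [
        {
            "$vectorSearch": {
                "index": "embeddings_idx",    # hypothetical index name
                "path": "embedding",          # field holding the vector
                "queryVector": query_vector,
                "numCandidates": limit * 20,  # wider pool for ANN recall
                "limit": limit,
            }
        },
        # Keep only the text chunk and similarity score for the LLM prompt.
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_vector_search_pipeline([0.12, -0.03, 0.57], limit=3)
# With pymongo this would run as: db.docs.aggregate(pipeline)
```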


AWS, Microsoft Azure, Google Cloud battle about to get chippy

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly.

Hyperscalers are getting chippy as they compete for generative AI workloads. It's not that cloud providers were ever exactly collegial, but as growth hits the law of large numbers, enterprises optimize cloud spending and generative AI threatens market share standings, you can expect sniping between the big vendors.

This chippy reality was made clear during AWS CEO Adam Selipsky's keynote at re:Invent. He made a handful of not-so-veiled references to Microsoft Azure, its ties to OpenAI and its recent foray into custom silicon.

When Selipsky talked about Graviton4, he noted that "other cloud providers have not delivered on their first server processors." With new versions of Trainium and Inferentia processors he said something similar. The snark revved up when Selipsky was talking about model choices and Amazon Bedrock.

"You don’t want a cloud provider that’s beholden primarily to one model provider, you need a real choice. The events of the past 10 days have made that very clear,” said Selipsky.

When talking security, Selipsky cruised by a headline about Microsoft and OpenAI.

Clearly, the OpenAI fiasco made it clear that you may not want to hitch your generative AI wagon to one model. Microsoft's investment in OpenAI gave the company a head start in generative AI and copilots, but that innovation bet almost backfired when OpenAI CEO Sam Altman was booted, hired by Microsoft and then reinstated at OpenAI. You'd be a fool if you were an OpenAI customer and not thinking about diversification.

However, it's also worth noting that Azure has its own plans for model choice and models as a service. Microsoft CEO Satya Nadella has gone out of his way to say that the company's AI plans go beyond OpenAI and there are multiple models available.

For AWS, which has been the cloud leader from the launch of the industry, the competition from Azure must feel odd. Perhaps Selipsky's swipes work out over time. In my experience talking trash can work, but often doesn't. In technology, the only executive I've seen really pull off the chippy vibe is Oracle CTO Larry Ellison. You can argue that Ellison's version of the truth is sometimes stretched, but he has a knack for punching competitors in the head.

Frankly, I'm a bit surprised that Oracle didn't rent out the Sphere during re:Invent. Or maybe Google Cloud just beat Oracle to it.

Where is all this headed? You already know. It's a return to the fear, uncertainty and doubt age. Here are a few techniques you can use to navigate the new sniping between cloud providers:

Think "The Art of War," arguably the best business book ever written (even though it was technically about war). That Art of War lens gives you insights into the Oracle-Azure partnership. I'm pretty sure Microsoft and Oracle are far from chummy, but they have mutual enemies in AWS and Google Cloud. And Oracle Cloud Infrastructure isn't big enough to threaten Azure yet, so the Satya-Larry bromance is in full bloom.

Fact checks. Cloud providers could go full election mode and that requires that you fact check statements from executives.

Remember how you were treated in the "optimization" phase. The last 18 months have seen enterprises optimize their cloud spending plans. Remember the hyperscalers that worked with you to hit budgets and act accordingly. Trust matters.

Diversify and play cloud giants off each other. All of those enterprise software lessons such as waiting until the last minute of a quarter, evaluating rivals and working discounts will apply to cloud negotiations. You may also consider on-premises options too. Bottom line: You should be able to get a discount.

Generative AI has the potential to rewrite the cloud standings and hyperscalers are going to scrap it out accordingly. For the record, here's a look at the hyperscaler growth rates in their most recent quarters:

  • Microsoft Cloud fiscal Q1 revenue was $31.8 billion, up 24% from a year ago. Microsoft Cloud includes Azure and other cloud services, Office 365 Commercial, the commercial portion of LinkedIn, Dynamics 365, and other commercial cloud properties.
  • AWS Q3 revenue was $23.1 billion, up 12% from a year ago.
  • Google Cloud (IaaS and Workspace) Q3 revenue was $8.4 billion, up 22% from a year ago.
  • Oracle Cloud (IaaS and SaaS) Q1 revenue was $4.6 billion, up 30% from a year ago.

Here's the week from the Constellation Research team at re:Invent.


UiPath's bets paying off: Here are the key Q3 takeaways

UiPath is gaining momentum as high profile partnerships, AI and automation and a focus on large enterprises and industries pay off.

The automation platform company reported better-than-expected third quarter sales of $326 million, up 24% from a year ago, and annual recurring revenue of $1.38 billion. UiPath reported a net loss in the third quarter of $31.54 million, or 6 cents a share. Non-GAAP earnings in the quarter were 12 cents a share.

For the fourth quarter, UiPath projected revenue between $381 million and $386 million.

Constellation Research analyst Holger Mueller said:

"UiPath is on strong growth trajectory, fueled by the need of enterprises to streamline processes. The current AI hype has helped UiPath further and that is slowly but steady growing its way to productivity."

Mueller added that if UiPath Co-CEOs Rob Enslin and Daniel Dines can continue growing revenue faster than costs they'll delight investors.

Indeed, UiPath closed a record number of third quarter deals over $1 million in ARR. Customers with $1 million or more in ARR grew 31% to 264, while customers with $100,000 or more in ARR increased to 1,974.

Here's a look at the key takeaways from UiPath's third quarter results.

UiPath landing more C-level conversations. Enslin, an SAP and Google Cloud alum, knows how to sell and he's revamped UiPath's sales approach to focus on value, platform plays and prioritizing "organizations that have a meaningful runway to invest in enterprise automation over the long-term." Partnerships with SAP and Deloitte are also putting UiPath on the CXO radar earlier in the budget cycle.

"Because we are having these conversations in the boardroom to the C-level suite, we are much earlier in the budget cycles than we previously been," said Enslin.

Simply put, much of UiPath's success can be attributed to a better go-to-market ground game. UiPath has a large enterprise installed base that started with robotics process automation (RPA) and is receptive to UiPath's automation platform and other tools such as document understanding, test suite and process mining.

UiPath adds AutoPilot generative AI to its automation platform: Here's what it means | Every vendor wants to be an automation platform | Constellation ShortList™ Robotic Process Automation

UiPath's SAP partnership showing early returns. The SAP-UiPath partnership is still young, but the combination has yielded some benefits. Notably, UiPath is involved in more transformation discussions with SAP and systems integrators. The primary use case highlighted is the automation of testing in big SAP environments.

SAP buys LeanIX, aims to couple it with Signavio, system transformation

Industry-focused use cases. UiPath is landing large healthcare and federal government deals. Enslin walked through multiple customer references--some named and others not. He cited a large non-profit US health system that has garnered more than $250 million in ROI since starting in 2018 with RPA. That health system is now using document understanding, process mining and centralized automation on UiPath.

UiPath's Enslin noted that UiPath has launched industry playbooks and 70 solution accelerators in its marketplace. Government customers cited were Veterans Affairs, Coast Guard, the IRS, Department of Homeland Security and US Department of Agriculture.

Document Understanding a focus area. UiPath's Document Understanding product line revolves around getting RPA to recognize documents, classify them and process them. The idea is that Document Understanding can automate paperwork tasks. UiPath has now launched Intelligent Document Processing (IDP), an AI-powered feature where generative AI can annotate documents, classify them and extract information to answer questions and summarize.

The term document understanding was mentioned 16 times during UiPath's third quarter earnings conference call. For context, RPA received seven mentions and process mining two. There's a reason Document Understanding is getting some play. Dines said:

"Our next-gen IDP permits almost anyone to train Specialized AI models for specific domains and document types, and our internal benchmarking shows that our next-gen IDP experience accelerates model training time by up to 80% from a week to a day for complex scenarios, or down to minutes for simpler forms."

AI and automation are connected and giving UiPath a boost. Enslin said CXOs realize that AI by itself won't generate the returns without automation. He added that in many cases, AI spending is going along with automation conversations.

He said:

"The AI communication that's happening in the market allows us to have meaningful conversations with customers and they see why the platform is relevant. Customers can really look at processes and tasks and understand how to drive value with the automation platform."

Dines added:

"AI plus automation is the thing that drives the biggest outcomes for our customers."


Amazon Q Puts GenAI Inside Redshift, Coming to AWS Glue

It’s a wrap at AWS Re:Invent, but here’s my take on two more data-and-analytics-related announcements from Las Vegas: Amazon Q in Redshift and Amazon Q in AWS Glue.  

Amazon Q Recap

To my mind, Amazon Q was the most broadly compelling and exciting GenAI announcement at AWS Re:Invent 2023. Amazon Q is a multi-purpose AI assistant for businesses that’s designed to develop a company-specific understanding of information including data, text, code, and technology systems in use. At this point there are 40 connectors to enterprise applications and systems including Salesforce, Zendesk, ServiceNow, Office 365, Dropbox and more.

Amazon Q is also designed to deliver a personalized experience to the individual user, limiting access to information based on their role and data-access permissions. The assistant will be available within a growing number of interfaces. Behind the scenes, Q will choose from a variety of GenAI models available in Amazon Bedrock based on the context of where it’s used. Exposed within the AWS Console, for example, Q will help with cloud troubleshooting and best-practice recommendations. Within an integrated development environment (IDE), Q will help developers generate, test and troubleshoot code. Exposed through Amazon QuickSight (one of the first use cases announced this week), Q will support natural language (NL) query and explanations powered by GenAI.
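The context-based model selection described above can be pictured as a routing table mapping the surface Q is invoked from to a model choice. This is purely a hypothetical sketch; Amazon Q's actual routing logic and model identifiers are not public.

```python
# Hypothetical routing table -- surface names and model labels are
# illustrative assumptions, not Amazon Q internals.
ROUTING_TABLE = {
    "console": "troubleshooting-model",  # cloud ops Q&A and best practices
    "ide": "code-generation-model",      # generate, test, troubleshoot code
    "quicksight": "nl-query-model",      # natural-language BI queries
}

def choose_model(surface, default="general-purpose-model"):
    """Pick a model based on where the assistant is being used."""
    return ROUTING_TABLE.get(surface, default)

model = choose_model("ide")  # -> "code-generation-model"
```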


Amazon Q in Redshift and Q in Glue

This brings us to Amazon Q in Redshift, which will be exposed through the Amazon Redshift Query Editor, the data warehouse service’s web-based SQL editor. Users will simply ask NL questions and Amazon Q will generate SQL recommendations, using the appropriate large language model (LLM) from Amazon Bedrock.

According to AWS, Amazon Q will use different techniques, such as prompt engineering and Retrieval Augmented Generation (RAG), to query the model based on context including the database instance, the schema, the user’s query history, and, optionally, the query history of other users connected to the same endpoint. What’s more, Q will remember previous questions and can be used to refine a previously generated query.

According to an AWS blog: “The SQL generation model uses metadata specific to your data schema to generate relevant queries. For example, it uses the table and column names and the relationship between the tables in your database. In addition, your database administrator can authorize the model to use the query history of all users in your AWS account to generate even more relevant SQL statements.”
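The retrieval-augmented approach AWS describes, folding schema metadata and query history into the model's context, can be sketched roughly as follows. Every name and the prompt format here are illustrative assumptions; Amazon Q's actual prompt construction is not public.

```python
def build_sql_prompt(question, schema, query_history):
    """Assemble an NL-to-SQL prompt from the user's question plus
    retrieved context (table schemas and prior queries)."""
    schema_lines = [
        f"TABLE {table} ({', '.join(columns)})"
        for table, columns in schema.items()
    ]
    history_lines = [f"-- previously run: {q}" for q in query_history]
    return "\n".join(
        ["You are a SQL assistant for Amazon Redshift.",
         "Schema:", *schema_lines,
         "Recent queries:", *history_lines,
         f"Question: {question}",
         "Respond with a single SQL statement."]
    )

prompt = build_sql_prompt(
    "Total revenue by region last quarter?",
    {"sales": ["region", "amount", "sold_at"]},  # hypothetical schema
    ["SELECT region, SUM(amount) FROM sales GROUP BY region"],
)
```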

From a security perspective, Q won’t share query histories with other AWS accounts and it won’t train underlying GenAI models with any data coming from customer AWS accounts. Amazon Q in Redshift is in preview in two U.S. regions (East and West).

Also announced at Re:Invent was Amazon Q in AWS Glue, the cloud vendor’s extract, transform, load (ETL) data-integration service. Here, too, GenAI will generate SQL code, but in this case for ETL jobs and pipelines rather than queries. Q will also support troubleshooting and help assistance. This service is “coming soon” and is not yet available in preview.

MyPOV on Amazon Q in Redshift and Glue

Writing queries and developing SQL ETL jobs and pipelines is tedious, time-consuming work. Code generation, whether for SQL, Python or any other language, has already been proven to be a time- and labor-saving use case for GenAI. Competitors are also pursuing this use case, with Google Cloud having announced GenAI in Redshift rival BigQuery via Duet AI with BigQuery and Duet AI in BigQuery Studio, both of which are in preview at this writing. And in the integration space, vendors including Boomi, Informatica, SnapLogic and Software AG have already jumped on the GenAI bandwagon.

There are no charges for Amazon Q in Redshift while it’s in preview, but it’s a fair guess that once this feature is generally available, AWS will pass through compute costs, at a minimum, likely through consumption of Redshift Processing Units (RPUs). To my mind the costs of natural language code generation, testing and troubleshooting, and querying and explanation will be well worth it, but it will be up to organizations to understand the value of time savings and making people that much more productive. The danger is that the bean counters and budget holders will have a knee-jerk reaction when the costs of GenAI start to emerge.

While 2023 will go down as the year of GenAI previews, 2024 promises to be the year that the GenAI bills start to come due. It will take the proverbial business-IT collaboration to develop a clear-eyed understanding of what’s really delivering value.

Related reading:
AWS Expands Zero-ETL Options, Adds AI Recommendations for DataZone
AWS Introduces Two Important Database Upgrades at Re:Invent 2023

Google Sets BigQuery Apart With GenAI, Open Choices, and Cross-Cloud Querying

Dell Technologies delivers mixed Q3, but strong demand for AI-optimized servers

Dell Technologies' infrastructure unit saw strength in AI-optimized servers, but revenue in the third quarter for the data center group was down 12% from a year ago.

The results highlight the moving parts for data center vendors. AI-optimized gear is selling well as traditional storage and server systems lag. HPE saw similar issues in its quarter.

Overall, Dell Technologies reported third quarter earnings of $1 billion, or $1.36 a share, on revenue of $22.3 billion, down 10% from a year ago. Non-GAAP earnings were $1.88 a share.

Wall Street was expecting Dell Technologies to report third quarter earnings of $1.45 a share on revenue of $22.9 billion.

In a sign of its revenue model transformation, Dell said it had remaining performance obligations of $39 billion. Recurring revenue in the quarter was up 4% from a year ago and deferred revenue was up 7%.

Infrastructure group revenue in the third quarter was $8.5 billion, down 12% from a year ago. Server and networking revenue was $4.7 billion and storage revenue was $3.8 billion.

According to the company, AI optimized servers accounted for 33% of total server orders revenue. Demand was driven by AI focused cloud providers and companies in key verticals. AI optimized server backlog nearly doubled in the third quarter from the second quarter.

Dell's PC business had third quarter revenue of $12.3 billion, down 11% from a year ago. The company said an aging installed PC base and AI-enabled systems from Intel, AMD and Windows on ARM should drive a refresh cycle.

Jeff Clarke, chief operating officer of Dell Technologies, said the company expects fiscal 2025 to show revenue growth as systems are upgraded for generative AI use cases.

Clarke said in prepared remarks:

"The demand environment for traditional servers improved over the course of the quarter, and demand for AI servers continues to be strong across a wider range of customers. Demand for storage was down, as expected."

He added that Dell Technologies was seeing sequential growth in AI optimized services. Clarke said the company "began to convert more PowerEdge XE9680 backlog into revenue."

"For the quarter, we shipped over half a billion dollars of AI optimized servers, including our XE9680, XE9640, XE8640 and R750 & 760xa servers," said Clarke, who added that "demand remains well ahead of supply."

Clarke was optimistic about AI driving demand for Dell systems.

"AI continues to dominate the technology and business conversation. Customers across the globe are turning their operations upside down to see how they can use generative AI to advance their businesses in meaningful ways. These AI initiatives are being driven at the CEO and board levels. And as a result, we are at the front of a significant TAM expansion."


Amazon's Vogels says 'cost awareness is a lost art' as AWS launches optimization tools

"Cost awareness is a lost art. We need to regain that art," said Amazon CTO Dr. Werner Vogels, speaking at Amazon Web Services' re:Invent conference. AWS also launched a new tool to manage application resources within the AWS Management Console as well as CloudWatch Application Signals to automatically instrument applications.

Vogels' talk revolved around cost optimization of compute resources and aligning business and technology decisions. The keynote, which drew on multiple lessons learned from building AWS services, focused on being a frugal systems architect. By focusing on costs and continual improvement, Vogels argued, enterprises can be more sustainable, evolve more quickly and pay down technical debt.

The subtext of the talk also seems to indicate that AWS is leaning into its strengths of being a low-cost cloud provider. Enterprises have been optimizing cloud spending, consolidating vendors and focusing on efficiency. This optimization of cloud spending does not appear to be a passing fad.

As Vogels spoke, AWS launched Amazon CloudWatch Application Signals in preview. The service automatically instruments applications to monitor latency, availability and performance. Application Signals aggregates metrics, traces, logs and real-user monitoring data into unified telemetry.

In addition, AWS announced the general availability of myApplications, which is part of the AWS Management Console. MyApplications provides a summary that analyzes costs and resource usage across AWS.

Vogels argued that enterprises should be able to track costs throughout applications, web pages and data flows. "In AWS, each of the resources that you've been using comes with a total associated cost of every single one of the services. We know the cost of the whole system," said Vogels.
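To make the per-resource cost idea concrete, here is a minimal Python sketch that totals spend per service from a response shaped like AWS Cost Explorer's GetCostAndUsage output. The function name and the dollar figures below are illustrative assumptions, not from the keynote or a real account:

```python
# Hypothetical sketch: summing per-service spend from a Cost Explorer-style
# GetCostAndUsage response. The response shape mirrors the real AWS API;
# the sample numbers are made up for illustration.

def cost_by_service(response):
    """Aggregate UnblendedCost per service across all result periods."""
    totals = {}
    for period in response["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            totals[service] = totals.get(service, 0.0) + amount
    return totals

# Illustrative stand-in for what boto3.client("ce").get_cost_and_usage(...)
# returns when grouped by the SERVICE dimension:
mock_response = {
    "ResultsByTime": [
        {"Groups": [
            {"Keys": ["Amazon EC2"],
             "Metrics": {"UnblendedCost": {"Amount": "120.50", "Unit": "USD"}}},
            {"Keys": ["Amazon S3"],
             "Metrics": {"UnblendedCost": {"Amount": "14.25", "Unit": "USD"}}},
        ]},
        {"Groups": [
            {"Keys": ["Amazon EC2"],
             "Metrics": {"UnblendedCost": {"Amount": "98.00", "Unit": "USD"}}},
        ]},
    ]
}

print(cost_by_service(mock_response))
```

In practice the response would come from boto3's Cost Explorer client (get_cost_and_usage with GroupBy on the SERVICE dimension) rather than a hand-built dict.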

Other key cost points from Vogels' talk:

  • Cost is a close proxy for sustainability. "We pay for each individual resource we're using, and this is a pretty good approximation to the resources that you've used," said Vogels, who added that builders need to architect systems for sustainability. "We want to be frugal in a way that these resources are sustainable."
  • Cost is a non-functional requirement. Vogels said building systems boils down to design, measure, and optimize. "The most important thing here is that cost needs to be a non-functional requirement. If you think about non-functional requirements, you know, there's all these sorts of classical ones: security, compliance, performance, reliability. But the ones that you have to keep in mind at all times. Sustainability should be another one."

  • Align business and technology decisions as well as costs and revenue models. "Make sure that your business issues and the technology decisions are in harmony with themselves," said Vogels. If business and technology are aligned, you should be able to grow revenue while optimizing costs.
  • Pay off technical debt. To design systems that can evolve, you will need to pay off technical debt. Like economic debt, the interest compounds and at some point crushes you, said Vogels. "You and I are creating technical and economic debt," added Vogels, who said enterprises have to continually retire tech debt.
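Vogels' compounding analogy can be illustrated with a toy calculation. The 25% rate, eight periods and payment figure are arbitrary assumptions chosen only to show the shape of the curve, not anything from the keynote:

```python
# Toy illustration: technical debt that compounds unattended versus
# debt paid down steadily each period. All numbers are arbitrary.

def compound(debt, rate, periods, payment=0.0):
    """Grow debt by `rate` each period, minus any fixed paydown."""
    for _ in range(periods):
        debt = max(0.0, debt * (1 + rate) - payment)
    return debt

untended = compound(100.0, 0.25, 8)        # ignored debt balloons
serviced = compound(100.0, 0.25, 8, 30.0)  # steady paydown nearly retires it
print(round(untended, 2), round(serviced, 2))
```

The untended balance grows nearly sixfold over eight periods, while the serviced one is almost gone, which is the point of continually retiring tech debt rather than deferring it.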

Vogels argued that enterprises will need to build cost-aware architectures that are continually optimized.

Vogels also talked a bit about developer productivity in the age of generative AI as well as SageMaker features. He said there's a new age of developer productivity ahead, which also plays into cost optimization, and elaborated in a blog post.

Constellation Research analyst Holger Mueller said:

"Long gone are the re:Invents when the Dr. Werner Vogels keynote had half of the announcements. The Amazon CTO has pivoted to developer and architecture best practices. Last year's key theme was all about moving to an event driven architecture (EDA) -- most roadmap items did not get delivered due to the focus on GenAI -- so this year was all about a focus on cost consciousness. At the same time it was up to Vogels to show where AWS will embed Q to help developers become more productive. Vogels balances the inherent fear of developers over AI replacing them with pointing out to the upside. Three product announcements made it to his keynote. Not bad for the cloud veteran, who is a developer favorite and maybe even an idol." 

Here's the week from the Constellation Research team at re:Invent.

 


Humanizing AI Commerce | Impact TV Episode 4

📺 Watch the latest episode of Impact TV: Humanizing AI Commerce

Co-hosts R "Ray" Wang, founder of Constellation Research, and Teresa Barreira, CMO of Publicis Sapient, explore the powerful synergies between #technology and the human touch, and how this fusion can impact the #digital commerce landscape and the people it serves...

Ray and Teresa sit down with the following #data and #AI experts to learn more:

00:00 - Introduction
01:56 - Simon James, Head of Data and AI, Publicis Sapient
18:45 - Indy Cho, AVP of Data Products & Data Science, Costco Wholesale

🗓️ Are these insights and recommendations helpful? Follow our page to catch Impact TV episode 5 coming down the pipeline in a few weeks! #technology #business #trends #cxos #commerce

<iframe width="560" height="315" src="https://www.youtube.com/embed/6PftXAnt4TY?si=DsJ2x_Z4LlAobuqW" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>