Results

Snowflake, Anthropic ink LLM partnership, delivers strong Q3, acquires Datavolo


Snowflake said it has inked a multi-year deal to bring Anthropic's Claude models to Snowflake Cortex AI. Anthropic's Claude will be a part of Snowflake Intelligence and Cortex Analyst.

The partnership with Anthropic will be part of Snowflake's agentic AI strategy.

In a statement, Snowflake said Claude 3.5 models will be available within Cortex AI, which is built on AWS. The news landed as Snowflake reported earnings and continued to build out its AI offerings.

Key points of the Anthropic partnership include:

  • Anthropic's Claude language models will be used to enhance data agents within Snowflake.
  • Snowflake's Horizon Catalog, which is integrated into Cortex AI, will provide controls and guardrails to Claude 3.5 models.
  • Snowflake is leveraging a special implementation of Amazon Bedrock to enable Anthropic's models to be used inside of Cortex AI. Anthropic's Claude 3.5 Sonnet will be available in AWS regions where Amazon Bedrock is available.
  • Snowflake has committed to using Claude as one of the core models behind its agentic AI offerings. Snowflake's Cortex Playground has multiple models available in Cortex AI.
  • Snowflake will optimize its offerings for Claude and the company will use Anthropic's flagship models internally.

For Anthropic, the Snowflake deal gives it more enterprise throughput. Anthropic has been building collaboration workflows to make Claude more of a digital coworker in enterprises.

As for earnings, Snowflake reported better-than-expected third quarter earnings. The company reported a third quarter net loss of $324.3 million, or 98 cents a share, on revenue of $942.09 million. Non-GAAP earnings in the quarter were 20 cents a share. 

Wall Street was looking for a non-GAAP profit of 15 cents a share on revenue of $898.46 million. 

Snowflake said it had 542 customers in the quarter with trailing product revenue above $1 million. 

Sridhar Ramaswamy, CEO of Snowflake, said the company is driven "to produce product cohesion and ease of use." He added that Snowflake is winning new business, expanding wallet share with existing customers and displacing competitors. 

As for the outlook, Snowflake projected fourth quarter product revenue of $906 million to $911 million, up 23%. For fiscal 2025, Snowflake is projecting $3.43 billion in product revenue, up 29%. 

 

Separately, Snowflake said it acquired open data integration platform Datavolo. The move is designed to add creation, management and observability of multimodal data pipelines for enterprises. 

Ramaswamy said Datavolo will bring the ability to ingest both structured and unstructured dataflows. Datavolo's platform is built on Apache NiFi, a technology for secure data processing and distribution. Snowflake has made a series of recent moves in open source. 

Constellation Research's take

Constellation Research analyst Andy Thurai said the Anthropic partnership is the headliner and a notable move for enterprises. For Snowflake, Anthropic gives it some LLM heft. 

"Based on my conversation with enterprises, Anthropic seems to be the best performing model among the LLMs to date. Anthropic, co-founded by former OpenAI employees, seems to be more efficient than OpenAI at a much cheaper cost. As of now, Anthropic's Claude 3.5 Sonnet is one of the top performing models in the market."

Constellation Research analyst Holger Mueller said Snowflake has returned to growth mode.

"Snowflake has accelerated again, which is good news for investors, as the hunger for insights inside the enterprise is growing. But growth comes at a price. Snowflake's operating losses for the last nine months are surprisingly close to its Q3 revenue. The good news is R&D spending has overtaken sales and marketing. That investment is certainly warranted given the transformational nature of AI for analytics and insights. In the long run, though, Sridhar Ramaswamy and team need to bring cost and revenue in sync. The fourth quarter would be a good start."

Snowflake focuses on cost, ease of use, time to value

Ramaswamy outlined how Snowflake is winning business. The plan revolves around Cortex AI and giving customers confidence in the technical direction. For instance, enterprises try Snowflake, find they don't need a large team to manage and deploy it, and then consume more of the platform. 

With strong data warehousing and data engineering features, Snowflake can extend usage to Cortex AI. For Ramaswamy, the quarter was Snowflake's first big beat and raise of his short tenure.

Speaking on an earnings conference call, Ramaswamy said:

"Our product development engine continues to accelerate, as we launched the same number of tier 1 features to general availability in Q3 as we did in all of fiscal 2024. Our AI feature family Snowflake Cortex is showing significant adoption and we improved our go-to-market motion across the board and it's having a huge impact on new product adoption. We are firing on all cylinders."

Ramaswamy added that the company has become more efficient and eliminated "efforts that were underperforming." 

Snowflake is also competing on ease-of-use, said Ramaswamy. "We also consistently hear a lot of feedback that some of our competitors' technology is highly complex and requires a ton of highly expensive engineering resources. And with complexity comes risk. What is one step in Snowflake is 10 on some other platforms, that's 10 times more chances to engineer a mistake," he said. 

Other items:

  • Snowpark is roughly 3% of revenue now.
  • "Our push into interoperability and transforming data that previously would not have been addressed by Snowflake is proving to be a key differentiator with our customers. These features are now north of a $200 million run rate as of the end of Q3."
  • "We are seeing massive adoption of open data formats especially truly open formats like Apache Iceberg."
  • CFO Mike Scarpelli said that about 500 accounts have adopted Iceberg and the company has seen little friction moving them over.
  • "It is clear that AI is going to change how people consume data. Not only is AI going to make structured and unstructured data more interchangeable, it is also going to heavily influence areas like business intelligence."
  • Snowflake is pitching itself as a way to lower costs by using AI to handle more of the data processing journey.

 


LogicMonitor lands $800 million investment to accelerate its observability ambitions


LogicMonitor raised $800 million in a move that's designed to accelerate its observability growth, fund mergers and acquisitions and expand into new markets and industries.

Under the terms of the deal, LogicMonitor controlling shareholder Vista Equity Partners sold a stake to a consortium of investors including PSG, Golub Capital and others. The investment values LogicMonitor at $2.4 billion including debt.

New Relic went private last year to accelerate its observability efforts in a deal valued at $6.5 billion. For reference, publicly traded observability players Datadog and Dynatrace have market caps of $45.3 billion and $15.4 billion, respectively. Splunk was acquired by Cisco for $28 billion last year.

The game plan for these observability vendors is to ride the AI wave. LogicMonitor CEO Christina Kosmowski said:

"We are a mission critical part of the AI race - in short, AI needs data centers. We are the connective tissue between AI and data center performance as we have the muscle, pedigree, and, most importantly, the data insights to advance the most important and life-altering AI initiatives."

In a blog post, Kosmowski said LogicMonitor is at an inflection point where it can scale. The investment “is a clear signal of the commitment to helping businesses unlock the full potential of AI and data center technologies—empowering them to work smarter, faster, and more responsibly,” she said.


The race

Observability is a hot space, and LogicMonitor may need the additional funding to bulk up and run with the larger players. LogicMonitor said it has had a 36% compound annual growth rate over the last five years and has more than 2,400 customers.

For comparison, Datadog has more than 29,000 customers.

Constellation ShortList™ Observability 

Olivier Pomel, CEO of Datadog, said the company's third quarter highlighted how customers are looking to observability vendors to expand into next-gen AI. "We kept broadening our platform in observability and beyond, including in next gen AI where interest continues to rise. And we added new customers while expanding with existing ones as they grow into the cloud," said Pomel.

Pomel added:

"We are seeing initial signs of traction for our LLM observability product. Today, hundreds of customers are using LLM observability, with more exploring it every day. And some of our first paying customers have told us that they have cut the time spent investigating LLM latency errors and quality from days to hours to just minutes.

"Our customers don't only want to understand the performance and cost of the LLM applications, they also want to understand LLM model performance within the context of their entire application."

Datadog reported third quarter revenue of $690 million, up 26% from a year ago, with earnings of 14 cents a share. Non-GAAP earnings of 46 cents a share topped estimates.

Cisco's Splunk purchase is transforming the company. Observability revenue in Cisco's first quarter was up 36% including Splunk.

Dynatrace CEO Rick McConnell said:

"We believe that AI-driven observability is no longer optional. Organizations are expected to find issues and resolve the incidents before they impact customers. This can't be done efficiently in complex environments through reactive dashboard monitoring. Rather, organizations need to be able to trust answers from an end-to-end observability platform to action issues automatically."

Dynatrace reported earnings of 15 cents a share in its fiscal second quarter (37 cents non-GAAP) on revenue of $418 million, up 19%.

LogicMonitor's plan

With the $800 million investment, LogicMonitor outlined three goals.

  • Accelerate and expand LogicMonitor's platform including mergers and acquisitions. The observability industry could use a bit of consolidation and LogicMonitor could pick up tuck-in players.
  • Expand internationally. LogicMonitor said about 30% of its customers in 2023 were international.
  • Diversify into new industries. LogicMonitor said it has the opportunity to focus its data center observability efforts on new verticals.

Vista first invested in LogicMonitor in 2018 and has seen the company scale organically by more than 650% since. It's quite possible that LogicMonitor will now look to scale through mergers and acquisitions too.

Constellation Research analyst Andy Thurai said:

"LogicMonitor, which used to play mostly in the infrastructure and networking space, has leveled up its game to higher levels of observability. In addition, the company has acquired and continued to invest in its Dexda platform, which offers AIOps capabilities. With this investment, LogicMonitor gets some flexibility to grow its platform by either acquiring some players in the open telemetry (OTel) space or strengthening its existing observability platform."


Bridging the Gap Between Healthcare & Technology: A COO's Perspective


 

Larry Dignan from Constellation Research sits down with Laurie Wheeler, the Chief Operating Officer of the IST division at MultiCare Health System. Laurie shares her unique perspective on driving operational efficiency and optimizing technology within the healthcare industry.

With 25+ years of experience at the same organization, Laurie provides a valuable viewpoint on navigating change, implementing new systems like ServiceNow and Workday, and leveraging emerging technologies like ambient recording to improve the clinician-patient experience. Hear strategies for building credibility, finding internal champions, and making technology adoption easier for healthcare workers. Discover how a seasoned COO approaches challenges such as usability, change management, and the integration of AI-powered solutions.

For anyone interested in the intersection of healthcare and IT, this interview offers a realistic and practical look at the role of technology in healthcare operations.

On Insights: https://www.youtube.com/embed/x7l1iJZOgdg

Walmart's big tech, AI bets paying off going into 2024 holiday shopping


Walmart's ongoing bets in technology, AI, automation and omnichannel customer experiences are paying off as the retailer lands more share among higher-income shoppers.

The company's third quarter earnings report illustrated the art of delivering results today while transforming the company. Walmart delivered third quarter earnings of 57 cents a share, 58 cents adjusted, on revenue of $169.6 billion, up 5.5% from a year ago.

Walmart also raised its sales growth outlook for fiscal 2025 to 4.8% to 5.1%. In February, Walmart was projecting fiscal 2025 sales growth between 3% and 4%. The retailer also raised its non-GAAP earnings target to $2.42 a share to $2.47 a share.

By revamping its sales mix and squeezing costs while keeping prices low, Walmart has been able to grab wallet share. In the third quarter, Walmart's e-commerce revenue was up 27%, advertising sales were up 28% and membership income gained 22%.

"The rapid growth from newer businesses is helping us strengthen our business model," said Walmart CEO Doug McMillon. "Households earning more than $100,000 made up 75% of our share gains. In the U.S., in-store volumes grew, curbside pickup grew faster, and delivery sales grew even faster than that."

Walmart's performance comes as the company sees margin pressure from GLP-1 drugs and has weathered a US port strike, two hurricanes and flooding. Inventory is in good shape, said McMillon.

Here's a look at some of the technology investments that are paying off for Walmart.

Tech talent. "We build tech more effectively than we used to, and we're doing it with more speed," said McMillon.

Scan & Go and computer vision checkouts. The Scan & Go app is driving checkout throughput at Sam's Club, and Walmart is likely proving out the technology there before expanding it more broadly.

CFO John David Rainey said:

"Scan & Go penetration of sales increased more than 250 basis points and the nearly completed rollout of our Just Go exit technology across all 600 clubs is enabling about 70% of members to exit without a check. Members love it with member satisfaction scores on exit now close to 90."

International best practices. Half of Walmart's sales in China are digital, and the company can provide one-hour delivery there. McMillon also said the company has learned from social commerce in China as well as India, which has disruptive fintech at scale.

Generative AI. Walmart continues to advance its genAI efforts to deliver what McMillon called "practical opportunities right in front of us." He said:

"Our datasets are valuable and we're learning to put them to work to improve the customer member experience and assist our associates as they do their daily work. I'm excited about how (generative AI) will improve the customer experience in the months and years to come, enabling us to provide a better experience than the one that starts by typing into a search bar and getting a list of results to choose from. We're racing to improve all the things that people love about shopping and remove or diminish all the things they don't."

GenAI is also removing friction for employees, said McMillon.

Automation. Rainey said that more than half of Walmart's fulfillment center volume is automated, twice as much as last year. "This has the obvious benefit of lowering the per unit cost of delivery. These factors contributed to the third consecutive quarter of approximately 40% reduction in U.S. net delivery cost per order," he said.

Omnichannel experiences. Rainey noted that Walmart is gaining higher-income shoppers in part because of the company's focus on omnichannel retailing. Rainey said:

"We talk about the different ways that we can serve consumers and how that's different from say, a decade ago or even five years ago. As we've become omni, we have the ability to sell customers in the store or at the curb, deliver to their home and we can do that whenever and however they want. We want it to be a great price and we want it to be convenient and we can do both at the same time."

From here, McMillon said Walmart will continue to balance growth, profit and investments in people, technology and automation, but it's a real-time conversation.

Rainey added:

"We feel like we're striking the right balance between profit expansion and investment in the business. We're all very focused on making sure that we are healthy for the next generation. We certainly provide an outlook over the next three to five years, but we want to continue to have the same type of financial performance after that and that requires a level of investment in the business."


Hitachi Vantara expands AI infrastructure footprint


Hitachi Vantara in recent weeks has built out its AI infrastructure offerings with the aim of expanding its market share and genAI footprint in data centers.

The company said that its Hitachi iQ platform is now available for Nvidia HGX systems. Hitachi iQ coupled with Nvidia HGX provides tailored systems for multiple industries and use cases including inferencing, large language models, model training, analytics and digital twins.

According to Hitachi Vantara, Hitachi iQ with Nvidia HGX combines storage, networking and servers with Nvidia H100 and H200 Tensor Core GPUs along with Nvidia AI Enterprise. Hitachi iQ became generally available in July.

Hitachi iQ with Nvidia HGX offers enhanced file system data processing, a zero-copy architecture and an updated Hitachi Content Software for File platform that combines the latest AMD EPYC servers with Nvidia InfiniBand or Ethernet networking.

Last week, Hitachi Vantara announced a new quad level cell (QLC) flash storage array with public cloud replication and an object storage appliance as part of its Virtual Storage Platform One platform. The new systems are designed for AI and analytics workloads.

Key points include:

  • Hitachi Vantara offers dual port QLC media to deliver data access if a hardware failure occurs.
  • QLC flash storage has more density and lower power consumption than traditional systems.
  • Virtual Storage Platform One Block is the QLC flash storage array with public cloud replication.
  • Hitachi Vantara is using Samsung's dual ported 30TB QLC media on VSP One Block.
  • Virtual Storage Platform One Object is a storage appliance that has multi-node configurations for industries like media, healthcare and finance.
  • The QLC flash options complement Virtual Storage Platform One SDS Cloud, which has seamless replication from on-premises to AWS.

To round out its AI efforts, Hitachi Vantara outlined a partnership with Hammerspace to pair data observability, integration and workload tools with AI infrastructure.

Under the partnership, Hammerspace technology will be used in Hitachi Vantara's converged systems for AI workloads as well as Hitachi iQ.

The companies said Hammerspace's data orchestration software will enable Hitachi Vantara to unify data access, provide data for in-place AI or consolidate data into a central data lake.



IBM Redefines the 'Science of Consulting' with Generative AI


R "Ray" Wang recently sat down with Mohamad Ali of IBM Consulting to discuss the transformative power of generative AI in the consulting industry.

Ali shared how IBM is redefining the 'Science of Consulting' by seamlessly integrating technology like LLMs and digital workers into its end-to-end service offerings. From driving efficiency and cost savings to unlocking new business models and capabilities, this conversation offers a glimpse into the future of the consulting profession.

Learn more about AI-powered consulting becoming a reality.

On ConstellationTV: https://www.youtube.com/embed/iqVwtyhIKj0

API platform Kong raises $175 million in venture funding


Kong, a startup focused on cloud API technologies, raised $175 million in Series E financing at a valuation of $2 billion.

The round was led by Tiger Global and Balderton and included additional investment from Andreessen Horowitz, Index Ventures and others. Kong has raised $345 million to date.

According to Kong CEO Augusto Marietti, the funding will help the company build out its API platform to manage, secure and observe internal and external APIs and expand sales operations. Kong is looking to capitalize on a surge in API call demand largely due to cloud connections to large language models.

Constellation ShortList™ API Management (APIM)

Constellation Research analyst Holger Mueller said:

"It's good to see funding going to vendors who can bridge the need of AI for data and process answers. In the 21st century these answers should come from APIs. These capabilities will be critical to power the next wave of genAI, which will be able to look into transaction data and APIs for better automation of next generation applications."

Kong provides an API platform so enterprises in multiple industries can build applications and centralize security, governance and visibility. The company's products include Kong Cloud Gateway, AI Gateway, Mesh, Gateway Manager and Insomnia, a development tool.
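An API gateway's value is centralizing concerns that would otherwise be re-implemented in every backend service. As a rough illustration of the pattern Kong productizes (not Kong's actual API; every name below is invented for the sketch), a gateway is one front door that applies shared policy before dispatching:

```python
# Toy sketch of the API gateway pattern: one entry point that applies
# shared auth and metrics before routing to an upstream handler.
# Illustrative only; Kong's real configuration and plugins differ.
from collections import Counter

ROUTES = {}          # path prefix -> upstream handler
METRICS = Counter()  # per-route request counts (observability)
API_KEYS = {"team-a-key"}  # centralized credential store

def register(prefix, handler):
    ROUTES[prefix] = handler

def gateway(path, headers):
    # Security is enforced once, here, for every route.
    if headers.get("apikey") not in API_KEYS:
        return 401, "unauthorized"
    for prefix, handler in ROUTES.items():
        if path.startswith(prefix):
            METRICS[prefix] += 1  # visibility across all services
            return 200, handler(path)
    return 404, "no route"

register("/billing", lambda p: f"billing handled {p}")

status, body = gateway("/billing/invoice/7", {"apikey": "team-a-key"})
```

In a production gateway the routing table, key validation and metrics would be managed declaratively rather than in code, but the division of labor is the same.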


Microsoft launches Copilot Actions, Agents in SharePoint at Ignite


Microsoft rolled out more than 80 new products and features to round out its Copilot and artificial intelligence stack as it moved to position itself as an early enterprise AI leader and a platform that can provide model choices.

At Ignite in Chicago, Microsoft made the case that enterprises are betting on its platform. The company noted that about 70% of the Fortune 500 use Microsoft 365 Copilot and cited BlackRock as a company that has consolidated on Microsoft Azure.

The flurry of announcements from Ignite came just a few days before Amazon Web Services' re:Invent conference. Simply put, Microsoft is using Ignite to front-run its vision of generative AI and agentic AI orchestration as AWS is likely to hit similar themes.

At a high level, Microsoft is reinforcing recent themes. For instance, Microsoft wants every employee to leverage Copilot as a personal assistant with AI agents being rolled out to automate business processes. These agents would be designed and built in Copilot Studio to automate processes. Microsoft said enterprises will have multiple agents to orchestrate.

During the Ignite keynote, Microsoft CEO Satya Nadella aimed to position the company's army of agents and Copilots as productivity enhancers that drive returns. 

Return on investment, whether from agents, performance, hardware or cloud, was a common theme for Nadella. Microsoft launched Copilot Analytics to track returns on agents and copilots. 

"After users start using copilot and all these agents, one of the fundamental things that all business leaders want to do is to figure out and measure ROI," said Nadella.

Here's a look at some of the more interesting Ignite announcements:

  • Copilot Actions. Copilot Actions are designed to let employees automate everyday tasks such as summarizing meetings, catching up after vacation and generating summaries across apps.
  • Agents in SharePoint. Microsoft said SharePoint will get an agent that can ground information in corporate content. Users can customize agents that are focused on specific SharePoint assets.
  • Turnkey agents. Microsoft outlined agents that are in various stages of preview. Interpreter offers real-time translation. Employee Self-Service Agent is in private preview in Business Chat and can answer common questions and handle actions for HR and IT tasks.
  • Azure AI Foundry. Azure AI Foundry is designed to give companies a common platform to create, customize and manage AI apps. Azure AI Foundry includes all Azure AI services and tooling with previews of the Azure AI Foundry SDK, Azure AI Foundry portal and Azure AI Foundry Agent Service.
  • Model choice. Azure AI Foundry offers 1,800 models along with experimentation tools so customers can choose the best large language model for the job.
  • Windows 365 Link. Microsoft launched a device that's connected to Windows 365 and will be available in April for $349. Windows 365 Link is a spin on the thin client and doesn't have local data or apps.
  • Microsoft Fabric enhancements. The company launched the preview of Fabric Databases, which includes a SQL database to create a unified data platform that can leverage transactional and analytic data. Customers will be able to automatically replicate apps to OneLake and autoscale databases.

In addition, Microsoft Fabric will get Open Mirroring, a capability that can bring any app, data provider or data store to OneLake. Microsoft also said the OneLake catalog is generally available.

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"Microsoft pushes ahead with AI, adding agentic capabilities - and even showing a software demo at the IT-centric Ignite. Microsoft was able to 'protect' the Copilot franchise by adding agentic capabilities - the challenge is that OpenAI models lack in performance, as the conversational customer service demos showed (even in demo mode). The way to hide and bundle complexity is via the application server, and the application server is back with Azure AI Foundry. It was good to see the variety of Azure compute coming - and the focus on pushing Azure Arc further - to the edge. Finally, things are happening in quantum, and Microsoft is going down the logical qubit route by partnering with Quantinuum and Atom Computing."

Constellation Research analyst Doug Henschen said the Fabric news may be notable for enterprises. Henschen said:

"Fabric was previously focused entirely on analytical use cases, but with the release of Fabric Databases and, specifically, the SQL database in Fabric Option, the platform now supports the development of operational (a.k.a. transactional) applications on the same platform. The tie between operational and analytical is further cemented by automated replication from SQL database into OneLake and the new Open Mirroring option, whereby data from any app or data source can be mirrored with low-latency, change data capture capabilities, whereby only the changes in data are continuously replicated to OneLake.

These options are very much like the “No ETL” ties AWS has introduced between database services such as Amazon Aurora and Amazon Redshift. Nonetheless, the addition of an operational database option and support for CDC replication rounds out Fabric as a platform for data-driven applications as well as data-driven insights."
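The mirroring approach Henschen describes rests on change data capture: replicate only the accumulated deltas to the lake instead of re-copying whole tables. A minimal sketch of that idea (purely illustrative, not Microsoft Fabric's implementation):

```python
# Toy change data capture (CDC) replication: writes to the source table
# are logged, and only the logged changes are shipped to the mirror.
# Illustrative only; real CDC reads the database's transaction log.

source = {}   # operational table: key -> row
mirror = {}   # replica in the "lake"
log = []      # change log: (op, key, row)

def upsert(key, row):
    op = "update" if key in source else "insert"
    source[key] = row
    log.append((op, key, row))

def delete(key):
    source.pop(key, None)
    log.append(("delete", key, None))

def replicate():
    # Ship only the accumulated changes, then clear the log.
    for op, key, row in log:
        if op == "delete":
            mirror.pop(key, None)
        else:
            mirror[key] = row
    shipped = len(log)
    log.clear()
    return shipped

upsert(1, {"sku": "A", "qty": 5})
upsert(2, {"sku": "B", "qty": 1})
replicate()              # first sync ships both rows
upsert(1, {"sku": "A", "qty": 4})
delete(2)
shipped = replicate()    # second sync ships only the two new changes
```

The payoff is the second `replicate()` call: it moves two change records, not the full table, which is what keeps latency low as tables grow.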


Nvidia outlines Google Quantum AI partnership, Foxconn deal


Nvidia said it is working with Google Quantum AI to accelerate the design of its quantum processors. The quantum computing partnership with Google was part of a series of Nvidia announcements at the SC24 conference in Atlanta.

According to Nvidia, Google Quantum AI is using CUDA-Q, Nvidia's hybrid quantum-classical computing platform, along with the Nvidia Eos supercomputer to simulate the physics of its quantum processors.

Hybrid quantum computing is moving to the forefront because it has the potential to solve complex commercial problems sooner. Specifically, Google Quantum AI runs its simulations on the 1,024 Nvidia H100 Tensor Core GPUs in the Nvidia Eos supercomputer.
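A toy example shows why simulating quantum processors demands supercomputer-class GPUs: an n-qubit state is a vector of 2^n complex amplitudes, so memory and compute double with every qubit added. The sketch below (plain NumPy, not CUDA-Q) applies a single Hadamard gate by building the full operator, the brute-force approach that large GPU clusters make tractable at scale:

```python
# Statevector simulation in miniature: n qubits -> 2**n amplitudes.
# Gates act on the whole vector, so cost grows exponentially with n.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)

def apply_gate(state, gate, qubit, n):
    # Build the full 2**n x 2**n operator via Kronecker products,
    # placing `gate` on the target qubit and identity elsewhere.
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == qubit else I)
    return op @ state

n = 3
state = np.zeros(2**n)
state[0] = 1.0                        # start in |000>
state = apply_gate(state, H, 0, n)    # superpose the first qubit
probs = np.abs(state) ** 2            # measurement probabilities

# The scaling problem in one line: 30 qubits already mean about a
# billion amplitudes, and each extra qubit doubles that.
amps_30 = 2**30
```

After the Hadamard, the state splits evenly between |000> and |100>, so the probability mass sits at indices 0 and 4.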

The quantum partnership with Google was one of the main items outlined during Nvidia CEO Jensen Huang's keynote. Huang highlighted AI applications for science including drug discovery, climate forecasting and quantum computing.

"AI will accelerate scientific discovery, transforming industries and revolutionizing every one of the world’s $100 trillion markets," said Huang.

In addition, Nvidia said it is scaling production via a partnership with Foxconn. The company also announced the general availability of the Nvidia H200 NVL, a PCIe GPU based on the Nvidia Hopper architecture for low-power, air-cooled data centers.

Other Nvidia items at SC24 include:

  • CorrDiff NIM and FourCastNet NIM, two new microservices for climate change modeling and simulation on the Earth-2 platform. The Earth-2 platform is a digital twin for simulating weather and climate conditions.
  • cuPyNumeric library, which uses GPUs to accelerate NumPy for applications in data science, machine learning and numerical computing.
  • Nvidia Omniverse Blueprint, launched for real-time development of digital twins.
  • BioNeMo Framework, Nvidia's open-source framework for drug discovery, along with the launch of DiffDock 2.0, a tool for predicting how drugs bind to target proteins.
  • Nvidia Alchemi NIM microservice, which couples generative AI to chemistry.
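To make the cuPyNumeric item above concrete: the library is positioned as a drop-in replacement for NumPy, so existing array code is meant to move to GPUs by swapping a single import. A minimal sketch follows; the `cupynumeric` module name is taken from Nvidia's announcement, and the example runs against plain NumPy here so it stays self-contained.

```python
# cuPyNumeric is pitched as a drop-in NumPy replacement: per Nvidia,
# swapping the import is the only change needed to target GPUs.
# import cupynumeric as np   # GPU-accelerated path (module name per Nvidia)
import numpy as np           # CPU baseline, used here so the sketch is runnable

a = np.random.rand(512, 512)
b = np.random.rand(512, 512)
c = a @ b                    # same array API under either import
print(c.shape)               # -> (512, 512)
```

The appeal of the drop-in design is that data science and numerical code written against the NumPy API needs no rewrite to be tested on accelerated hardware.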


GenAI's 2025 disconnect: The buildout, business value, user adoption and CxOs

The gap between generative AI's "build it and they will come" folks and the enterprises looking for actual business value may be widening. The next year will be interesting for generative AI trickledown economics.

First, let's touch on the boom market.

  • Nvidia will report third quarter earnings Nov. 20 and rest assured it'll be the most important report of the year (again). Funny how we say that about Nvidia every quarter. Demand remains off the charts, and hyperscalers and countries alike are lining up to spend billions on Nvidia's AI accelerators. Analysts are looking for non-GAAP third quarter earnings of 74 cents a share on revenue of $32.94 billion, up from $18.12 billion a year ago.
  • CoreWeave, the AI hyperscale cloud provider, just closed a minority investment round of $650 million led by Jane Street, Magnetar, Fidelity and Macquarie Capital, with additional participation from Cisco Investments, Pure Storage, BlackRock, Coatue, Neuberger Berman and others. CoreWeave is valued at about $23 billion, according to Reuters.
  • SoftBank Corp. will be among the first to build out an AI supercomputer using Nvidia's Blackwell platform. SoftBank will get Nvidia's first DGX B200 systems, with plans to build out an Nvidia DGX SuperPOD supercomputer. SoftBank floated debt to be first in line.
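For context, the analyst estimate in the Nvidia bullet above implies enormous year-over-year growth. The arithmetic, using only the figures quoted there:

```python
# Implied year-over-year growth in the consensus Nvidia revenue estimate.
prior_q3 = 18.12      # Q3 revenue a year ago, in $ billions
estimate = 32.94      # Wall Street's Q3 revenue estimate, in $ billions
growth = (estimate - prior_q3) / prior_q3
print(f"{growth:.1%}")  # -> 81.8%
```

Nearly doubling revenue at this scale is why every Nvidia report gets billed as the most important of the year.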

And on it goes. You could round up the AI building boom weekly. All of the hyperscalers are spending big on AI factories. Microsoft, Amazon, Meta and Alphabet all said capital spending on AI will continue to surge.

Microsoft CFO Amy Hood said:

"Roughly half of our cloud and AI-related spend continues to be for long-lived assets that will support monetization over the next 15 years and beyond. The remaining cloud and AI spend is primarily for servers, both CPUs and GPUs, to serve customers based on demand signals."

I couldn't help but think of what Oracle CTO Larry Ellison said on the company's most recent earnings call. Ellison said: "I went out to dinner with Jensen (Huang) and Elon (Musk) at Nobu in Palo Alto. I would describe the dinner as begging Jensen for GPUs. Please take our money. In fact, take more of it. You're not taking enough of it. It went well. The demand for GPUs and the desire to be first is a big deal."

We know the drill. The tech industry is betting that the real risk lies in not plowing billions (if not trillions) into the AI buildout. I'd recommend reading a contrarian argument about irrational AI data center exuberance, but your guess is as good as mine on the timing.

The disconnect between the AI buildout side and the business value side may be widening. 2025 is shaping up to be the year of business value, following 2024, which was about moving genAI into production after the proofs of concept of 2023. Yes, enterprises are going to need real genAI budgets in 2025, along with change management and measurable returns.

Bottom line: The trickledown economics of generative AI at the beginning of 2024 hasn’t exactly trickled down beyond the infrastructure layer.

Can vendors monetize genAI value?

For real genAI value to occur, the application layer will need to be built out. LLM players, think Anthropic and OpenAI, are going to need apps to go with their models. Vendor monetization models are a bit fuzzy now. ServiceNow is clearly benefiting, but other software companies are seeing mixed results. Here's a sampling of recent comments and there will be a bunch more as SaaS earnings season kicks off soon.

Monday CEO Eran Zinman:

"Total AI actions grew more than 250% in Q3 compared to Q2. And the AI blocks grew 150% from Q2. So overall, we see more and more customers adopt those blocks, people incorporate them into their automation. They create a lot of processes within the product that involves AI within that. And over time, we are planning to roll out the monetization tied with AI, where we're going to generate clear and efficient value for our customers."

Zinman was then asked whether 2025 will be the year for AI monetization. "We don't have a specific date, but it might be in 2025," said Zinman. "But we can't commit to that."

Translation: Monday needs to show value to get the money.

Hood danced around monetization, but did say that analysts need to think about the long game.

"We remain focused on strategically investing in the long-term opportunities that we believe drive shareholder value. Monetization from these investments continues to grow, and we're excited that only 2.5 years in, our AI business is on track to surpass $10 billion of annual revenue run rate in Q2. This will be the fastest business in our history to reach this milestone."

Infosys CEO Salil Parekh said:

"Any of the large deals that we’re looking at, there’s a generative AI component to it. Now, is it driving the large deals? Not in itself, but it’s very much a part of that large deal."

SAP CEO Christian Klein said about 30% of SAP's cloud orders included AI use cases.

Simply put, if you follow the money you'd trip once you got past the infrastructure layer and to the applications. Enterprise software vendors haven’t figured out what customers will pay for.

The beginning of genAI user fatigue?

What about the users? Well, that genAI love affair has become tiresome too.

A Slack survey found that generative AI adoption among desk workers went from 20% in September 2023 to 32% in March 2024 and then hit a wall. Today, 33% of desk workers are using generative AI, according to Slack. The survey also found that excitement around AI is cooling, dropping from 47% to 41% between March and August. Slack cited uncertainty, hype and a lack of AI training for the decreases. Another possible reason I'll throw in: there's a copilot sprawl that's adding costs for the enterprise and distraction for the worker.

This genAI adoption from employees is a bit chicken or egg. If enterprises balk at spending on various copilots, they're going to limit access. Or there's just not enough value for employees yet. I can't tell you how many times I've tuned out Microsoft Copilot, Google Gemini and other overly helpful AI. Dear LLM, if you're useful I'll reach out. Until then don't annoy me.

It’s on you CxOs

These moving parts—genAI to agentic AI, FOMO, data strategies, vendor promises and change management—are going to be challenging to navigate for CxOs. I mined my recorded conversations in 2024 to surface common AI themes from CxOs. Here’s a look:

GenAI is a tool instead of a magic bullet. CxOs are looking to integrate AI into processes and workflows and use the technology as an excuse to revamp them. Agentic AI is promising, but full automation will require orchestration and process groundwork.

Change management is everything. Change management has turned up in multiple conversations throughout 2024. Implementing AI is really about transforming how people work and interact with the technology. GenAI also will create organizational challenges. CxOs also need strong change management approaches to address employee fears about job displacement.

Governance and control matters. Governance is becoming a key theme as genAI projects move to production.

Data strategy. Enterprises are still working on their ground games when it comes to data. That work will continue for many in 2025.

Costs. Enterprises will begin to focus more on cost of compute, open source models, small language models and the use cases that drive the most value for the money. There will be some hard conversations between enterprises and software vendors.

Training. Everyone is talking about upskilling and training, but it is unclear whether this education is happening.

Iterative implementation. The mantra with genAI has been to "just get started," but that approach has led to AI debt already with copilot sprawl, difficulty changing models and user dissatisfaction. Will 2025 be better for the slower movers that spent 2024 refining the data strategy?

Employee-AI collaboration. CxOs are trying to solve for human-in-the-loop approaches so employees feel more empowered working with AI.

This genAI cognitive dissonance is worth watching in 2025. Either the value reaches vendors and enterprises or we're going to have a massive AI buildout hangover.