Boomi acquires Rivery, adds data integration tools to iPaaS stack

Boomi has acquired data integration vendor Rivery as the company continues to build out its integration platform as a service (iPaaS) strategy.

Terms of the deal weren't disclosed.

Rivery offers Change Data Capture (CDC) technology that makes moving data more efficient as well as low-code extract, load and transform (ELT) features. Boomi's plan is to combine Rivery and its data team with its integration, automation, API management and data management tools.
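
To illustrate the CDC idea, here's a minimal sketch of incremental extraction versus a full reload. The row shape and watermark logic are hypothetical stand-ins, not Rivery's implementation.

```python
from datetime import datetime, timezone

def full_reload(source_rows):
    # Naive approach: copy the entire table on every sync.
    return list(source_rows)

def cdc_extract(source_rows, last_synced_at):
    # CDC-style approach: ship only rows whose change timestamp is newer
    # than the high-water mark recorded at the previous sync.
    return [r for r in source_rows if r["updated_at"] > last_synced_at]

rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 6, 1, tzinfo=timezone.utc)},
    {"id": 3, "updated_at": datetime(2024, 12, 1, tzinfo=timezone.utc)},
]
watermark = datetime(2024, 3, 1, tzinfo=timezone.utc)

print(len(full_reload(rows)))             # moves all 3 rows
print(len(cdc_extract(rows, watermark)))  # moves only the 2 changed rows
```

The efficiency gain is simply that unchanged rows never cross the wire, which is why CDC pairs naturally with ELT pipelines.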

According to Boomi CEO Steve Lucas, Rivery aligns with its "strategy to deliver seamless, powerful tools that transform data into a strategic asset."

Constellation Research analyst Doug Henschen noted that the Rivery purchase addresses "seamless integration of data across silos."

In the big picture, Boomi is looking to stack iPaaS, API management and data integration in one automation platform that includes Boomi AI Agents.

Boomi has been building out its platform organically and via acquisition, and it has also forged partnerships with SAP and ServiceNow to expand its reach.

Here's a look at the Rivery architecture followed by Boomi's stack.


Agentic AI: Three themes to watch for 2025

Google Cloud launched Agentspace, Salesforce is prepping Agentforce 2.0 and Amazon Web Services wants Amazon Bedrock to be your agentic AI orchestration engine. And that's just two weeks of news.

Simply put, the agentic AI platforms are launching at a rapid clip. AI agents will be a big theme for 2025, but open questions abound.

Here are three things to watch in agentic AI. As with generative AI, CxOs will have to weigh vendor pitches against lock-in and architectural decisions they may come to regret.

Horizontal approaches vs. platform specific

Agentic AI launches have been proliferating in recent weeks ever since Salesforce's Agentforce launch at Dreamforce. ServiceNow is deploying agents across its Now Platform. And every vendor from SAP to Microsoft is talking agents across their platforms.

Naturally, many of these agentic AI visions resemble decks that place the vendor at the center of the universe and a bunch of connectors to other platforms.

For customers, however, the incoming AI agent sprawl is going to be an issue. Will enterprises simply wind up managing different buckets of AI agents? SAP Joule agents are over here. Salesforce Agentforce actions there. Google Cloud agents operating over the top. You get the idea.

The big question for 2025 is whether enterprises will choose to go horizontal with AI agent orchestration and management. Hyperscalers may be in the best position here. Amazon Bedrock will have agent orchestration. Amazon Q will operate across applications and data silos. Google Cloud Agentspace will take a similar approach. Microsoft Azure will also orchestrate your agents, but it's unclear whether tech buyers will see the cloud giant as more about its own platform or a horizontal play.

UiPath is a player betting on a Switzerland approach. CEO Daniel Dines explained where the company stands on UiPath's third-quarter earnings call. UiPath launched Agent Builder as it makes agents a first-class citizen alongside its robotic process automation. Dines explained the difference between horizontal plays and application-specific offerings.

He said:

"Our agents and robots work across all applications, both new and legacy, eliminating vendor lock-in for our customers. Business processes today don't run on a single system. We will continue to be the Switzerland of business applications and agents, providing equal access to third-party systems. Agentic orchestration will be the governor across agents, robots, people, and models. We will democratize access to agents by leveraging our low-code platform that is already known to automation developers."

Raj Pai, Vice President, Product Management at Google Cloud AI, had a similar take.

"We see agents--software systems that use AI to complete tasks--as the biggest opportunity to meet this unmet promise of genAI and really drive employee productivity and ingenuity," said Pai. "AI agents across all enterprise data sources can complete tasks that require planning, research, content generation and actions."

In other words, this pole position in agentic AI is going to require a horizontal approach--and hundreds if not thousands of connectors.

ServiceNow CFO Gina Mastantuono, speaking at a Barclays investment conference, said the horizontal approach and ability to be neutral is critical to playing in the agentic AI and genAI space. "We can sit on top of the application sprawl or data sprawl and really drive productivity. And the differentiation that we have in that it's one platform across the enterprise, whether you're working in HR, IT, customer service, legal, finance," she said.

AWS CTO Werner Vogels said in a re:Invent keynote that agents will manage more narrow-focused agents to automate workflows. Vogels gave an example of how AWS is resolving tickets with agents. "The goal of the agent is to efficiently resolve support tickets, and the process then becomes, read the ticket, use the tools iterate. It categorizes, prioritizes the tickets based on analysis, and it determines the action. It could be to resolve automatically, or it could be to escalate for human review," said Vogels.
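
The loop Vogels describes can be sketched as a simple triage function. The keyword and customer-tier rules below are hypothetical placeholders standing in for the model calls a real agent would make.

```python
def triage(ticket):
    # Read the ticket and categorize it (a real agent would use an LLM here).
    category = "billing" if "invoice" in ticket["text"].lower() else "technical"
    # Prioritize based on analysis of the customer.
    priority = "high" if ticket.get("customer_tier") == "enterprise" else "normal"
    # Determine the action: resolve automatically or escalate for human review.
    if category == "billing" and priority == "normal":
        action = "resolve_automatically"
    else:
        action = "escalate_for_human_review"
    return {"category": category, "priority": priority, "action": action}

print(triage({"text": "Where is my invoice?", "customer_tier": "smb"}))
print(triage({"text": "Server is down", "customer_tier": "enterprise"}))
```

The point is the shape of the workflow, read, analyze, decide, with escalation as the safety valve, not the toy rules themselves.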

Wild cards include:

  • Will a vendor like Boomi make headway with a registry to manage AI agents?
  • Are hyperscalers the de facto favorite to manage agentic AI?
  • Can platform-specific vendors such as Salesforce with MuleSoft and ServiceNow leverage their breadth of connectors to become horizontal plays?

Pricing for agentic AI

Box CEO Aaron Levie noted the great AI agent pricing debate on LinkedIn. He said one approach is to price AI agents like traditional labor at a steep discount: you'd pay for the amount of time or units of work. Another approach is pricing agents on a per-outcome basis, which rhymes with a consumption model, say $2 per resolved conversation.

Levie added that pricing agents as close to the underlying AI costs as possible could be great for customers, but would do little for shareholder returns. And finally, vendors could stick to a SaaS seat subscription model and offer users agents that do unlimited work.

"Lots of different approaches - and probably many more than the above - but fairly exciting times to watch new business models in software emerge after a decade plus of limited changed," said Levie.

Indeed, Levie's post generated more than 120 comments with some noting outcome-based pricing will result in complicated contracts and too much cost variability. Others seemed to think agents will be free.

In this pricing debate, hyperscale cloud providers may have the edge because customers are used to consumption-based models. For instance, Google Cloud is pricing Agentspace on a per-user basis. Agentspace Enterprise Plus includes the ability to ask agents follow-up questions, carry out actions, create agents and use the research agent. AWS pricing for agent orchestration will ride along with Amazon Bedrock. Keep in mind that AWS will profit from storage, data and compute consumption.

Outcome-based pricing is going to be a tough sell since most CxOs don't want to share a cut of value created. After all, vendors don't give you back dough when value is destroyed.

Salesforce CEO Marc Benioff said consumption-based pricing will likely win out. Speaking about Agentforce, he said "this is a consumption product. We have per-user products, which are for humans. We have consumption products for agents and robots and that's how I think we should look at it, which is the comparative costs is there's just no comparison."

Brian Millham, Salesforce's Chief Operating Officer, said customers understand the cost of labor and know when they can deploy agents to manage customer interactions, so Salesforce's $2-per-conversation approach makes sense. When Salesforce launches Agentforce 2.0, we may get more clarity on pricing as well as whether the company aims to be an agent orchestrator across all enterprise applications.

How ServiceNow is thinking about pricing and why hybrid approaches are likely

Pricing is a critical topic in enterprise software--especially with genAI and now agentic AI. ServiceNow's Mastantuono was asked about the company's approach to pricing. She has spoken about this pricing strategy before, but it has evolved with genAI.

ServiceNow has launched its Pro Plus SKUs and Now Assist is the genAI flavor. Mastantuono said she thinks there's a hybrid pricing model that's going to play out with AI agents that includes seats and then consumption. Outcome-based pricing is possible too over time.

She said: "We have a hybrid model. It's seat based and you get a certain number of tokens. And then, if you're over-consuming, you need to buy more tokens. This is the way that we have initially thought about monetization."
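
The hybrid model Mastantuono describes can be sketched as a per-seat fee that bundles a token allowance, with consumption beyond it billed as overage. All rates and allowances below are hypothetical, not ServiceNow's actual pricing.

```python
def hybrid_bill(seats, tokens_used, seat_price=50.0,
                tokens_per_seat=1_000, overage_per_token=0.02):
    # Seats bundle an allowance; only consumption above it is billed extra.
    included = seats * tokens_per_seat
    overage_tokens = max(0, tokens_used - included)
    return round(seats * seat_price + overage_tokens * overage_per_token, 2)

print(hybrid_bill(seats=100, tokens_used=90_000))   # within allowance: 5000.0
print(hybrid_bill(seats=100, tokens_used=120_000))  # 20,000 tokens over: 5400.0
```

The appeal for both sides is visible in the math: the seat fee gives a predictable floor, while overage only kicks in when usage (and presumably value) runs high.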

Mastantuono added that customers like the switch to consumption based because they can more easily forecast (and so can ServiceNow).

With outcome-based pricing, Mastantuono said the conversation appears every few years in enterprise software, but the issue is providing customers with cost predictability. ServiceNow's approach has been to add a pricing uplift of 30% as innovation and value is added. She said: "It's all about value. We're a value seller play and the conversations with customers have gone really well when we can say the lion's share of the value is going to you and we're taking a slice of it because we're investing in all the innovation that's allowing that value."

Wild cards for pricing include:

  • Are enterprises really going to accept more operating expense volatility with AI agent consumption based models?
  • Will the labor for AI agent argument hold?
  • Can a model like value-based contracts work when AWS Marketplace is standardizing contracts instead of making them more complicated?

Are all these agents going to communicate and negotiate?

There is nothing that puts me to sleep faster than talking about technology standards and the committees and meetings that go with them. Standards are necessary, but the topic is a yawner.

However, if we really want this agentic AI dream to play out we're going to need some standards so these agents can communicate and negotiate to get things done.

What happens when an SAP Joule AI agent runs into an Agentforce agent and has to negotiate with a Google Cloud Agentspace creation? Sure, they can connect, but the future will require real communication--like those good ol' humans do.

Today, there isn't one universally adopted standard for agentic AI communications across platforms. IEEE has a few working groups and there are other standards orgs on the case. It’s likely that agentic AI standards will emerge in the years ahead. Challenges include:

  • Agent architecture. Agents will be built on multiple platforms and in multiple languages.
  • Security. How will secure communication channels be managed?
  • Performance. Agents will be communicating at scale with multiple agents.
  • Industry-specific standards.
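
Absent a standard, any cross-platform exchange would need at least a shared message envelope. Here's a minimal sketch; the schema is purely hypothetical and not any vendor's or standards body's format.

```python
import json

def make_envelope(sender, recipient, intent, payload):
    # A hypothetical interchange format for agent-to-agent messages.
    return {
        "version": "0.1",        # illustrative schema version
        "sender": sender,        # e.g. an SAP Joule agent identifier
        "recipient": recipient,  # e.g. an Agentforce agent identifier
        "intent": intent,        # what the sender wants done
        "payload": payload,      # task-specific data
    }

msg = make_envelope("joule://procurement-agent", "agentforce://sales-agent",
                    "negotiate_discount", {"sku": "A-100", "target_pct": 10})
print(json.dumps(msg))
```

Even a toy envelope like this surfaces the hard parts: who authenticates the sender, who versions the schema, and who arbitrates when intents conflict.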

Wild cards include:

  • Standards move at a glacial pace--unlike AI.
  • Multi-cloud is still a work in progress so it's unclear whether vendors will cooperate.

Google Cloud launches Agentspace to create, deploy agents

Google Cloud launched Agentspace, which is designed to create and deploy AI agents, and a version of NotebookLM for the enterprise.

With an enterprise version of NotebookLM, Google Cloud is adding enterprise privacy, security and compliance, identity support and grounding in enterprise data sources.

But Agentspace is the headliner as hyperscale cloud providers and other vendors such as UiPath look to create and automate AI agents. The topic bubbled up throughout 2024, but Salesforce's Agentforce launch solidified the category. Agentic AI is pitched as a way to automate work and complement (if not replace) labor. AWS' Amazon Q and Bedrock, Salesforce Agentforce, Google Cloud, and Microsoft Azure all have their agentic AI visions.

With Agentspace, Google Cloud is looking to address multiple roles and use cases across sales and marketing, HR and software development. Google Cloud said multiple enterprises have used Agentspace in its trusted tester program, including Deloitte, EU retailer Decathlon and Nokia.

Raj Pai, Vice President, Product Management at Google Cloud AI, said enterprises focused on implementing generative AI in 2024, but the promise has been unfulfilled. "We see agents--software systems that use AI to complete tasks--as the biggest opportunity to meet this unmet promise of genAI and really drive employee productivity and ingenuity," said Pai. "AI agents across all enterprise data sources can complete tasks that require planning, research, content generation and actions."

Key use cases include:

  • Sales and marketing agents include summarization of target companies and competitive offerings, customer pitch prep, go-to-market strategies, personalized marketing, analysis of performance data and feature requests.
  • Software team use cases include automating engineering processes and tasks, identifying and fixing bugs and optimizing code bases.
  • For HR, use cases include an assistant for admin tasks, performance review writing and meeting scheduling.

These assistants are enabled by dozens of connectors to Google's suite of applications as well as Salesforce, Box, Jira, Microsoft SharePoint, OneDrive and Office apps, ServiceNow, GitHub, HubSpot, Confluence and other applications.

According to Google Cloud, Agentspace will be differentiated with its search capabilities for retrieval augmented generation (RAG) as well as Google's Gemini family of models. Google launched Gemini 2.0 Flash earlier this week.

Pai said Agentspace is meant to be "the launch point for enterprise AI agents that apply generative AI contextually to your enterprise data." Pai added that Google search enables its agents as much as the models underneath. Agentspace also includes prebuilt agents, the ability to create new ones, and integration with NotebookLM.

Google Cloud also has orchestration tools via support for LangChain.
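
As a rough illustration of the RAG pattern Agentspace leans on, the sketch below retrieves the documents most relevant to a query and grounds the prompt in them. The naive keyword-overlap scoring is a stand-in for Google-quality search, and the document text is made up.

```python
def retrieve(query, docs, k=2):
    # Score each document by naive keyword overlap with the query.
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def build_prompt(query, docs):
    # Ground the model's prompt in the retrieved context.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Q3 travel policy: economy class for flights under six hours.",
    "Holiday calendar for 2025 published by HR.",
    "Expense reports are due within 30 days of travel.",
]
print(build_prompt("What is the travel policy for flights?", docs))
```

Google's pitch is essentially that the `retrieve` step, done with its search stack over connected enterprise sources, is where differentiation lives; the generation step matters less if retrieval is weak.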

"Our vision with Agentspace is to go across use cases and systems to be a one-stop destination for agents in the enterprise," said Pai.

Agentspace will come in three editions:

  • NotebookLM for Enterprise starts at $9 per user per month and includes an enterprise version of NotebookLM Plus with the same interface as the consumer edition, a setup with no connectors, support for Google and non-Google identity, Sec4 compliance and cloud terms of service.
  • Agentspace Enterprise is $25 per user per month and includes blended search across all enterprise applications, summarization, citations, people and multimodal search as well as NotebookLM for Enterprise.
  • Agentspace Enterprise Plus is $45 per user per month including the ability to ask follow up questions, carry out actions in first- and third-party applications, upload content and QA, create agents and research agents. NotebookLM for Enterprise is also included.

Broadcom CEO Tan takes VMware victory lap: Will he go shopping again?

Broadcom CEO Hock Tan said the integration of VMware is "largely complete" with revenue growth and operating margins hitting 70% exiting 2024.

"We are well on the path to delivering incremental adjusted EBITDA at a level that significantly exceeds the $8.5 billion we communicated when we announced the deal," said Tan, speaking on Broadcom's fourth quarter earnings call. "This is much earlier than our initial target of three years."

Tan noted that although "many deals slipped into the first quarter" for VMware, the company brought in 21 million total CPU cores in the fourth quarter, up from 19 million a year ago. "Of these cores, about 70% represent VMware Cloud Foundation (VCF)," said Tan.

The questions for Broadcom primarily revolved around its AI processor and networking business and whether the company would go shopping again now that it has digested VMware. However, the disconnect between enterprise buyers who are at least pondering moves away from VMware and Broadcom's financials has been jarring through 2024.

Broadcom said annual booking value (ABV) for VMware was $2.7 billion in the fourth quarter, up from $2.5 billion in the third quarter. Tan projected VMware ABV to top $3 billion in the first quarter.

"Since closing the acquisition just over a year ago, we signed up over 4,500 of our largest 10,000 customers for VCF," said Tan. "VCF enables customers to deploy private cloud environments on premises as an alternative to running their applications in the public cloud."

Tan added that Broadcom continued to cut VMware spending. In the fourth quarter, VMware expenses were $1.2 billion, down from $1.3 billion in the third quarter. VMware spending averaged $2.4 billion per quarter prior to the acquisition, with operating margins below 30%.

"The naysayers of the VMware acquisition need to tip their hat to Tan, who integrated the company in 12 months. And while everybody is chasing VMware customers, the skeptics will have to learn that re-certification of containers is an expensive business for enterprises," said Constellation Research analyst Holger Mueller. "People keep complaining about VMware, but it hasn't seen major defections yet. Meanwhile, Broadcom may increase discounts if the customer attrition becomes painful."

Other key items:

  • VMware's fourth quarter operating income was $8.8 billion, up 53% from a year ago.
  • Free cash flow as a percentage of revenue in the fourth quarter fell from a year ago due to interest expenses and debt related to the VMware purchase.
  • VMware profitability results won't be broken out going forward.

The other question for Tan was whether Broadcom would go shopping again. Tan said the focus will be on paying down debt but added that cash will be parked on the balance sheet in case there's "the opportunity to buy someone else." Tan said:

"We are always open (to acquisitions) because it has been a core part of our strategy for the last 10 years. We're always interested in adding to our portfolio with a very good franchise asset in semiconductors or infrastructure software as long as they meet the fairly demanding criteria we look for."


Anthropic outlines most popular Claude use cases

Anthropic's Claude models are all business: use cases are led by web and mobile application development assistance, content creation, academic research and writing, career development, AI optimization and business strategy.

Those use cases were outlined by Anthropic using its Claude insights and observations (Clio) system, which was highlighted by Platformer. Clio is Anthropic's attempt to understand AI model use and spot potential security risks--think of it as Anthropic's version of Google Trends.

In a blog post, Anthropic said it preserves privacy of conversations by abstracting them into categories and clusters. All user data is anonymized and aggregated.

Here's a look at the core use cases for Anthropic's models.

Web and mobile app development accounts for 10.4% of use cases, with 9.2% focused on content creation and communication.

It's safe to say Anthropic is positioned for enterprise use cases, but the company noted smaller Claude customers focused on dream interpretation, disaster preparedness, crossword puzzle hints and Dungeons & Dragons.

In a research paper, Anthropic also noted that usage varies by language. Spanish users are focused on economics, child health and environmental conservation. Chinese users are focused on writing crime, thriller and mystery fiction and elderly care. For Japanese, Claude usage revolves around anime and manga, economics and elderly care.

As for the system design, here's a look at how Clio is architected for analysts.



Broadcom continues to ride AI infrastructure wave with strong Q4

Broadcom reported better-than-expected fourth quarter earnings as its AI revenue continued to surge.

The company reported fourth quarter net income of $4.32 billion, or 90 cents a share, on revenue of $14.05 billion. Non-GAAP earnings for the quarter were $1.42 a share.

Wall Street was expecting Broadcom to report non-GAAP earnings of $1.39 a share on revenue of $14.06 billion.

As for the outlook, Broadcom projected first quarter revenue of $14.6 billion, up 22% from a year ago. Broadcom also raised its quarterly stock dividend by 11% to 59 cents a share in fiscal 2025.

Fourth quarter semiconductor revenue of $8.23 billion was up 59% from a year ago. Infrastructure software, which is dominated by VMware, was $5.82 billion, up 196% from a year ago due to the VMware acquisition.

For fiscal 2024, Broadcom reported revenue of $51.57 billion, up 44% from a year ago, with net income of $5.89 billion, down from $14.08 billion a year ago.

CEO Hock Tan said semiconductor revenue hit a record $30.1 billion in fiscal 2024 with AI revenue of $12.2 billion, up 220% from a year ago. Tan said AI revenue "was driven by our AI XPUs and Ethernet networking portfolio."

In the fourth quarter, software accounted for 41% of total revenue. A year ago, software was 21% of Broadcom's revenue.

Constellation Research analyst Holger Mueller said:

"Broadcom had a very good quarter year over year thanks to the popularity of its AI chips and the consolidation of the VMware business. And the AI business will keep growing for Broadcom, which according to CEO Hock Tan has had major design wins recently. We already know Google Cloud is a customer, and even Apple might become one. The revenue diversification with the VMware acquisition has made Broadcom more resilient, with semiconductors now 60% of revenue compared to about 80% a year ago.

The naysayers of the VMware acquisition need to tip their hat to Tan, who integrated the company in 12 months. And while everybody is chasing VMware customers, the skeptics will have to learn that re-certification of containers is an expensive business for enterprises. People keep complaining about VMware, but it hasn't seen major defections yet. Meanwhile, Broadcom may increase discounts if the customer attrition becomes painful." 


Adobe delivers strong Q4, record Firefly generations, but light outlook

Adobe reported better-than-expected fourth quarter results as customers leveraged AI tools across its platform, but its outlook fell short of expectations for fiscal 2025.

The company reported fourth quarter earnings of $3.79 a share on revenue of $5.61 billion, up 11% from a year ago. Non-GAAP earnings were $4.81 a share. Wall Street was looking for non-GAAP earnings of $4.67 a share on revenue of $5.54 billion.

For the quarter, Adobe's Digital Media unit had revenue of $4.15 billion, up 12% from a year ago. Document Cloud revenue was up 17% from a year ago and Creative Cloud revenue was up 10%. Digital Experience revenue was $1.4 billion, up 10% from a year ago.

Adobe said fiscal 2024 revenue was $21.51 billion, up 11% from a year ago, with earnings of $12.36 a share ($18.42 a share non-GAAP).

Shantanu Narayen, CEO of Adobe, said Adobe saw strong demand due to "the mission-critical role Creative Cloud, Document Cloud and Experience Cloud play in fueling the AI economy." Narayen said the company's Firefly family of models was "driving record customer adoption and usage."

Indeed, Firefly generations across the Adobe platform topped 16 billion.

As for the outlook, Adobe projected fiscal 2025 revenue of $23.3 billion to $23.55 billion with non-GAAP earnings of $20.20 a share to $20.50 a share. Wall Street was looking for $20.52 a share in earnings on revenue of $23.8 billion. Adobe said it expected currency fluctuations to hit earnings.

For the first quarter, Adobe projected $4.95 a share to $5 a share in non-GAAP earnings with revenue between $5.63 billion and $5.68 billion. Wall Street was expecting revenue of $5.72 billion.


How the San Jose Sharks Leverage Technology to Improve Guest Experience | CCE Convos

🎙️ Don't miss this fascinating conversation with Jonathan Becher of the San Jose Sharks. Becher explains to Holger Mueller how #technology is transforming the professional sports industry - from frictionless entry and cashless payments to dynamic advertising and advanced #analytics.

Several examples from the San Jose Sharks include...🦈🏒 

📌 Leveraging machine learning and object recognition to improve stadium security and entry processes
📌 Partnering with companies like CLEAR to enable biometric-based age verification and frictionless concessions
📌 Implementing "total takeover" advertising that dynamically inserts brand logos across the broadcast experience
📌 Exploring the use of #data and analytics to optimize player development and training

📺 Watch below for a glimpse into the future of sports, where technology is not just a behind-the-scenes enabler, but a driver of fan engagement and #business transformation.

Watch: https://www.youtube.com/embed/eHvdQWJWwlI

Practical quantum computing advances ramp up going into 2025

Classiq Technologies, Deloitte Tohmatsu and Mitsubishi said they have compressed quantum circuits by up to 97% in a move that reduces error rates and may accelerate practical enterprise use cases.

The news comes as practical developments in quantum computing have accelerated heading into 2025.

With that backdrop, it's clear that quantum computing is becoming more enterprise relevant. For instance, Classiq's collaboration with Deloitte Tohmatsu and Mitsubishi Chemical highlights one big use case: Materials development.

The companies are looking to develop new materials including new organic electroluminescent (EL) materials. By compressing quantum circuits, Classiq, Deloitte Tohmatsu and Mitsubishi Chemical said algorithms have lower error rates. "This result indicates that the circuit compression method used in this demonstration can be applied to various quantum circuits, not only in the chemical field. It is also relevant for the early practical application of quantum computers in a wide range of fields such as drug discovery, AI, finance, manufacturing and logistics," the companies said in a statement.
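
A back-of-the-envelope sketch shows why compressing a circuit cuts error rates: if each gate fails independently with probability p, the odds of a clean run fall off as (1 - p) raised to the gate count. The gate counts and error rate below are illustrative, not figures from the companies' demonstration.

```python
def success_probability(gates, p=0.001):
    # Probability that every gate in the circuit executes without error,
    # under a simple independent-failure model.
    return (1 - p) ** gates

original = 10_000   # hypothetical uncompressed gate count
compressed = 300    # roughly 97% compression, echoing the Classiq result

print(success_probability(original))    # effectively zero
print(success_probability(compressed))  # roughly 0.74
```

The exponential dependence on depth is the key point: shaving even a modest fraction of gates pays off multiplicatively, which is why circuit compression is treated as a path to practical use cases rather than a mere optimization.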

Constellation Research analyst Holger Mueller said:

"It is end of 2024 and in the real world, tangible use cases for quantum technology are rolling in. Today it is the turn of Classiq that is showing with partners and customers Deloitte Tohmatsu and Mitsubishi Chemical substantial acceleration of quantum based insights in new material development using Classiq tools and algorithms. This development makes appetite for more quantum based real world use cases in 2025."

Here's a look at some of the other quantum computing developments in recent days.

Google launches Willow

Google launched its latest quantum chip called Willow with strong error correction improvements and outlined its roadmap for quantum computing.

Willow is part of Google's 10-year effort to build out its quantum AI operations. The company said Willow moves it along the path to commercially relevant applications.

IonQ Quantum OS, Europe launch

IonQ announced its IonQ Quantum OS and new tools for its IonQ Hybrid Suite. IonQ said the platform is designed to power its flagship IonQ Forte and Forte Enterprise quantum systems.

According to IonQ, the new OS provides an average 50% reduction in on-system classical overhead, an 85% reduction in cloud and network workloads through IonQ Cloud and more than 100x improvement on accuracy.

IonQ's Hybrid Services suite gets a developer toolkit, Workload Management & Solver Service, to move hybrid workloads to the cloud, a new scheduling feature called Sessions, and an all-new software development kit.

Separately, IonQ launched its first Europe innovation center with IonQ Forte Enterprise. The effort is a partnership with QuantumBasel and designed to serve enterprises, governments and researchers.

More: IonQ's bet on commercial quantum computing working, acquires Qubitekk | IonQ's quantum computing bets: Quantum for LLM training, chemistry and enterprise use cases

AWS quantum moves

AWS launched its Quantum Embark Program and set off a stock-buying frenzy in quantum computing plays. The program, which is delivered by Amazon's Advanced Solutions Lab, focuses on use case discovery, technical enablement and a deep dive program.

Amazon Braket is providing the quantum compute capabilities.

Under the Quantum Embark Program, AWS is providing discovery workshops to identify use cases and how quantum computing can solve business problems. AWS is also providing workshops on how quantum computing works, runs applications and performs calculations. The deep dive is focused on more technical items and targeting applications.

At re:Invent 2024, AWS also said it is teaming up with Nvidia. Nvidia's open source quantum development environment, the CUDA-Q platform, will be added to Amazon Braket to combine with classical cloud compute resources.


Google launches Gemini 2.0 Flash, upgraded Trillium TPU generally available

Google launched its Gemini 2.0 family of models, led by Gemini 2.0 Flash, as its latest Trillium TPUs became generally available.

The new models, with Gemini 2.0 Flash available in AI Studio and Vertex AI, ride shotgun with multiple Google services as the search and cloud giant revs up its agentic AI plans.

Google said that it will launch new features for Project Astra, add an agentic web exploration prototype called Project Mariner to automate browser-based tasks, and roll out Jules, an AI coding agent, to trusted testers. For good measure, Google is exploring Gemini 2.0 for Games.

Today's launches are consumer focused in many ways, but they will make their way to Google Cloud and enterprises as well. Google's Trillium TPUs were used to train Gemini 2.0 and offer a 4x increase in training performance and a 3x gain in inference throughput relative to the previous generation TPU v5e instances. Google said Trillium instances are optimized for price and performance.

In a blog post, Alphabet CEO Sundar Pichai said Gemini 2.0 "is our most capable model yet." "With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant," said Pichai. 

Among the key items:

  • Gemini 2.0 Flash will be available for Gemini and Gemini Advanced users on desktop and mobile web.
  • Gemini 2.0 Flash is as fast as Gemini 1.5 Pro with gains in coding, reasoning and visual understanding.
  • Gemini 2.0 will power AI Overviews in Search this week.
  • Google is launching Deep Research, an agentic feature in Gemini Advanced on desktop and mobile web.
  • Gemini 2.0 Flash will be generally available in January with more model sizes to follow.

More: Google Cloud Q3 revenue up 35% from a year ago, Alphabet results shine | Google Cloud Vertex AI updates focus on the practical with Context Caching, grounding services | Google Cloud Next 2024: Google Cloud aims to be data, AI platform of choice | Google Cloud Next: The role of genAI agents, enterprise use cases

Google's announcements land as hyperscalers are racing to release their own workhorse models. Gemini 2.0 Flash is designed to be a workhorse. Amazon Web Services last week launched its Nova family of large language models and Trainium2. Meanwhile, Microsoft has been busy launching its own models as it diversifies away from OpenAI.

Here are a few other details about Gemini 2.0 Flash.

  • The model supports multimodal output, including native audio and inline images.
  • Gemini 2.0 Flash has a bidirectional streaming API, real-time voice interactions and conversation mechanics like interruptions.
  • Google said Gemini 2.0 Flash can access up-to-date information, perform calculations and interact with data sources.
  • It also has a single interface and unified SDK across AI Studio and Vertex AI.

Google DeepMind CEO Demis Hassabis and CTO Koray Kavukcuoglu wrote:

"In addition to supporting multimodal inputs like images, video and audio, 2.0 Flash now supports multimodal output like natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio. It can also natively call tools like Google Search, code execution as well as third-party user-defined functions."

Other items worth noting include:

  • Project Astra, which was outlined at Google I/O, is designed to be a universal AI assistant. With Gemini 2.0, Project Astra can converse in multiple languages and mix them, leverage Google Search, Lens and Maps, and has 10 minutes of in-session memory. 
  • Project Mariner is an early prototype built with Gemini 2.0, available via a Chrome extension. Project Mariner is 83.5% accurate on web tasks as a single agent, but it is slow. Google said the big takeaway is that it's technically possible to navigate within a browser with Project Mariner. 
  • Jules is an experimental AI-driven code agent for developers that integrates into GitHub workflows. Jules can take an issue, develop a plan and execute it for Python and JavaScript coding tasks.
  • Colab is getting a data science agent that creates notebooks and insights for anyone who uploads a dataset. With Gemini 2.0, a user can describe analysis goals in plain language and a notebook will be created. The agent will be in the trusted tester program before rolling out broadly in the first half of 2025.  
  • Deep Research launched in Gemini Advanced, which is upgraded with Gemini 2.0 Flash. Deep Research is an agent that explores topics and generates a report based on a multi-step research plan you can revise or approve. 
  • Gemini 2.0 for Games is an effort to have agents navigate video games, reason based on on-screen action and offer suggestions. Google said these experiments can build on Gemini 2.0's spatial reasoning and apply it to robotics. 

 
