Results

Anthropic outlines most popular Claude use cases

Anthropic's Claude models skew heavily toward business use, with the leading use cases being web and mobile application development, content creation, academic research and writing, career development, and optimizing AI and business strategy.

Those use cases were outlined by Anthropic using its Claude insights and observations (Clio) system, which was highlighted by Platformer. Clio is Anthropic's attempt to understand how its AI models are used and to spot potential security risks. Think of Clio as Anthropic's version of Google Trends.

In a blog post, Anthropic said Clio preserves the privacy of conversations by abstracting them into categories and clusters; all user data is anonymized and aggregated.
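Anthropic hasn't published Clio's internals here, but the abstract-then-aggregate idea can be sketched in a few lines of Python. The toy pipeline below is illustrative only (the clustering method, sample conversations and minimum-cluster threshold are all assumptions, not Anthropic's implementation): it converts raw conversations into numeric features, clusters them into broad topics, and reports only aggregate counts above a minimum size so no individual conversation is surfaced.

```python
# Toy "abstract-then-aggregate" sketch in the spirit of Clio.
# NOT Anthropic's code; method, data and threshold are illustrative assumptions.
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

conversations = [
    "help me debug a React component",
    "write a cover letter for a marketing role",
    "explain the methods section of this biology paper",
    "fix my Flask API endpoint",
    "summarize this research article",
    "draft a LinkedIn post about my promotion",
]

# 1. Abstract raw text into numeric features.
features = TfidfVectorizer().fit_transform(conversations)

# 2. Cluster similar conversations into broad topics.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# 3. Report only aggregate cluster sizes above a minimum count,
#    so no single conversation is exposed downstream.
MIN_CLUSTER_SIZE = 2
for cluster_id, size in sorted(Counter(labels).items()):
    if size >= MIN_CLUSTER_SIZE:
        print(f"cluster {cluster_id}: {size} conversations")
```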

Here's a look at the core use cases for Anthropic's models.

Web and mobile app development accounts for 10.4% of use cases, with 9.2% focused on content creation and communication.

It's safe to say Anthropic is positioned for enterprise use cases, but the company noted smaller clusters of Claude usage focused on dream interpretation, disaster preparedness, crossword puzzle hints and Dungeons & Dragons.

In a research paper, Anthropic also noted that usage varies by language. Spanish-language users focus on economics, child health and environmental conservation. Chinese-language users focus on writing crime, thriller and mystery fiction, as well as elderly care. For Japanese-language users, Claude usage revolves around anime and manga, economics and elderly care.

As for the system design, here's a look at how Clio is architected for analysts.

 

 


Broadcom continues to ride AI infrastructure wave with strong Q4

Broadcom reported better-than-expected fourth quarter earnings as its AI revenue continued to surge.

The company reported fourth quarter net income of $4.32 billion, or 90 cents a share, on revenue of $14.05 billion. Non-GAAP earnings for the quarter were $1.42 a share.

Wall Street was expecting Broadcom to report non-GAAP earnings of $1.39 a share on revenue of $14.06 billion.

As for the outlook, Broadcom projected first quarter revenue of $14.6 billion, up 22% from a year ago. Broadcom also raised its quarterly stock dividend by 11% to 59 cents a share in fiscal 2025.

Broadcom CEO Tan takes VMware victory lap: Will he go shopping again? | Rivals up pressure on VMware for enterprise migrations

Fourth quarter semiconductor revenue of $8.23 billion was up 59% from a year ago. Infrastructure software revenue, which is dominated by VMware, was $5.82 billion, up 196% from a year ago due to the VMware acquisition.

For fiscal 2024, Broadcom reported revenue of $51.57 billion, up 44% from a year ago, with net income of $5.89 billion, down from $14.08 billion a year ago.

CEO Hock Tan said semiconductor revenue hit a record $30.1 billion in fiscal 2024 with AI revenue of $12.2 billion, up 220% from a year ago. Tan said AI revenue "was driven by our AI XPUs and Ethernet networking portfolio."

In the fourth quarter, software accounted for 41% of total revenue. A year ago, software was 21% of Broadcom's revenue.

Constellation Research analyst Holger Mueller said:

"Broadcom had a very good quarter year over year thanks to the popularity of its AI chips and the consolidation of the VMware business. And the AI business will keep growing for Broadcom, which according to CEO Hock Tan has had major design wins recently. We already know Google Cloud is a customer, and even Apple might become one. The revenue diversification with the VMware acquisition has made Broadcom more resilient, with semiconductors now 60% of revenue compared to about 80% a year ago.

The naysayers of the VMware acquisition need to tip their hat to Tan, who integrated the company in 12 months. And while everybody is chasing VMware customers, the skeptics will have to learn that re-certification of containers is an expensive business for enterprises. People keep complaining about VMware, but it hasn't seen major defections yet. Meanwhile, Broadcom may increase discounts if the customer attrition becomes painful." 


Adobe delivers strong Q4, record Firefly generations, but light outlook

Adobe reported better-than-expected fourth quarter results as customers leveraged AI tools across its platform, but its outlook fell short of expectations for fiscal 2025.

The company reported fourth quarter earnings of $3.79 a share on revenue of $5.61 billion, up 11% from a year ago. Non-GAAP earnings were $4.81 a share. Wall Street was looking for non-GAAP earnings of $4.67 a share on revenue of $5.54 billion.

For the quarter, Adobe's Digital Media unit had revenue of $4.15 billion, up 12% from a year ago. Document Cloud revenue was up 17% from a year ago and Creative Cloud revenue was up 10%. Digital Experience revenue was $1.4 billion, up 10% from a year ago.

Adobe said fiscal 2024 revenue was $21.51 billion, up 11% from a year ago, with earnings of $12.36 a share ($18.42 a share non-GAAP).

Shantanu Narayen, CEO of Adobe, said Adobe saw strong demand due to "the mission-critical role Creative Cloud, Document Cloud and Experience Cloud play in fueling the AI economy." Narayen said the company's Firefly family of models was "driving record customer adoption and usage."

Indeed, Firefly generations across the Adobe platform topped 16 billion.

As for the outlook, Adobe projected fiscal 2025 revenue of $23.3 billion to $23.55 billion with non-GAAP earnings of $20.20 a share to $20.50 a share. Wall Street was looking for $20.52 a share in earnings on revenue of $23.8 billion. Adobe said it expected currency fluctuations to hit earnings.

For the first quarter, Adobe projected $4.95 a share to $5 a share in non-GAAP earnings with revenue of $5.63 billion to $5.68 billion. Wall Street was expecting revenue of $5.72 billion.


How the San Jose Sharks Leverage Technology to Improve Guest Experience | CCE Convos

🎙️ Don't miss this fascinating conversation with Jonathan Becher of the San Jose Sharks. Becher explains to Holger Mueller how #technology is transforming the professional sports industry - from frictionless entry and cashless payments to dynamic advertising and advanced #analytics.

Several examples from the San Jose Sharks include...🦈🏒 

📌 Leveraging machine learning and object recognition to improve stadium security and entry processes
📌 Partnering with companies like CLEAR to enable biometric-based age verification and frictionless concessions
📌 Implementing "total takeover" advertising that dynamically inserts brand logos across the broadcast experience
📌 Exploring the use of #data and analytics to optimize player development and training

📺 ⬇️ Watch below for a glimpse into the future of sports, where technology is not just a behind-the-scenes enabler, but a driver of fan engagement and #business transformation.

Watch on YouTube: https://www.youtube.com/watch?v=eHvdQWJWwlI

Practical quantum computing advances ramp up going into 2025

Classiq Technologies, Deloitte Tohmatsu and Mitsubishi Chemical said they have compressed quantum circuits by up to 97%, a move that reduces error rates and may accelerate practical enterprise use cases.

The news comes as practical development in quantum computing has accelerated heading into 2025, with recent moves from Google, IonQ and AWS outlined below.

With that backdrop, it's clear that quantum computing is becoming more enterprise relevant. For instance, Classiq's collaboration with Deloitte Tohmatsu and Mitsubishi Chemical highlights one big use case: Materials development.

The companies are looking to develop new materials, including new organic electroluminescent (EL) materials, and said that by compressing quantum circuits the algorithms run with lower error rates. "This result indicates that the circuit compression method used in this demonstration can be applied to various quantum circuits, not only in the chemical field. It is also relevant for the early practical application of quantum computers in a wide range of fields such as drug discovery, AI, finance, manufacturing and logistics," the companies said in a statement.
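Classiq has not detailed the compression method in this announcement, but the general idea of circuit compression, rewriting a circuit so it performs the same computation with fewer gates and therefore fewer chances for error, can be illustrated with an open-source transpiler. The sketch below uses Qiskit's optimizer purely as a stand-in, not Classiq's tooling, and the toy circuit is an assumption: it builds a deliberately redundant circuit and shows the gate count and depth dropping after optimization.

```python
# Generic illustration of quantum circuit compression using Qiskit's transpiler.
# This is a stand-in, not Classiq's method; the toy circuit is an assumption.
from qiskit import QuantumCircuit, transpile

# A deliberately redundant 2-qubit circuit: back-to-back H and CX gates cancel.
qc = QuantumCircuit(2)
qc.h(0)
qc.h(0)          # cancels the previous H
qc.cx(0, 1)
qc.cx(0, 1)      # cancels the previous CX
qc.h(0)
qc.cx(0, 1)

print("before:", qc.size(), "gates, depth", qc.depth())

# Optimization level 3 applies gate-cancellation and resynthesis passes.
compressed = transpile(qc, optimization_level=3)

print("after: ", compressed.size(), "gates, depth", compressed.depth())
```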

Constellation Research analyst Holger Mueller said:

"It is the end of 2024 and, in the real world, tangible use cases for quantum technology are rolling in. Today it is the turn of Classiq, which is showing with partners and customers Deloitte Tohmatsu and Mitsubishi Chemical a substantial acceleration of quantum-based insights in new material development using Classiq tools and algorithms. This development whets the appetite for more quantum-based real-world use cases in 2025."

Here's a look at some of the other quantum computing developments in recent days.

Google launches Willow

Google launched its latest quantum chip called Willow with strong error correction improvements and outlined its roadmap for quantum computing.

Willow is part of Google's 10-year effort to build out its quantum AI operations. The company said Willow moves it along the path to commercially relevant applications.

IonQ Quantum OS, Europe launch

IonQ announced its IonQ Quantum OS and new tools for its IonQ Hybrid Suite. IonQ said the platform is designed to power its flagship IonQ Forte and Forte Enterprise quantum systems.

According to IonQ, the new OS provides an average 50% reduction in on-system classical overhead, an 85% reduction in cloud and network workloads through IonQ Cloud and a more than 100x improvement in accuracy.

IonQ's Hybrid Services suite gets a developer toolkit, Workload Management & Solver Service, to move hybrid workloads to the cloud, a new scheduling feature called Sessions, and an all-new software development kit.

Separately, IonQ launched its first Europe innovation center with IonQ Forte Enterprise. The effort is a partnership with QuantumBasel and designed to serve enterprises, governments and researchers.

More: IonQ's bet on commercial quantum computing working, acquires Qubitekk | IonQ's quantum computing bets: Quantum for LLM training, chemistry and enterprise use cases

AWS quantum moves

AWS launched its Quantum Embark Program and set off a stock-buying frenzy in quantum computing plays. The program, which is delivered by Amazon's Advanced Solutions Lab, focuses on use case discovery, technical enablement and a deep dive program.

Amazon Braket is providing the quantum compute capabilities.
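For teams evaluating the program, the entry point on the compute side is straightforward. The snippet below is a minimal Braket SDK sketch, not part of the Embark curriculum: it runs a Bell-state circuit on the local simulator, and pointing the device at a managed simulator or QPU ARN is how the same code reaches cloud hardware. The circuit and shot count are illustrative choices.

```python
# Minimal Amazon Braket sketch: build a Bell-state circuit and run it on the
# local simulator. Point `device` at a managed device ARN to use cloud hardware.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

bell = Circuit().h(0).cnot(0, 1)   # entangle qubits 0 and 1

device = LocalSimulator()
result = device.run(bell, shots=1000).result()

# Expect roughly half '00' and half '11' outcomes.
print(result.measurement_counts)
```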

Under the Quantum Embark Program, AWS is providing discovery workshops to identify use cases and show how quantum computing can solve business problems. AWS is also providing workshops on how quantum computing works, runs applications and performs calculations. The deep dive module is focused on more technical items and target applications.

At re:Invent 2024, AWS also said it is teaming up with Nvidia. Nvidia's open source quantum development environment, the CUDA-Q Platform, will be added to Amazon Braket to combine with classical cloud compute resources.


Google launches Gemini 2.0 Flash, upgraded Trillium TPU generally available

Google launched its Gemini 2.0 family of models, starting with Gemini 2.0 Flash, as its latest Trillium TPUs became generally available.

The new models ride shotgun with multiple Google services as the search and cloud giant revs its agentic AI plans; Gemini 2.0 Flash will be available in AI Studio and Vertex AI.

Google said that it will launch new features for Project Astra, add an agentic web exploration prototype called Project Mariner to automate browser-based tasks, and roll out Jules, an AI coding agent, to trusted testers. For good measure, Google is exploring Gemini 2.0 for Games.

The launches today are consumer-focused in many ways, but they make their way to Google Cloud and enterprises as well. Google's Trillium TPUs have been used to train Gemini 2.0 and offer a 4x increase in training performance and a 3x gain in inference throughput relative to the previous-generation TPU v5e instances. Google said Trillium instances are optimized for price and performance.

In a blog post, Alphabet CEO Sundar Pichai said Gemini 2.0 "is our most capable model yet." "With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant," said Pichai. 

Among the key items:

  • Gemini 2.0 Flash will be available for Gemini and Gemini Advanced users on desktop and mobile Web.
  • Gemini 2.0 Flash is as fast as Gemini 1.5 Pro with gains in coding, reasoning and visual understanding.
  • Gemini 2.0 will power AI Overviews in Search this week.
  • Google is launching Deep Research, an agentic feature in Gemini Advanced on desktop and mobile web.
  • Gemini 2.0 Flash will be generally available in January with more model sizes to follow.

Google Cloud Q3 revenue up 35% from a year ago, Alphabet results shine | Google Cloud Vertex AI updates focus on the practical with Context Caching, grounding services | Google Cloud Next 2024: Google Cloud aims to be data, AI platform of choice | Google Cloud Next: The role of genAI agents, enterprise use cases

Google's announcements land as hyperscalers are racing to release their own workhorse models. Gemini 2.0 Flash is designed to be a workhorse. Amazon Web Services last week launched its Nova family of large language models and Trainium2. Meanwhile, Microsoft has been busy launching its own models as it diversifies away from OpenAI.

Here are a few other details about Gemini 2.0 Flash.

  • The model supports multimodal output, including audio and inline images.
  • Gemini 2.0 Flash has a bidirectional streaming API, real-time voice interactions and conversation mechanics like interruptions.
  • Google said Gemini 2.0 Flash can access up-to-date information, perform calculations and interact with data sources.
  • It also has a single interface and unified SDK across AI Studio and Vertex AI, as sketched below.
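That unified SDK point is worth unpacking: the same client code can target AI Studio with an API key or Vertex AI with a Google Cloud project. Below is a minimal sketch using the google-genai Python SDK; the model identifier and prompt are assumptions and may differ from what Google ultimately ships.

```python
# Minimal sketch of calling Gemini 2.0 Flash through the unified google-genai SDK.
# The model identifier and prompt are illustrative assumptions.
from google import genai

# AI Studio: authenticate with an API key.
client = genai.Client(api_key="YOUR_API_KEY")

# Vertex AI: the same client class, pointed at a GCP project instead.
# client = genai.Client(vertexai=True, project="your-project", location="us-central1")

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="Summarize the key Gemini 2.0 announcements in two sentences.",
)
print(response.text)
```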

Google DeepMind CEO Demis Hassabis and CTO Koray Kavukcuoglu wrote:

"In addition to supporting multimodal inputs like images, video and audio, 2.0 Flash now supports multimodal output like natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio. It can also natively call tools like Google Search, code execution as well as third-party user-defined functions."

Other items worth noting include:

  • Project Astra, which was outlined at Google I/O, is designed to be a universal AI assistant. With Gemini 2.0, Project Astra can converse in multiple languages and mix them, leverage Google Search, Lens and Maps, and has 10 minutes of in-session memory. 
  • Project Mariner is an early prototype built with Gemini 2.0, available via a Chrome extension. Project Mariner is 83.5% accurate working on web tasks as a single agent, but it is slow. Google said the big takeaway is that it's technically possible to navigate within a browser with Project Mariner.
  • Jules, an agent for developers, is an experimental AI-driven code agent that integrates into GitHub workflows. Jules can take a GitHub issue, plan a fix and execute it for Python and JavaScript coding tasks.
  • Colab's data science agent creates notebooks and insights for anyone who uploads a dataset. With Gemini 2.0, a user can describe analysis goals in plain language and a notebook will be created. The agent will be in the trusted tester program before rolling out broadly in the first half of 2025.
  • Deep Research launched in Gemini Advanced, which is upgraded with Gemini 2.0 Flash. Deep Research is an agent that explores topics and generates a report based on a multi-step research plan that you can revise or approve.
  • Gemini 2.0 for Games is an effort to have agents navigate video games, reason based on the action on the screen and offer suggestions. Google said these experiments can build on Gemini 2.0's spatial reasoning and apply it to robotics.

 


LIVE from AWS re:Invent | Unified Data, Simplified Contracting, Accelerated Customer Innovation

Couldn't make it to #AWSreinvent? No problem - the Constellation Research team has you covered!💡🎙️ 

Constellation analysts Liz Miller, Doug Henschen, Holger Mueller and Larry Dignan went live from the Amazon Web Services (AWS) recording studio to break down the biggest #news from #reInvent2024 and what it means for #enterprises. Topics ranged from SageMaker announcements to the hardware fueling AWS's #AI capabilities.

Watch the full discussion to hear Constellation's take on how AWS is transforming its platform to deliver unified #data, simplified contracting, and accelerated customer #innovation.

Watch on ConstellationTV: https://www.youtube.com/watch?v=Ck9y1RQYONc

How Singapore's M1 Cloud Transformation Led to Personalized Mobile Plans | Supernova Award Spotlight

SuperNova Award Winner Marko Cetkovic, Chief Digital Officer at M1, shares with Larry Dignan how the Singaporean telco underwent a major digital transformation to meet changing customer needs and stay competitive in a crowded market.

Here are a few key takeaways:

 📌 Adopted a cloud-first, microservices-based architecture to enhance agility and time-to-market
📌 Leveraged a data lake and advanced analytics to enable hyper-personalized consumer offerings
📌 Partnered with leading technology providers like Salesforce to build a flexible, future-proof stack

Watch the full interview for an inspiring story of how a traditional telco can reinvent itself through bold digital initiatives. Read the article by Larry Dignan here: https://www.constellationr.com/blog-news/insights/supernova-award-spotlight-how-singapores-m1-cloud-transformation-led-personalized

Watch on Insights: https://www.youtube.com/watch?v=v3Eo067aE-4

Supernova Award Spotlight: How Singapore's M1 Cloud Transformation Led to Personalized Mobile Plans

M1, Singapore's first digital network operator, has made personalization the cornerstone of its digital transformation, but first it had to retool its infrastructure.

We caught up with Marko Cetkovic, Chief Digital Officer at M1, to talk about the company's Supernova Award for Customer Experience, application consolidation and moving 90% of its systems to the cloud.

M1's approach set up a business model where it can personalize wireless plans at scale.

Here's a look at some of the takeaways.

The project. In 2019, M1 had a new leadership team and it was clear that the Singapore market demanded digital offerings and expected personalization. "M1 is in a very competitive market with four large operators and dozens of national broadband networks for about 6 million people," said Marko. "Internally we realized that we needed faster time to market and reaction time. Our legacy systems had data scattered across multiple databases. Our vision was to have hyperpersonalized plans."

Personalization. Today, M1 can have a personalized plan for each customer. That personalization allows M1 to micro-segment customers using data from a data lake and its customer data platform. "We can provide hyper-personalized content and offerings to the customer when they are interacting with us," said Marko.
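As a rough illustration of what micro-segmentation on top of a data lake can look like, the toy sketch below groups customers by usage behavior pulled from such a store and maps each segment to a plan offer. The columns, thresholds and offers are invented for the example; this is not M1's actual model.

```python
# Toy micro-segmentation sketch: bucket customers by usage pulled from a data
# lake, then attach a plan offer per segment. Columns, cut-offs and offers are
# invented for illustration; this is not M1's model.
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "monthly_gb": [2.5, 40.0, 95.0, 12.0],
    "roaming_days": [0, 6, 1, 0],
})

def segment(row: pd.Series) -> str:
    if row["monthly_gb"] > 60:
        return "heavy_data"
    if row["roaming_days"] > 3:
        return "frequent_traveller"
    return "light_user"

offers = {
    "heavy_data": "unlimited data add-on",
    "frequent_traveller": "regional roaming bundle",
    "light_user": "low-cost flexi plan",
}

customers["segment"] = customers.apply(segment, axis=1)
customers["offer"] = customers["segment"].map(offers)
print(customers[["customer_id", "segment", "offer"]])
```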

Legacy systems and tech debt. Marko said M1 decided to adopt a cloud-first vision with microservices and applications based on API connections.

"We really completely changed the delivery paradigm, from something that was more like design, deliver and manage to something more agile, based on discovering a solution and on best-of-breed products stitched together. Doing all of that was really brand new to us, but also to our partners. It required a lot of learning, a lot of cultural shift as well, and persistent leadership."

The data strategy. M1 adopted the data lake as it switched to its best-of-breed model across channels. "As we were developing the solution, we were developing our data capabilities, ingesting the loads that were most critical and then enriching them," said Marko. "We are also reaching for our AI/ML capabilities to fuel our personalization engine. It was evolutionary and also revolutionary with a fresh outlook."

GenAI. Marko said that M1's cloud strategy and data lake set it up for generative AI. He said:

"The other part of our innovation journey is also partnering (with Infosys and others) in terms of capabilities and investments in innovation with pilots and proof of concepts. We are exploring together with partners to actually hit that kind of a secret formula for the future when it comes to Gen AI. This year is about awareness and learning and raising the capabilities internally. 2025 will be more about the specific use cases and outcomes."

Build vs. buy. M1's approach is to partner with strategic vendors like Infosys, Nokia and Salesforce to evolve products. From there, M1 tailors and develops applications. "We really had to integrate and continually develop this solution over 30 releases. Nothing was really ready made and delivered," said Marko, who added that the customization and build approach is focused on competitive advantage.


SAP TechEd 2024: Top 3 Takeaways With Holger Mueller

📣 SAP TechEd 2024 was packed with exciting new announcements critical for the SAP ecosystem. Watch below for Constellation analyst Holger Mueller's top takeaways ⬇️ 

💡 Joule - SAP's AI assistant is now integrated across SAP #applications, providing code explanations and logical reasoning capabilities. It helps onboard users to the SAP Business Technology Platform, automating otherwise complex tasks. Developers can extend Joule with their own #AI models.

💡 ABAP Empowerment - ABAP is no longer a second-class citizen, with full integration into SAP Build. #Developers can now choose the best tool for the job - professional code, low-code/no-code, or ABAP - all within a single pane of glass.

💡 Data Lake Support - SAP is addressing the limitations of HANA with a new #datalake capability. This open, scalable #data platform will be a critical foundation for AI and #analytics across SAP and non-SAP data sources.

What were your key takeaways from #SAPTechEd 2024? Let Holger know your thoughts!

Watch on YouTube: https://www.youtube.com/watch?v=HsoLY-4oNUg