Results

Hitachi Vantara expands AI infrastructure footprint

Hitachi Vantara in recent weeks has built out its AI infrastructure offerings with the aim of expanding its market share and genAI footprint in data centers.

The company said that its Hitachi iQ platform is now available for Nvidia HGX systems. Hitachi iQ coupled with Nvidia HGX provides tailored systems for multiple industries and use cases including inferencing, large language models, model training, analytics and digital twins.

According to Hitachi Vantara, Hitachi iQ with Nvidia HGX combines storage, networking and servers with Nvidia H100 and H200 Tensor Core GPUs along with Nvidia AI Enterprise. Hitachi iQ became generally available in July.

Hitachi iQ with Nvidia HGX offers enhanced file system data processing, a zero-copy architecture, and an updated Hitachi Content Software for File platform that combines the latest AMD EPYC servers with Nvidia InfiniBand or Ethernet networking.

Last week, Hitachi Vantara announced a new quad level cell (QLC) flash storage array with public cloud replication and an object storage appliance as part of its Virtual Storage Platform One platform. The new systems are designed for AI and analytics workloads.

Key points include:

  • Hitachi Vantara offers dual-port QLC media to maintain data access if a hardware failure occurs.
  • QLC flash storage has more density and lower power consumption than traditional systems.
  • Virtual Storage Platform One Block is the QLC flash storage array with public cloud replication.
  • Hitachi Vantara is using Samsung's dual-ported 30TB QLC media on VSP One Block.
  • Virtual Storage Platform One Object is a storage appliance that has multi-node configurations for industries like media, healthcare and finance.
  • The QLC flash options complement Virtual Storage Platform One SDS Cloud, which has seamless replication from on-premises to AWS.

To round out its AI efforts, Hitachi Vantara outlined a partnership with Hammerspace to bring data observability, integration and workload tools to its AI infrastructure.

Under the partnership, Hammerspace technology will be used in Hitachi Vantara's converged systems for AI workloads as well as Hitachi iQ.

The companies said Hammerspace's data orchestration software will enable Hitachi Vantara to unify data access, provide data for in-place AI or consolidate data into a central data lake.



IBM Redefines the 'Science of Consulting' with Generative AI

R "Ray" Wang recently sat down with Mohamad Ali of IBM Consulting to discuss the transformative power of generative AI in the consulting industry.

Ali shared how IBM is redefining the 'Science of Consulting' by seamlessly integrating technology like LLMs and digital workers into its end-to-end service offerings. From driving efficiency and cost savings to unlocking new business models and capabilities, this conversation offers a glimpse into the future of the consulting profession.

Learn more about AI-powered consulting becoming a reality.

On ConstellationTV: https://www.youtube.com/embed/iqVwtyhIKj0?si=6DGfCa8PiOgLwtAA

API platform Kong raises $175 million in venture funding

Kong, a startup focused on cloud API technologies, raised $175 million in Series E financing at a valuation of $2 billion.

The round was led by Tiger Global and Balderton and included additional investment from Andreessen Horowitz, Index Ventures and others. Kong has raised $345 million to date.

According to Kong CEO Augusto Marietti, the funding will help the company build out its API platform to manage, secure and observe internal and external APIs and expand sales operations. Kong is looking to capitalize on a surge in API call demand largely due to cloud connections to large language models.

Constellation ShortList™ API Management (APIM)

Constellation Research analyst Holger Mueller said:

"It's good to see funding going to vendors who can bridge the need of AI for data and process answers. In the 21st century these answers should come from APIs. These capabilities will be critical to power the next wave of genAI, which will be able to look into transaction data and APIs for better automation of next generation applications."

Kong provides an API platform so enterprises in multiple industries can build applications and centralize security, governance and visibility. The company's products include Kong Cloud Gateway, AI Gateway, Mesh, Gateway Manager and Insomnia, a development tool.
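Kong's gateway can be driven declaratively, so centralizing security and governance looks roughly like the following configuration sketch (the service name, upstream URL and limits here are hypothetical illustrations, not details from the article):

```yaml
_format_version: "3.0"

services:
  - name: llm-backend                # hypothetical upstream serving LLM calls
    url: https://llm.internal.example.com
    routes:
      - name: chat-route
        paths:
          - /v1/chat
    plugins:
      - name: key-auth               # centralized authentication at the gateway
      - name: rate-limiting          # throttle the surge in API call volume
        config:
          minute: 60
          policy: local
```

Because plugins such as key-auth and rate-limiting attach at the gateway, every API behind it inherits the same policy without per-service code changes, which is the centralization pitch Kong is making.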


Microsoft launches Copilot Actions, Agents in SharePoint at Ignite

Microsoft rolled out more than 80 new products and features to round out its Copilot and artificial intelligence stack as it moved to position itself as an early enterprise AI leader and a platform that can provide model choices.

At Ignite in Chicago, Microsoft made the case that enterprises are betting on its platform. The company noted that about 70% of the Fortune 500 use Microsoft 365 Copilot and cited BlackRock as a company that has consolidated on Microsoft Azure.

The flurry of announcements from Ignite came just a few days before Amazon Web Services' re:Invent conference. Simply put, Microsoft is using Ignite to front-run its vision of generative AI and agentic AI orchestration as AWS is likely to hit similar themes.

At a high level, Microsoft is reinforcing recent themes. For instance, Microsoft wants every employee to leverage Copilot as a personal assistant, with AI agents rolled out to automate business processes. These agents would be designed and built in Copilot Studio. Microsoft said enterprises will have multiple agents to orchestrate.

During the Ignite keynote, Microsoft CEO Satya Nadella aimed to position the company's army of agents and Copilots as productivity enhancers that drive returns. 

Returns on investment--whether from agents, performance, hardware or cloud--were a common theme for Nadella. Microsoft launched Copilot Analytics to track returns on agents and Copilots.

"After users start using copilot and all these agents, one of the fundamental things that all business leaders want to do is to figure out and measure ROI," said Nadella.

Here's a look at some of the more interesting Ignite announcements:

  • Copilot Actions. Copilot Actions are designed to let employees automate everyday tasks such as summarizing meetings, catching up on email after vacation and pulling together summaries across apps.
  • Agents in SharePoint. Microsoft said SharePoint will get an agent that can ground information in corporate content. Users can customize agents that are focused on specific SharePoint assets.
  • Turnkey agents. Microsoft outlined agents that are in various stages of preview. Interpreter offers real-time translation. Employee Self-Service Agent is in private preview in Business Chat and can answer most common questions and actions in HR and IT tasks.
  • Azure AI Foundry. Azure AI Foundry is designed to give companies a common platform to create, customize and manage AI apps. Azure AI Foundry includes all Azure AI services and tooling with previews of the Azure AI Foundry SDK, Azure AI Foundry portal and Azure AI Foundry Agent Service.
  • Model choices. Azure AI Foundry offers more than 1,800 models along with experimentation tools so customers can choose the best large language model for each use case.
  • Windows 365 Link. Microsoft launched a device that's connected to Windows 365 and will be available in April for $349. Windows 365 Link is a spin on the thin client and doesn't have local data or apps.
  • Microsoft Fabric enhancements. The company launched the preview of Fabric Databases, which includes SQL to create a unified data platform that can leverage transactional and analytic data. Customers will be able to automatically replicate apps to OneLake and autoscale databases.

In addition, Microsoft Fabric will get Open Mirroring, a capability that can bring any app, data provider or data store to OneLake. Microsoft also said the OneLake catalog is generally available.

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"Microsoft pushes ahead with AI, adding agentic capabilities - and even showing a software demo at the IT-centric Ignite. Microsoft was able to 'protect' the Copilot franchise by adding agentic capabilities - the challenge is that OpenAI models lack in performance, as the conversational customer service demos showed (even in demo mode). The way to hide and bundle complexity is via the application server, and the application server is back with Azure AI Foundry. It was good to see the variety of Azure compute coming - and the focus on pushing Azure Arc further - to the edge. Finally, things are happening in quantum, and Microsoft is going down the logical qubit route by partnering with Quantinuum and Atom Computing."

Constellation Research analyst Doug Henschen said the Fabric news may be notable for enterprises. Henschen said:

"Fabric was previously focused entirely on analytical use cases, but with the release of Fabric Databases and, specifically, the SQL database in Fabric option, the platform now supports the development of operational (a.k.a. transactional) applications on the same platform. The tie between operational and analytical is further cemented by automated replication from the SQL database into OneLake and the new Open Mirroring option, whereby data from any app or data source can be mirrored with low-latency change data capture capabilities, so that only the changes in data are continuously replicated to OneLake.

These options are very much like the “No ETL” ties AWS has introduced between database services such as Amazon Aurora and Amazon Redshift. Nonetheless, the addition of an operational database option and support for CDC replication rounds out Fabric as a platform for data-driven applications as well as data-driven insights."
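The change-data-capture pattern Henschen describes, in which only changed rows flow to the target rather than full reloads, can be sketched in a few lines of Python (a toy row-diff model for intuition; production CDC reads the database's transaction log rather than diffing snapshots):

```python
# Toy sketch of change-data-capture (CDC) replication: only rows that changed
# since the last sync are shipped to the target (a stand-in for OneLake here).
# Illustrative only -- real services work at the storage/log level.

def capture_changes(previous, current):
    """Diff two {key: row} snapshots into insert/update/delete events."""
    changes = []
    for key, row in current.items():
        if key not in previous:
            changes.append(("insert", key, row))
        elif previous[key] != row:
            changes.append(("update", key, row))
    for key in previous:
        if key not in current:
            changes.append(("delete", key, None))
    return changes

def apply_changes(target, changes):
    """Replay captured change events onto a mirror table."""
    for op, key, row in changes:
        if op == "delete":
            target.pop(key, None)
        else:
            target[key] = row
    return target

source_v1 = {1: {"qty": 5}, 2: {"qty": 3}}
source_v2 = {1: {"qty": 7}, 3: {"qty": 1}}  # row 1 updated, 2 deleted, 3 inserted

mirror = dict(source_v1)                    # initial full copy
events = capture_changes(source_v1, source_v2)
apply_changes(mirror, events)
print(len(events))                          # 3 change events, not a full reload
print(mirror == source_v2)                  # mirror now matches the source
```

The payoff is that the replication cost scales with the number of changes, not the size of the table, which is what makes continuous low-latency mirroring feasible.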


Nvidia outlines Google Quantum AI partnership, Foxconn deal

Nvidia said that it is working with Google Quantum AI to design quantum computing AI processors. The quantum computing partnership with Google was part of a series of Nvidia announcements at the SC24 conference in Atlanta.

According to Nvidia, Google Quantum AI is using the Nvidia Cuda-Q platform to simulate designs. Google Quantum AI uses Cuda-Q, a hybrid quantum-classical computing platform and the Nvidia Eos supercomputer to simulate the physics of its quantum processors.

Hybrid quantum computing is moving to the forefront since there's the potential to solve complex commercial problems sooner. Specifically, Google Quantum AI generates simulations based on the 1,024 Nvidia H100 Tensor Core GPUs in the Nvidia Eos supercomputer.

The quantum partnership with Google was one of the main items outlined during Nvidia CEO Jensen Huang's keynote. Huang highlighted AI applications for science including drug discovery, climate forecasting and quantum computing.

"AI will accelerate scientific discovery, transforming industries and revolutionizing every one of the world’s $100 trillion markets," said Huang.

In addition, Nvidia said it is scaling production via a partnership with Foxconn. The company also announced the general availability of the Nvidia H200 NVL, a PCIe GPU based on the Nvidia Hopper architecture for low-power, air-cooled data centers.

Other Nvidia items at SC24 include:

  • CorrDiff NIM and FourCastNet NIM, two new microservices for climate change modeling and simulation on the Earth-2 platform. The Earth-2 platform is a digital twin for simulating weather and climate conditions.
  • cuPyNumeric library, which uses GPUs to accelerate NumPy for applications in data science, machine learning and numerical computing.
  • Nvidia launched the Nvidia Omniverse Blueprint for real-time development of digital twins.
  • Nvidia highlighted its open-source BioNeMo Framework, which is used for drug discovery. The company also launched DiffDock 2.0, which is a tool for predicting how drugs bind to target proteins.
  • The company also highlighted the Nvidia Alchemi NIM microservice which couples generative AI to chemistry.



GenAI's 2025 disconnect: The buildout, business value, user adoption and CxOs

The gap between generative AI's "build it and they will come" folks and the enterprises looking for actual business value may be widening. The next year will be interesting for generative AI trickledown economics.

First, let's touch on the boom market.

  • Nvidia will report third quarter earnings Nov. 20 and rest assured it'll be the most important report of the year (again). Funny how we say that about Nvidia every quarter. Demand will remain off the charts and the hyperscalers and countries are begging to spend billions on Nvidia's AI accelerators. Analysts are looking for non-GAAP third quarter earnings of 74 cents a share on revenue of $32.94 billion, up from $18.12 billion a year ago.
  • CoreWeave, the AI hyperscale cloud provider, just closed a minority investment round of $650 million led by Jane Street, Magnetar, Fidelity and Macquarie Capital, with additional participation from Cisco Investments, Pure Storage, BlackRock, Coatue, Neuberger Berman and others. CoreWeave is valued at about $23 billion, according to Reuters.
  • SoftBank Corp. will be among the first to build out an AI supercomputer using Nvidia's Blackwell platform. SoftBank will get Nvidia's first DGX B200 systems with plans to build out an Nvidia DGX SuperPOD supercomputer. SoftBank floated debt to be first in line.

And on it goes. You could round up the AI building boom weekly. All of the hyperscalers are spending big on AI factories. Microsoft, Amazon, Meta and Alphabet all said capital spending on AI will continue to surge.

Microsoft CFO Amy Hood said:

"Roughly half of our cloud and AI-related spend continues to be for long-lived assets that will support monetization over the next 15 years and beyond. The remaining cloud and AI spend is primarily for servers, both CPUs and GPUs, to serve customers based on demand signals.”

I couldn't help but think of what Oracle CTO Larry Ellison said on the company's most recent earnings call. Ellison said: "I went out to dinner with Jensen (Huang) and Elon (Musk) at Nobu in Palo Alto. I would describe the dinner as begging Jensen for GPUs. Please take our money. In fact, take more of it. You're not taking enough of it. It went well. The demand for GPUs and the desire to be first is a big deal."

We know the drill. The tech industry is betting that the real risk is not plowing billions (if not trillions) into the AI buildout. I'd recommend reading a contrarian argument about irrational AI data center exuberance, but your guess is as good as mine on the timing.

The disconnect between the AI buildout side and the business value side may be widening. 2025 is going to be a year of business value, following 2024's push into genAI production and 2023's proof of concepts. Yes, enterprises are going to need real genAI budgets, change management and returns in 2025.

Bottom line: The trickledown economics of generative AI at the beginning of 2024 hasn’t exactly trickled down beyond the infrastructure layer.

Can vendors monetize genAI value?

For real genAI value to occur, the application layer will need to be built out. LLM players, think Anthropic and OpenAI, are going to need apps to go with their models. Vendor monetization models are a bit fuzzy now. ServiceNow is clearly benefiting, but other software companies are seeing mixed results. Here's a sampling of recent comments and there will be a bunch more as SaaS earnings season kicks off soon.

Monday CEO Eran Zinman:

"Total AI actions grew more than 250% in Q3 compared to Q2. And the AI blocks grew 150% from Q2. So overall, we see more and more customers adopt those blocks, people incorporate them into their automation. They create a lot of processes within the product that involves AI within that. And over time, we are planning to roll out the monetization tied with AI, where we're going to generate clear and efficient value for our customers."

Zinman was then asked whether 2025 will be the year for AI monetization. "We don't have a specific date, but it might be in 2025," said Zinman. "But we can't commit to that."

Translation: Monday needs to show value to get the money.

Hood danced around monetization, but did say that analysts need to think about the long game.

"We remain focused on strategically investing in the long-term opportunities that we believe drive shareholder value. Monetization from these investments continues to grow, and we're excited that only 2.5 years in, our AI business is on track to surpass $10 billion of annual revenue run rate in Q2. This will be the fastest business in our history to reach this milestone."

Infosys CEO Salil Parekh said:

"Any of the large deals that we’re looking at, there’s a generative AI component to it. Now, is it driving the large deals? Not in itself, but it’s very much a part of that large deal."

SAP CEO Christian Klein said about 30% of SAP's cloud orders included AI use cases.

Simply put, if you follow the money you'd trip once you got past the infrastructure layer to the applications. Enterprise software vendors haven't figured out what customers will pay for.

The beginning of genAI user fatigue?

What about the users? Well, that genAI love affair has become tiresome too.

A Slack survey found that generative AI adoption among desk workers went from 20% in September 2023 to 32% in March 2024 and then hit a wall. Today, 33% of desk workers are using generative AI, according to Slack. The survey also found that excitement around AI is cooling, dropping from 47% to 41% between March and August. Slack cited uncertainty, hype and a lack of AI training for the decreases. Another possible reason I'll throw in: copilot sprawl that's adding costs for the enterprise and distraction for the worker.

GenAI adoption among employees is a bit of a chicken-or-egg problem. If enterprises balk at spending on various copilots, they're going to limit access. Or there's just not enough value for employees yet. I can't tell you how many times I've tuned out Microsoft Copilot, Google Gemini and other overly helpful AI. Dear LLM, if you're useful I'll reach out. Until then don't annoy me.

It’s on you, CxOs

These moving parts—genAI to agentic AI, FOMO, data strategies, vendor promises and change management—are going to be challenging to navigate for CxOs. I mined my recorded conversations in 2024 to surface common AI themes from CxOs. Here’s a look:

GenAI is a tool instead of a magic bullet. CxOs are looking to integrate AI into processes and workflows and use the technology as an excuse to revamp them. Agentic AI is promising, but full automation will require orchestration and process groundwork.

Change management is everything. Change management has turned up in multiple conversations throughout 2024. Implementing AI is really about transforming how people work and interact with the technology. GenAI also will create organizational challenges. CxOs also need strong change management approaches to address employee fears about job displacement.

Governance and control matter. Governance is becoming a key theme as genAI projects move to production.

Data strategy. Enterprises are still working on their ground games when it comes to data. That work will continue for many in 2025.

Costs. Enterprises will begin to focus more on cost of compute, open source models, small language models and the use cases that drive the most value for the money. There will be some hard conversations between enterprises and software vendors.

Training. Everyone is talking about upskilling and training, but it is unclear whether this education is happening.

Iterative implementation. The mantra with genAI has been to "just get started," but that approach has led to AI debt already with copilot sprawl, difficulty changing models and user dissatisfaction. Will 2025 be better for the slower movers that spent 2024 refining the data strategy?

Employee-AI collaboration. CxOs are trying to solve for human-in-the-loop approaches so employees feel more empowered working with AI.

This genAI cognitive dissonance is worth watching in 2025. Either the value reaches vendors and enterprises or we're going to have a massive AI buildout hangover.


Alibaba's cloud unit shows AI traction in Q2

Alibaba's cloud unit is now approaching a $17 billion annual revenue run rate, and the company said it will continue to invest in AI services.

The company's Cloud Intelligence Group reported fiscal second quarter revenue of $4.22 billion, up 7% from a year ago, due to "double-digit public cloud growth, including increasing adoption of AI-related products."
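The "approaching $17 billion" figure is a straightforward annualization of the quarterly number:

```python
# Annualized run rate = quarterly revenue x 4
# (a rough annualization for context, not company guidance)
quarterly_revenue_b = 4.22               # Cloud Intelligence Group fiscal Q2 revenue, $ billions
annual_run_rate_b = quarterly_revenue_b * 4
print(round(annual_run_rate_b, 2))       # 16.88 -- approaching $17 billion
```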

As for profits, Alibaba Cloud Intelligence Group reported second quarter EBITA of $379 million.

Alibaba's cloud division said AI services grew at a triple-digit pace. "We will continue to invest in anticipation of customer growth and in technology, particularly in AI infrastructure, to capture the increasing trend of cloud adoption for AI," the company said in a statement.

Constellation ShortList™ Global IaaS for Next-Gen Applications

Despite trade restrictions in China, Alibaba has been investing in generative AI. The company recently open sourced its Qwen 2.5 large language models and has been cutting prices for AI workloads. Alibaba's cloud division cut prices for API calls and upgraded infrastructure to boost efficiency.

Alibaba, best known for its e-commerce properties, reported second quarter net income of $6.25 billion on revenue of $33.7 billion, up 5% from a year ago.

In a statement, Alibaba CEO Eddie Wu said:

"We entered into long-term collaborations with industry peers to broaden payment and logistics services on Taobao and Tmall platforms, which we expect will accelerate our overall growth. Growth in our Cloud business accelerated from prior quarters, with revenues from public cloud products growing in double digits and AI-related product revenue delivering triple-digit growth. We are more confident in our core businesses than ever."


Quantum computing all in on hybrid HPC with classical computing

Quantum computing vendors are all bullish on hybrid supercomputing approaches that'll play well with the generative AI boom.

This hybrid mantra--with a heavy dose of commercial use cases today--started a year ago and has now become a common theme in the quantum computing industry. A few recent events include:

The catch with these hybrid HPC efforts is that the hardware is dramatically different. The good news, however, is that the software ecosystem around quantum computing is rapidly developing.

In recent research, Constellation Research analyst Holger Mueller noted that quantum networking and hardware advances have been critical. Mueller has argued that the industry would have seen the year of quantum computing if it weren't for the buzz around generative AI. The hardware side of quantum computing is reaching maturity, but software will be what bridges hybrid quantum-classical supercomputers. These quantum-classical supercomputers will likely be consumed as a service through cloud vendors.

He said:

"We are seeing the hardware side of quantum technology reaching stability and maturity through the quarters of 2024—so the focus switches to quantum software development kits (SDKs), the software libraries that map quantum problems to quantum hardware."

For now, said Mueller, CxOs should investigate current and future quantum computing use cases, plan investments and recognize how critical the software stack will be.
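To see why simulating quantum hardware classically calls for supercomputer-scale GPU clusters, consider a toy statevector simulator: an n-qubit state has 2^n amplitudes, so memory and work double with every added qubit. The pure-Python sketch below is for intuition only and is not the Cuda-Q API:

```python
# Toy illustration of classically simulating a quantum processor: track the full
# statevector and apply gates as small matrix multiplications. Cost doubles per
# qubit, which is why real device-scale simulations lean on GPU supercomputers.
import math

def apply_single_qubit_gate(state, gate, qubit):
    """Apply a 2x2 gate to `qubit` of a statevector (list of amplitudes)."""
    new_state = [0.0] * len(state)
    step = 1 << qubit
    for i in range(len(state)):
        if i & step:
            continue                      # handle each amplitude pair once
        a, b = state[i], state[i | step]
        new_state[i] = gate[0][0] * a + gate[0][1] * b
        new_state[i | step] = gate[1][0] * a + gate[1][1] * b
    return new_state

# Hadamard gate: puts a qubit into an equal superposition
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

n = 3
state = [0.0] * (1 << n)                  # 2^n amplitudes
state[0] = 1.0                            # start in |000>
for q in range(n):                        # Hadamard on every qubit
    state = apply_single_qubit_gate(state, H, q)

# Uniform superposition: every basis state has probability 1/8
probs = [round(abs(amp) ** 2, 3) for amp in state]
print(probs)
```

At 3 qubits this is 8 amplitudes; at 40 qubits it is over a trillion, which is the wall that pushes simulation onto GPU clusters and, eventually, the problem onto actual quantum hardware.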


Cisco Q1 shows better demand, networking struggles continue

Cisco reported better-than-expected first quarter results and said it saw "acceleration in product orders reflecting normalizing demand."

But Cisco's first quarter networking revenue fell 23% from a year ago.

The company said it saw product orders jump 20% in the first quarter compared to a year ago and 9% excluding the Splunk acquisition.

Cisco reported first quarter earnings of 68 cents a share on revenue of $13.8 billion, down 6% from a year ago. Without Splunk, Cisco's first quarter revenue would have been down 14%. The first quarter earnings include a tax benefit of $720 million.

Non-GAAP first quarter earnings were 91 cents a share. Wall Street was expecting Cisco to report non-GAAP earnings of 87 cents a share on revenue of $13.77 billion.

As for the outlook, Cisco projected second quarter revenue of $13.75 billion to $13.95 billion with non-GAAP earnings of 89 cents a share to 91 cents a share. For fiscal 2025, Cisco projected revenue of $55.3 billion to $56.3 billion with non-GAAP earnings of $3.60 a share to $3.66 a share.

Cisco CEO Chuck Robbins said "our customers are investing in critical infrastructure to prepare for AI."

By the numbers:

  • Networking revenue in the first quarter was $6.75 billion, down 23% from a year ago.
  • Security revenue was $2.017 billion, up 100% from a year ago.
  • Collaboration revenue was $1.085 billion, down 3%.
  • Observability revenue was $258 million, up 36%.
  • Services revenue was $3.727 billion, up 6%.

On a conference call with analysts, Robbins said Cisco's portfolio is set up to capture demand for AI training infrastructure, AI networks and connectivity and observability.

He added that Cisco's new AI servers with Nvidia will ship in December and AI pods are available. Robbins said:

"As we look at what's occurring with AI, there are three key things. First, there is significant investment in back end AI networks with hyperscalers focused on training. Second, as enterprises look to adopt and deploy AI, they need to modernize and secure their infrastructure to prepare for pervasive deployment of AI applications. Finally, the combination of mature back end models with enterprise AI application deployment will lead to increased capacity requirements on both private and public front end cloud networks. Cisco is already playing a major role across all three of these significant opportunities."

Constellation Research analyst Holger Mueller said:

"Cisco can't catch growth for its offerings. It was a struggle to grow in the cloud era and it seems to be one in the AI era as well. While 20% product order growth seems encouraging (unless you have not forgotten the previous focus on services), half of it comes from the Splunk acquisition. Adjust for Splunk and Cisco is at medium single-digit growth. When and where will Cisco grow again? Referring to government contracts, as CFO Scott Herren did, is not sustainable growth."

 


Balancing AI, Innovation, and Resilience at Jewelers Mutual Group | BT150 Spotlight

In the latest BT150 Spotlight, Constellation Insights editor in chief Larry Dignan sits down with John Kreul, Chief Information Officer at Jewelers Mutual Group. They discuss how the insurance company is using AI to enhance the customer experience for retail jewelers, personal lines customers, agents and employees. Kreul shares insights on Jewelers Mutual Group's approach to building vs. buying AI capabilities, the metrics it uses to measure progress, and the importance of the human element in driving change. The conversation also covers future-proofing strategies and the role of AI in improving operational efficiency.

On Insights: https://www.youtube.com/embed/uxPUK661nKM?si=ewb4uBpZJF9JeTXX