Results

LIVE from Zoholics with Constellation Analysts

Constellation Research analysts R "Ray" Wang, Doug Henschen, Andy Thurai, and Chirag Mehta check in live from Zoholics at the Convention Center in Austin, Texas to discuss the latest announcements, trends, and technology from the Zoho conference, including how Zoho is helping customers combat margin compression.

Watch the full interview!

On ConstellationTV: https://www.youtube.com/embed/s04MGck2Z3A

Oracle partners with Google Cloud, takes on OpenAI workloads

Cloud hell may have just frozen over as Oracle and Google Cloud outlined a partnership that gives enterprises an option to combine Oracle Cloud Infrastructure and Google Cloud.

For any enterprise technology veteran who remembers Oracle and Google duking it out in court over Android and Java, the partnership is almost like a trip through bizarro world. Oracle has a similar partnership with Microsoft Azure. Meanwhile, OpenAI is now buying capacity from Oracle via the OCI-Azure tie-up.

Simply put, Oracle Cloud has nailed being an efficient, cost-effective infrastructure provider and has aligned with all the large players not named Amazon Web Services (AWS). Should a similar deal be cut between AWS and OCI, hell will really freeze over. Oracle CTO Larry Ellison jabs at AWS as much as he did at SAP back in the day.

Nevertheless, Ellison wouldn't rule out an AWS partnership either.

"Customers are using multiple clouds, not only infrastructure clouds. They might have Salesforce applications or Workday applications, or they use multiple clouds in their business right now. We think that all of these clouds become interconnected. We are thrilled we have that connection with Microsoft, and the same thing with Google Cloud. We'd love to do the same thing with AWS.

We think we should be interconnected to everybody. And that's what we're attempting to do with our multi-cloud strategy. I think that's what customers want. I'm optimistic that that's the way the world will settle out. We'll get rid of these fees for moving data from cloud to cloud. And all the clouds will be interconnected."

Here are the details of the Oracle and Google Cloud tie-up followed by the OpenAI deal and the not-too-surprising strong cloud results.

Google Cloud and OCI

Google Cloud's Cross-Cloud Interconnect will be initially available in 11 global regions so customers can deploy general purpose workloads without data transfer charges. Later in 2024, the companies said, Oracle Database @ Google Cloud will be available with database and network performance, features, and pricing on par with OCI.

Ellison said the deal with Google Cloud is good for joint customers. Sundar Pichai, CEO of Google and Alphabet, said the partnership will combine Oracle's database and applications with Google's AI layer.

Oracle Database @ Google Cloud would give customers direct access to OCI database services deployed in Google Cloud data centers. That combination would enable Vertex AI and Gemini from Google Cloud to be leveraged without latency. For Google Cloud, the Oracle partnership can embed its AI services into more enterprises. Customer references cited by Google Cloud for AI adoption include Bayer, Best Buy, Discover Financial and TD Bank to name a few. Also see what Equifax and Wayfair have done with Google Cloud.

Google Cloud Next 2024: Google Cloud aims to be data, AI platform of choice | Google Cloud Next: The role of genAI agents, enterprise use cases

The two companies said they will jointly sell Oracle Database @ Google Cloud, provide support and a unified experience, and offer a bevy of services to mix and match.

Australia East (Sydney), Australia Southeast (Melbourne), Brazil East (São Paulo), Canada Southeast (Montreal), Germany Central (Frankfurt), India West (Mumbai), Japan East (Tokyo), Singapore, Spain Central (Madrid), UK South (London), and US East (Ashburn) are the regions that will initially launch. 

Constellation Research analyst Holger Mueller said the Google Cloud and Oracle partnership makes sense. 

"When something works, tech vendors copy the playbook: In this case it is the Oracle / Microsoft partnership that allows customers to run OracleDB @ Azure from the Azure console. Now the same gets repeated for Google Cloud. This is great news for enterprises, and good for Google Cloud and Oracle. And it makes it harder for cloud databases to compete with Oracle for building next-generation applications."

Other details about the Google Cloud and OCI partnership include:

  • In 11 regions, customers can move data across Google Cloud and Oracle. 
  • Joint customers will be able to access and deploy Exadata and Autonomous Database later this year through Google Cloud. 
  • Customers can bring their licenses and get access to support. 
  • If you use your GCP commitment to consume extra data, OCI will credit you toward support costs. 
  • Field teams are aligned. 
  • Oracle services across the board will be available in Google Cloud, including Oracle apps. 
  • The two clouds will be available in one console. 
  • Interconnects will be up to 100 gigabits per second.
  • Google customers can take their licenses and deploy on Google Cloud if self-managed. 
  • Pricing parity will be available on both clouds. 

Oracle CEO Safra Catz said the Google Cloud deal highlights broader traction and the ability to integrate applications as well as cloud infrastructure. 

"A very large enterprise tech company signed a contract in Q4 for over $600 million where we will be helping them transform their operations with Fusion to enable them to become more agile, faster-growing and more profitable. We will replace many of our competitors' products." 

OpenAI buys OCI capacity

OpenAI will use OCI for additional capacity in a partnership with Microsoft Azure and Oracle. Microsoft is already using OCI for additional workloads.

"We are delighted to be working with Microsoft and Oracle. OCI will extend Azure's platform and enable OpenAI to continue to scale," said Sam Altman, OpenAI CEO, in a statement.

OpenAI needs all the capacity it can get. The company is scaling fast, constantly training models, and needs to serve 100 million monthly users with its ChatGPT services.

Ellison said Oracle ramped up the Azure partnership in the fourth quarter, with 11 of the 23 planned OCI data centers inside Azure going live.

Ellison outlined some of the OpenAI partnership details. 

"We're building a very large data center with lots of Nvidia chips: the new Nvidia chips and new Nvidia interconnects, liquid cooled. These are primarily for training. The training goes beyond language because these neural networks are trained with not just language but math and masses of images as well. That's a very different problem than answering a question posed by someone. Everyone that's big is going to be training their models on imaging. That's a huge amount of additional data and training, and we're right in the middle of it."

The Google Cloud and OpenAI deals trump earnings miss

Oracle also reported fourth-quarter earnings that missed expectations. The company reported fourth-quarter earnings of $1.11 a share on revenue of $14.3 billion, up 3% from a year ago. Non-GAAP earnings were $1.63 a share.

Wall Street expected fourth quarter earnings of $1.65 a share on revenue of $14.57 billion.

Cloud revenue in the fourth quarter was $5.3 billion, which missed estimates of $5.45 billion. Nevertheless, cloud infrastructure revenue in the fourth quarter was up 42% from a year ago. Cloud application revenue was $3.3 billion, up 10%.

For fiscal 2024, Oracle delivered net income of $10.5 billion, or $3.71 a share on revenue of $53 billion.

Oracle CEO Catz said the company "signed the largest sales contracts in our history—driven by enormous demand for training AI large language models in the Oracle Cloud." Remaining performance obligations exiting the fourth quarter were $98 billion, up 44% from a year ago. She did note that currency fluctuation continues to be a wild card for results.

In fiscal 2025, Catz said Oracle will continue to gain workloads. "I also expect that each successive quarter should grow faster than the previous quarter—as OCI capacity begins to catch up with demand," said Catz. "In Q4 alone, Oracle signed over 30 AI sales contracts totaling more than $12.5 billion—including one with OpenAI to train ChatGPT in the Oracle Cloud."

AWS, Microsoft Azure, Google Cloud battle about to get chippy

"Customer conversations are now absolutely only focused on our cloud services," said Catz. "It is all about our comprehensive, highly differentiated and secure cloud offering. Customers have progressed from their initial curiosity about Oracle Cloud into full blown rollout."

Catz said that OCI consumption revenue would have been higher if it weren't for supply side constraints. 

"As on-premises databases migrate to the cloud, either to OCI directly or using Oracle Database at Azure or Google Cloud, we expect these cloud database services will be the third leg of revenue growth alongside OCI," she said.

Other items:

  • Oracle will no longer break out Cerner's impact; Catz had typically given guidance that excluded Cerner. 
  • As for the outlook, Catz said OCI capacity will meet demand and boost revenue each quarter in fiscal 2025. 
  • OCI will grow faster in fiscal 2025 than the 50% growth it posted this year. 
  • Capital expenditures will double in fiscal 2025. 
  • Oracle will hone its outlook in October at Oracle CloudWorld. 
  • Revenue in the first quarter will grow 6% to 8%. 
  • Total cloud revenue will grow 21% to 23% in the first quarter. 
  • Non-GAAP EPS is expected to grow 11% to 15% to between $1.33 and $1.37.

What's next? Bigger and smaller OCI data centers

Ellison said OCI's integrated stack is uniquely suited for AI. "When you charge by the minute, faster also means less expensive. OCI trains large language models several times faster and at a fraction of the cost of other clouds," he said. "The operating system and the database are fully autonomous."

Ellison said Oracle also occupies less space with OCI, which is why it can park infrastructure in Azure and Google Cloud facilities. He said:

"We talked for a while about our ability to build very small data centers when needed. Virtually any one of our customers could choose to have the full Oracle Cloud in their data center. We can start very small." 

Ellison, however, noted that Oracle will also go big. He said:

"We're now bringing 200 megawatt data centers online. We are literally building the smallest, most portable, most affordable cloud data centers, all the way up to 200 megawatt data centers ideal for training very large language models and keeping them up to date. This AI race is gonna go on for a long time. It's not just a matter of getting ahead in AI; you also have to keep your model current. And that's going to take larger and larger data centers. Some of the data centers we're planning are even bigger."
 


How Virginie Nowak blended employee and customer experiences at Access Bank

Virginie Nowak, Group Chief Customer Experience Officer at Access Bank PLC, had an interesting problem to solve: How do you maintain and improve customer experience at a bank with multiple touchpoints and employee turnover in an emerging market?

Access Bank is a Nigerian multinational commercial bank that started in corporate banking before expanding into personal and business banking in 2012. The company, which has more than 28,000 employees, features more than 700 branches and service outlets across 21 countries and serves more than 65 million customers. The bank is one of Africa's largest retail banks.

When Nowak joined Access Bank over two years ago, the bank's customer satisfaction rate was 54% with a negative Net Promoter Score. The bank improved its customer satisfaction rate to 64% for fiscal 2023 and moved its Net Promoter Score from negative territory to 13, and then to today's 23 across its channels.

Nowak's CX improvements and approach to technology at Access Bank were recognized by Avaya at its CX Force Awards, presented at Avaya's ENGAGE 2024 conference in May.

"My job is to create a seamless experience and make sure that I remove friction. I don't want to keep a customer on the call 3, 5, or 10 minutes because first I need to authenticate and the customer has to hit one or two buttons," explained Nowak.

To transform Access Bank's customer experience, Nowak set out to reach customers through multiple touchpoints leveraging Avaya's solutions, including video banking, conversational Interactive Voice Response (IVR) systems, and voice biometrics with real-time analytics. Nowak also sprinkled in gamification to keep agents motivated to respond to customers.

Here's a look at the key parts of Access Bank's CX transformation:

Technology: Nowak chose to use the Avaya Experience Platform™ (AXP) for digital efforts and Avaya Elite Voice for on-premises. The hybrid plan included Voice Biometrics, Conversational IVR, Video Banking, Call Back assist, Auto QM, Real time Speech Analytics, Microsoft Dynamics CRM integration with Avaya as Agent Interface, Enhanced Reporting, Dashboards and Gamification. The stack was chosen to revamp IVR to make it easier to use, and to give agents real-time information and speech analytics for context on customer issues.

As for the roadmap ahead, Access Bank plans to add AXP Connect and build a new chatbot with 14 self-service workflows, with escalation paths to AXP chat agents for tougher problems. Nowak chose an architecture that allows the addition of new technology and the room to move completely to the cloud when ready, which pairs well with Avaya's core value proposition of "Innovation Without Disruption." That approach lets organizations like Access Bank choose their own cloud journey rather than pursue risky and costly "rip-and-replace" cloud migrations.

"When it came to upgrading our technology, I needed to ensure it's sustainable," said Nowak. "By 2027, our ambition is to expand to 125 million customers, so I need to make sure we are prepared to serve this number of customers."

She also is looking to use blockchain, AI and large language models (LLMs) to reduce failed transactions, speed up dispute resolutions, and serve Hausa language speakers, the second largest community in Africa. "We are good at mitigating risk, so we can adjust after we try something with new technology on a small scale," said Nowak.

Metrics that matter. Nowak, who has spoken at multiple conferences about African business, customer experience, and digital banking, has taken a continuous improvement approach to CX at Access Bank. The metrics back up the CX improvement:

  • Voice response time in the Access Bank contact center has improved from 2 minutes 25 seconds in fiscal 2022 to 38 seconds in fiscal 2023 and 30 seconds in the first quarter.
  • Email response time in the contact center has improved from more than 144 hours in fiscal 2022 to just under 7 hours in the first quarter.
  • Contact center first contact resolution has improved to 74% in the first quarter from 70% in fiscal 2022.
  • Digital channel customer satisfaction perception has improved from 63% in fiscal 2022 to 80% in the first quarter.

Today, voice improvements are critical, but Access Bank will see digital interactions increase over time. "I would say about 80% of our interactions with our customers are still voice," said Nowak.

Industry collaboration. Nowak frequently works within industry groups to improve operations across banking, and to better integrate with Nigeria's Central Bank. She spearheaded an African banking group for quarterly CX reviews. Notable wins include forming a group of card point-of-sale processors to review best practices and speed up dispute reviews.

Employee experience translating to good customer service. One of Access Bank's big CX challenges was the influx of new staff and ongoing turnover. Nowak instituted CX training within a week of employee onboarding at the bank and implemented KPIs that are part of performance reviews.

"There is no CX without EX," she said. "We're trying to really help the business see the potential pain points that a customer would get and that an employee would get, so you can remove friction and make it seamless."

Other CX Force winners include:

  • CX for Education: Tara Pasalic, Systems Integration Specialist, McMaster University. Pasalic was an early adopter of cloud contact center as a service and a longtime Avaya customer. McMaster University focused on improving experiences for international students leveraging SMS messaging and call center resources.
  • CX for Employees: Jayne Hogle, Director of Unified Communications, American Heart Association. Hogle said customer experience at the American Heart Association is really about being heard. The technology plan has focused on everything from automated responses to call-to-ticket workflows and self-service tools made possible with a migration to the cloud.
  • CX for Healthcare: Rafael Sousa, Chief Technology Officer, Hospital Nipo-Brasileiro (HNIPO). Sousa led an effort to link customer experience, hospital operations and patient outcomes. Leveraging Avaya, Sousa has been able to provide faster and more personalized assistance to patients and route them to appropriate departments.
  • CX for Good: Ian Cole, Chief Innovation Officer at Give Kids the World Village. Cole is responsible for creating and delivering experiences for critically ill children and their families. Cole's organization is in Florida's Lightning Alley and needed to improve communications reliability during natural disasters. He moved from analog lines to SIP and leveraged Avaya, e911, cloud and AI to bolster reliability.
  • CX for Growth: Hugh Carr, Director of Customer Services, Standard Focus. Carr has been able to leverage Avaya customer experience technologies to improve experiences and boost revenue growth. By focusing on customer journeys, Standard Focus is reducing costs per contact by leveraging bots for easy issues and using humans for complex items. The result is customer trust and more revenue.
  • Rising CX Superstar: Emily Stubbs, Director of CX, Aerflo. Stubbs has focused on funneling CX data into business intelligence tools to build views that head off customer issues before they happen. The proactive approach is critical to product launches and customer experiences associated with them.

Top 150 Digital Transformation Executives Pioneering Innovation with Disruptive Technologies

We are thrilled and honored to announce the nominees for the 2025 Business Transformation 150 (BT150). The executives named on this year’s BT150 exemplified outstanding leadership and disruptive innovation over the past twelve months.

In today's fast-paced business environment, disruptive and innovative technologies like Generative AI, automation, and machine learning are playing a crucial role in accelerating digital transformation across all industries. As a result, digital leaders who possess unique expertise in these cutting-edge technologies are taking on more responsibility and earning their seat on the executive leadership team. Several of these leaders have even been promoted to the CEO position, thanks to their ability to balance stakeholder and shareholder interests with a clear focus on driving innovation and growth. We're excited to announce that the 2025 BT150 includes several new members who embody these shared traits of digital leadership and are at the forefront of using these technologies to drive their organizations forward.

Over the past six months, BT150 nominations have been submitted by peers, industry influencers, technology vendors and analysts. It was a rigorous process to determine the final list, and we are excited to recognize these executives today and at the fourteenth anniversary of our Connected Enterprise (CCE) conference.
 

Congrats again to the listed leaders:

  • Ted Abebe, President, Operations Technology at UPS
  • Nilanjan Adhya, Chief Digital Officer at BlackRock
  • Marco Agenti, CIO at Goldman Sachs
  • Julie Averill, EVP, CIO at Lululemon
  • Liz Bacelar, Executive Director, Global Tech Innovation at Estee Lauder
  • Erik Barthel, Chief Information and Digital Officer at National Grid
  • Suvajit Basu, CIO and Head of Technology at Goya Foods
  • Mona Bates, Digital Technology and Chief Information Officer at Collins Aerospace
  • Behshad Behzadi, CTO and Chief AI Officer at Sportradar
  • Vidhya Belapure, CIO at Ilsa, SPA
  • Bob Benoit, CIO at Gates Foundation
  • Parminder Bhatia, Chief AI Officer at GE Healthcare
  • Jason Birnbaum, CIO at United Airlines
  • Michael Brooker, SVP and CIO at Synaptics
  • Robin Brown, CIO at Cirrus Aircraft
  • Jen Cardello, SVP, Head of UX Research & Insights at Fidelity Investments
  • Rich Carter, SVP and Chief Digital Officer at Eli Lilly
  • Joseph Cevetello, CIO at City of Santa Monica
  • Andrew Chen, CIO at Envista Holdings
  • Indy Cho, AVP of Data Science and Analytics at Costco
  • Mark Costa, Chief Digital Officer at JCDecaux North America
  • Tracey Cournoyer, Vice President & CIO/COO - Bond and Specialty Insurance at Travelers
  • Terri Couts, SVP and Chief Digital Officer at Guthrie Clinic
  • Tom Cullen, CIO at Chobani
  • Vagesh Dave, Global Vice President & Chief Information Officer at McDermott International
  • Helen Davis, SVP & Head of NA Operations at The Kraft Heinz Company
  • Samir Desai, Chief Digital + Technology Officer at Abercrombie & Fitch Co
  • Archie Deskus, CTO at PayPal
  • Judy Dinn, EVP and CIO at TD US Bank
  • Paul Dongha, Group Head of Data & AI Ethics, Chief Data & Analytics Office at Lloyd's Banking Group
  • Kumar Dronamraju, CIO of Digital Solutions at Toyota North America
  • Brian Dummann, Chief Data Officer and Vice President of Technology Innovation & Architecture at AstraZeneca
  • Pamela Dyson, CIO at PCAOB
  • Jodi Euerle Eddy, Chief Information and Digital Officer at Boston Scientific
  • Bridget Engle, CIO at Bank of NY
  • John Fitzpatrick, Senior MD, CTO at Blackstone
  • Kristy Folkwein, SVP and CIO at ADM
  • Kathleen Boutté Foster-Gee, Chief Information Officer at City of Sunnyvale
  • Francisco Fraga, Chief Information and Technology Officer at McKesson
  • Michelle Froah, Global Marketing and Chief Innovation Officer at ETS
  • Rita Fuller, CVP for Data Science and AI at New York Life Insurance
  • Carissa Ganelli, SVP Marketing, Products, and Technology at Equinox
  • Curt Garner, Chief Customer and Technology Officer at Chipotle Mexican Grill
  • Shannon M Gath, CIO at Teradyne
  • Mark Gingrich, CIO at Surescripts
  • Seemantini Godbole, EVP, Chief Digital and Information Officer at Lowe's
  • Sowmya Gottipati, Vice President, Global Supply Chain Technology at The Estée Lauder Companies
  • DeDwayne Griffin, Chief Information and Digital Officer at Insight Global
  • Avidypta (Avi) Guha, IT VP- Wood Products Transformation, Enterprise Architecture and Automation Leader at Weyerhaeuser
  • Jason Hair, Chief Digital Officer at Westpac
  • Neil Hampshire, Chief Information and Digital Officer at Ocean Spray
  • George Hanna, Chief Technology and Digital Officer at LA Clippers
  • Jon Harding, SVP and Global CIO at Conair
  • Tim Harris, Chief Digital and Information Officer at ATI
  • Shawn Harrs, CIO at Red Lobster
  • Rachel Hayden, CIO at Scansource
  • Hanna Hennig, CIO at Siemens
  • Reggie Henry, Chief Information Officer at American Society of Association Executives (ASAE)
  • Chitra Herle, EVP, Global CIO at GM Financial
  • Karen Higgins-Carter, Chief Information and Digital Officer at Gilbane
  • Cindy Hoots, Chief Digital Officer and CIO at AstraZeneca
  • Elizabeth Horton, CIO at Tuff Shed
  • John Howard, SVP Enterprise Data and Analytics at Signet Jewelers
  • Bryan Hutson, SVP Information Services at JM Smucker
  • Omar Jacques Omran, Chief Technology Officer at Six Flags
  • Charu Jain, SVP Merchandising and Innovation at Alaska Airlines
  • Yogaraj (Yogs) Jayaprakasam, SVP and Chief Technology and Digital Officer at Deluxe Corp.
  • Ganesh Jayaram, Chief Information and Digital Officer at American Airlines
  • Shannon (Varley) Johnston, SVP and Chief Information Officer at Global Payments
  • Raj Kadam, SVP, Health Plan Operations & CIO at Liberty Dental Plan
  • Sarah Karthigan, GM Strategy & Innovation, Enterprise Data and Insights at Chevron
  • Tracy Kerrins, Global CIO at Wells Fargo
  • Monica Khurana, Chief Information Officer at Dodge & Cox
  • Anthony Kosturos, CFO at 29 Palms
  • Warren Kudman, SVP and CIO at Turner Construction
  • Goran Kukic, SVP Chief Information Officer Health at Reckitt
  • Rashmi Kumar, SVP and Global CIO at Medtronic
  • Gopal Kumarappan, Head of Engineering, Product & Operations (Digital & Platform Services) / Managing Director at JP Morgan Chase
  • Jarek Kutylowski, CEO & Founder at DeepL
  • Jorn Lambert, Chief Product Officer at Mastercard
  • John LaPlante, CIO at Extended Stay America
  • Matt Lasmanis, Chief Technology and Innovation Officer at Sage Therapeutics
  • Maria Latushkin, GVP, Technology and Engineering at Albertsons
  • Gene Lee, Chief Data and Analytics Officer at Caesars Entertainment
  • Bob Leek, CIO at Clark County, Nevada
  • Mojgan Lefebvre, Executive Vice President Chief Technology & Operations Officer at Travelers
  • Lo Li Carper, Chief Technology Officer, Managing Vice President Bank Technology at Capital One
  • Lisa Lou, Vice President of Strategy and Technology at ADT
  • Praveen Madhavankutty, Chief Information Officer at The Save Mart Companies
  • Kyall Mai, Chief Innovation Officer at Esquire Bank
  • William Mayo, SVP Research IT at Bristol Myers Squibb
  • Bron McCall, Chief Information and Digital Officer at Savage
  • Jeff McMillan, Managing Director and Head of Firm-Wide AI at Morgan Stanley
  • Dara Meath, SVP and CTO at Build-a-Bear
  • Brad Miller, CIO at Moderna
  • Jared Miller, EVP and CIO at Sphere Entertainment Co.
  • Elaine Montilla, CTO at Pearson
  • Tracy Mozena, CIO at Atlantic Aviation
  • David Mulligan, Chief Operating Officer at QBE North America
  • Leena Munjal, Chief Strategy Officer at St. Jude Children's Research Hospital
  • Deborah Muro, Chief Information Officer at El Camino Hospital
  • Mark Murphy, Chief Information and Digital Officer at 3M
  • Michael Naggar, Chief Digital Officer at Citi
  • Rucha Nanavati, CIO at Mahindra Group
  • Andrew Nebus, Senior Director, Defense Programs at ASRC Federal
  • Anthony Noble, SVP and Chief Strategy Officer at American Tower
  • Patrick Noon, SVP, Chief Information & Digital Officer at Bechtel
  • Onome Okuma, Chief Digital Officer at Chick-fil-A
  • Andy Paisley, Chief Digital Officer at Ferguson Enterprises
  • Gil Perez, Chief Innovation Officer, Head of Innovation Network, AI/ML, and Corporate VC Group at Deutsche Bank
  • Tamir Peres, VP and CIO at Herc Rentals
  • Thomas Phelps, SVP at Laserfiche
  • Surabhi Pokhriya, Chief Digital Growth Officer at Church and Dwight
  • Andy Quick, Chief AI Officer at Entergy
  • Allison Radecki, Chief Digital and Information Officer at Havi
  • Ashwin Rangan, SVP Engineering and CIO at ICANN
  • Aaratee Rao, Managing Director at JP Morgan Chase
  • Harish Rao, Vice President of Data Analytics at Costco
  • Sunitha Ray, Vice President, IT & Digital at SharkNinja
  • John Repcko, Global CIO at AIG
  • Janet Robertson, CIO, VP of Enterprise Application Services at Raytheon
  • Parul Saini, Head of IT at Uber
  • Rebecca Salsbury, CTO at Financial Times
  • Raju Sankuratri, Chief Information Officer at Aramark
  • Keith Sarbaugh, Chief Information Officer at Zoetis
  • Melissa Scheppele, SVP and CIO at AO Smith
  • Erik Severinghaus, Co-CEO at Bloomfilter
  • Hena Shamim Jalil, CIO at BT Group
  • Brian Shield, CTO at Boston Red Sox
  • Cedric Sims, SVP of Enterprise Innovation and Integration at MITRE
  • Adam Smith, CTO at FedEx
  • Michael Spandau, SVP and Global IT at Fender
  • Scott Spradley, CTO at Lennar
  • Jim Stathopoulos, Chief Information Officer at Sun Country Airlines
  • Gülay Stelzmüllner, CTO at Allianz Technology
  • Elizabeth Stone, CTO at Netflix
  • Sehr Thadhani, Chief Digital Officer at Nasdaq
  • John Trainor, CTO at Wahoo Fitness
  • Steve Turk, Chief Data & Analytics Officer, Commercial Banking at JP Morgan Chase
  • Eileen Vidrine, Chief Data and AI Officer at US Air Force
  • Karla Viglasky, Chief Information Officer at Evans Network of Companies
  • Dan Vinh, CMO at Culinary Institute of America
  • Chad Wallace, EVP Global Head of Commercial Solutions at Mastercard
  • Laurie Wheeler, Chief Operating Officer, Information Services & Technology at MultiCare Health System
  • John Winn, Managing Director at Blackstone
  • Gabrielle Wolfson, CDO and CIO at Quest Diagnostics
  • Johnny Wu, EVP and Chief Clinical Officer at Centurion Health
  • Rowena Yeo, CTO and Global VP of Technology Services at Johnson and Johnson
  • Angela Yochem, Global Chief Information Officer at Krispy Kreme
  • Sherril Zack Kaplan, Chief Digital Officer at MMA Global


This prestigious recognition and induction ceremony will be held at Constellation’s Connected Enterprise in October 2024.

For more details about the listed executives, visit: https://www.constellationr.com/business-transformation-150-2024-2025

 


Fortinet picks up Lacework to beef up its Fortinet Security Fabric

Fortinet is acquiring Lacework as the cybersecurity platform battle heats up.

Lacework specializes in cloud-native application protection platform (CNAPP) technology. Fortinet said it will add Lacework's AI-powered platform to its Fortinet Security Fabric. Lacework has more than 1,000 customers.

According to Fortinet, Lacework will bring an agent-based and agentless architecture for data collection, a homegrown data lake, and a code security offering. By integrating Lacework's CNAPP into Fortinet's lineup, the company is betting that a more complete AI cybersecurity stack will woo enterprises looking to consolidate vendors. 

Palo Alto Networks set off the platformization debate earlier this year with its bet that it could become the leading cybersecurity platform. Although the company said it has seen strong interest from customers, it's far too early to say the debate is settled.

Cybersecurity platformization: What you need to know | CrowdStrike delivers strong Q1 amid cybersecurity platform debate

Terms of the deal weren't disclosed. Constellation Research analyst Chirag Mehta said the move gives Fortinet a portfolio that will compete with Palo Alto Networks' offerings. In a post on X, Mehta noted that Fortinet will have a much broader portfolio to take on Palo Alto Networks.


Speaking on Fortinet's first quarter earnings call, CEO Ken Xie said the company is betting it can win amid vendor consolidation.

"I think during the slowdown of the macro environment, competitors started to be more aggressive with discounts. But from all angles, we see we have much better product position, much broad like infrastructure coverage and better service, and also both on the performance angle. The product definitely has performed much better for the same function, same cost, and same time. It is more about how we can increase the coverage, increase the lease and pipeline, and also to meet the customer need in this big environment change."

 


Apple's genAI strategy: On-device processing, private cloud, own the integration and abstract the LLMs


Apple CEO Tim Cook outlined the company's generative AI strategy at WWDC: Apple Intelligence, a personal intelligence system built on on-device processing of large language models and a private cloud model, to go along with an OpenAI partnership.

Apple's tagline was that Apple Intelligence is "AI for the rest of us."

The stakes were high going into Apple's WWDC as Wall Street and the tech sector were closely watching how the company would approach generative AI as Google, Microsoft and a bevy of other technology giants have been regularly launching new features. In the end, Apple's real mission with generative AI was to spur another iPhone upgrade cycle. There was enough meat with Apple's strategy to give customers an excuse to upgrade. 

Cook framed Apple's approach to generative models. He said:

"We've been using artificial intelligence and machine learning for years. Recent developments in generative intelligence and large language models offer powerful capabilities that provide the opportunity to take the experience of using Apple products to new heights. As we look to build on these new capabilities, we want to ensure that the outcome reflects the principles at the core of our products. It has to be powerful enough to help with the things that matter most to you. It has to be intuitive and easy to use. It has to be deeply integrated into your product experiences. Most importantly, it has to understand you and being grounded in your personal context, like your routine, your relationships, your communications and more."

Craig Federighi, SVP of Software Engineering, said the approach to Apple Intelligence is to go horizontal and systemwide. 

"With iOS 18, iPad OS 18 and macOS Sequoia, we are embarking on a new journey to bring you intelligence that understands you. Apple intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad and Mac."

Apple Intelligence capabilities

Apple Intelligence will provide large language model (LLM) capabilities in the background systemwide for writing, prioritizing and tackling daily tasks. Apple Intelligence will also provide image capabilities with personalization tools by contacts.

Personal context was a key talking point for Federighi. "Understanding this kind of personal context is essential for delivering truly helpful intelligence, but it has to be done right. You should not have to hand over all the details of your life to be warehoused and analyzed in someone's AI cloud. With Apple Intelligence, powerful intelligence goes hand in hand with powerful privacy," said Federighi.

Apple also outlined Apple Intelligence foundation models and favorable comparisons to other LLMs when running on Apple hardware. Apple said in its developer state of the union talk that anything running on its cloud or devices uses its own models. 

Not surprisingly, Apple also went for a likely killer app of generative AI: Genmoji, which can create emojis on the fly based on whatever you cook up.

What Apple is really doing is creating an abstraction layer that keeps the experience and device integration while leveraging LLMs underneath. See: Foundation model debate: Choices, small vs. large, commoditization

Architecture is all about private cloud

Apple's plan is to process Apple Intelligence queries on device with a semantic index. Apple said much of the AI processing will be on device, but some will go to servers in a system called Apple Private Cloud Compute.

A request will be analyzed to see if it can be fulfilled on device. Only the data required to fulfill a request will be sent to servers running on Apple silicon.
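Conceptually, that routing decision can be sketched like this. This is a hypothetical illustration only: none of these names are Apple APIs, and the complexity heuristic is invented.

```python
# Illustrative sketch of Apple's described on-device-first routing.
# All names (Request, estimate_complexity, route) are hypothetical.

from dataclasses import dataclass, field

ON_DEVICE_BUDGET = 0.5  # invented complexity threshold for illustration

@dataclass
class Request:
    prompt: str
    personal_context: dict = field(default_factory=dict)  # stays local by default

def estimate_complexity(req: Request) -> float:
    """Stand-in heuristic: assume longer prompts need larger server models."""
    return min(len(req.prompt) / 1000, 1.0)

def route(req: Request) -> str:
    if estimate_complexity(req) <= ON_DEVICE_BUDGET:
        return "on-device"  # handled locally against the semantic index
    # Only the fields needed for this task go to Private Cloud Compute;
    # per Apple, nothing is stored server-side after the response.
    minimal_payload = {"prompt": req.prompt}
    return f"private-cloud:{len(minimal_payload)} field(s) sent"
```

The design point the sketch captures is that the cloud is a fallback, not the default, and the payload is minimized before anything leaves the device.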

Federighi explained:

"We have created Private Cloud Compute. Private cloud computing allows Apple intelligence to flex and scale its computational capacity and draw on even larger server base models for more complex requests, while protecting your privacy. These models run on servers we've specially created using Apple silicon. These Apple silicon servers offer the privacy and security of your iPhone from the silicon on, draw on the security properties of the Swift programming language and run software with transparency built in.

When you make a request, Apple Intelligence analyzes whether it can be processed on device. If it needs greater computational capacity, it can draw on Private Cloud Compute and send only the data that's relevant to your task to be processed on Apple silicon servers. Your data is never stored or made accessible to Apple. It's used exclusively to fulfill your request. And just like your iPhone, independent experts can inspect the code that runs on the servers to verify this privacy promise. In fact, Private Cloud Compute cryptographically ensures your iPhone, iPad and Mac will refuse to talk to a server unless its software has been publicly logged for inspection. This sets a brand-new standard for privacy in AI."

Siri's brain transplant

Apple Intelligence will give Siri a new brain and leverage settings data and other device information. Siri will take actions inside apps on your behalf and across apps.

Siri will move through the system to find a set of actions for various intentions across applications and know your personal context.

Apple said it will continue to roll out Siri features based on data already held. Siri has been configuring settings and asking questions for years. This repository to date has basically served as a personal context engine that can now be surfaced via Apple Intelligence.

The goal for Apple is to make Siri more intelligent, helpful and integrated with you.

Many of the features across Apple native apps can be found elsewhere. Apple Intelligence essentially looked like a spin on Google's Gemini, OpenAI's ChatGPT or Microsoft's Copilot. The big difference is the architecture used--Apple Silicon to Apple Silicon in a data center--and a horizontal approach that's more Amazon Q than app specific.

Federighi said Apple Intelligence will be free across its iPhone, iPad and Mac devices with the latest OS updates. ChatGPT will be free on Apple devices without an OpenAI account. There will be upgrade opportunities for OpenAI with new models on tap in the future.

Key points about the ChatGPT integration:

  • ChatGPT will be available in Apple's systemwide tools and native apps. 
  • IP addresses sent to OpenAI are obscured and requests aren't stored. 
  • ChatGPT's data-use policies only apply for users that connect accounts. 
  • Apple will use GPT-4o for free without users creating an account. 
  • Paid features will be available to OpenAI subscribers. 

For developers, Apple said it will layer LLMs into various SDKs.

"This is the beginning of an exciting new chapter of personal intelligence, intelligence built for your most personal products. We are just getting started," said Federighi.

Apple's AI strategy came at the end of what was a series of incremental software updates across its collection of operating systems. Here's a look at what's being added to select Apple platforms.

Constellation Research's take

Constellation Research CEO Ray Wang said "the main differential Apple has is their philosophical view." 

"Privacy at the core means Apple designs AI to be mindful. The AI must work for the user first not the network. It's a different way of looking at what AI can do," said Wang. "Apple is late but they have time to do it right."

In addition, Apple's 1 billion devices in the field make a great model training set.

Constellation Research analyst Andy Thurai said that Apple's AI strategy is different because of the privacy approach, custom processors, hybrid approach and multimodal features that will go mainstream. "Apple's doubling down on its core value of privacy, even in the AI age," said Thurai in a LinkedIn post. "This is a MAJOR differentiator from other vendors who often prioritize data collection over user protection."

Constellation Research's Holger Mueller said:

"Apple's new AI capabilities are not only a few years behind what Google users have available, but now also what Microsoft users can do. Creating a 'private' cloud in the public cloud is the price Apple has to pay to keep the fig leaf on Cook's 'differential privacy' going. The OpenAI deal may be the backdoor to bolster Apple Intelligence with a super big LLM for real-world awareness – which is a hint that Apple sees Google more and more as a competitor."

Vision OS 2

Apple said Apple Vision Pro will roll out to more countries including China, Japan and Singapore June 13 for preorders. Customers in Australia, Canada, France, Germany and UK can preorder June 28. With the rollout, Apple is betting on more sales and distribution for the 2,000 apps designed specifically for Apple Vision Pro.

The company's spatial computing efforts have a heavy dose of entertainment and video updates with Vision OS 2, but there was a bevy of business updates.

Apple said utility apps such as AirLauncher, GlanceBar, Splitscreen, Screens 5 and Widgetsmith, along with easier pairing, will boost productivity in Vision OS 2. Apple also rolled out new frameworks and APIs to build 3D apps, anchor apps to flat surfaces and enable industry use cases.

iOS 18

iOS 18 updates included personalization, support for RCS so Android users won't be singled out and bullied for having green bubbles, and incremental features across native apps, notably Photos, Maps and Messages.

Watch OS

Apple is adding training features to measure training load, recovery time and performance during individual workouts. These features have been in Garmin devices for years.

iPad OS

iPad OS will get a floating bar at the top with many of the customization features in iOS 18. Documents will be surfaced more easily by application. There were a few Apple Pencil enhancements worth noting; adding space for inserting words into handwriting was a nice touch.

Mac OS Sequoia

Mac OS gets many of the features found in iOS and iPad OS as well as tools such as Continuity, which makes handoffs between Apple devices more seamless with features like iPhone Mirroring. Keychain was also updated for better password management.



Event Report: IBM Think Accelerates Accessible Gen AI For Clients


Get the live analysis on IBM Think and the implications for customers. Constellation Research principal analyst and CEO R "Ray" Wang shares his analysis of the announcements from IBM Think and what they mean for customers starting off on their generative AI journey.

On ConstellationTV <iframe width="560" height="315" src="https://www.youtube.com/embed/wzuBgnRHieg?si=KYyyFJNAK2POVcgO" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

AI infrastructure is the new innovation hotbed with smartphone-like release cadence


Don't look now, but AI-optimized servers and infrastructure may be the trendiest innovation corner of technology. And the release cycle looks like it was ripped out of the Apple and Samsung playbooks.

For those who need a refresher since the smartphone industry has become boring in recent years, here's the cadence the sector used to revolve around.

  1. Apple announces new software plans at WWDC.
  2. Apple launches a new iPhone with an integrated stack, new processors and updates that are billed as monumental but have actually been available in Samsung devices for years.
  3. Tech buyers gobble up whatever iProduct comes out.
  4. Rinse and repeat year after year and collect money with a dash of complimentary products and services.

For you Android folks, swap in Samsung or Google for Apple.

That playbook is still being utilized, but let's face it: Smartphones are a bit of a yawner these days. The new hotspot is AI infrastructure, specifically AI servers.

Nvidia also happens to be the new Apple with an integrated stack of hardware, software and ecosystem to build AI factories and train large language models (LLMs).

At Computex, Nvidia said it will move to an annual cycle of GPUs and accelerators along with a bunch of other AI-optimized hardware. "Our company has a one-year rhythm. Our basic philosophy is very simple: build the entire data center scale, disaggregate and sell to you parts on a one-year rhythm," said Nvidia CEO Jensen Huang.

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly and is brought to you by Hitachi Vantara.

Constellation Research analyst Holger Mueller said: "Nvidia is ratcheting up the game by going from one design in 2 years to one design per year. This cadence is a formidable challenge for R&D, QA and sourcing in a supply chain that's already constrained. We will see if AI hardware is immune from new offerings stopping the sale of the current offerings."

Just a few hours after Nvidia's keynote, AMD CEO Lisa Su entered the Computex ring with her own one-year cadence and roadmap. AMD, the perennial No. 2 chipmaker in most categories, is going to rake in gobs of money as the alternative to Nvidia.

The GPU may just be the new smartphone that drives tech spending. This AI stack also has a downstream effect on Nvidia partners such as Huang fave Dell Technologies as well as Supermicro and HPE.

You'd never know it from the stock fall in the last week, but Dell has been kinda cleaning up the AI server category. Supermicro is doing well too. These OEMs will ride along with the annual GPU cadence. And now HPE is joining the parade as enterprises buy AI systems.

Jeff Clarke, Chief Operating Officer at Dell Technologies, said the backlog for AI-optimized servers was up 30% in the first quarter to $3.8 billion. The problem is the margins on those servers aren't up to snuff yet.

Lenovo is also seeing strong demand. Kirk Skaugen, President of Lenovo's Infrastructure Solutions Group, said the company's "visible qualified pipeline" was up 55% in its fiscal fourth quarter to more than $7 billion. Note that visible pipeline isn't backlog.

"In the fourth quarter, our AI server revenue was up 46%, year-to-year. On-prem and not just cloud is accelerating because we're starting to see not just large language model training, but retraining and inferencing," said Skaugen. "We'll be in time to market with the next-generation NVIDIA H200 with Blackwell. This is going to put a $250,000 server today roughly with eight GPUs, will now sell in a rack like you're saying up to probably $3 million in a rack."

Clarke noted that Dell Technologies will sell storage, services and networking around its AI-optimized servers. Liquid cooling systems will also be a hot area. He said:

"We think there's a large amount of storage that sits around these things. These models that are being trained require lots of data. That data has got to be stored and fed into the GPU at a high bandwidth, which ties in network. The opportunity around unstructured data is immense here, and we think that opportunity continues to exist. We think the opportunity around NICs and switches and building out the fabric to connect individual GPUs to one another to take each node, racks of racks across the data center to connect it, that high bandwidth fabric is absolutely there. We think the deployment of this gear in the data center is a huge opportunity."

Super Micro Computer CFO David Weigand has a similar take. "We've been working on AI for a long time, and it has driven our revenues the past two years. And now with large language models and ChatGPT, its growth has obviously expanded exponentially. And so, we think that will continue and go on," said Weigand. "We think it's going to be both higher volume and higher pricing as well, because there is no doubt about the fact that accelerated computing is here to stay."

HPE's latest financial results topped estimates and CEO Antonio Neri said enterprises are buying AI systems. HPE's plan is to differentiate with direct liquid cooling, one of three ways to cool systems. HPE also has traction with enterprise accounts and saw AI system revenue surge accordingly. Neri said the company's cooling systems will be a differentiator as Nvidia Blackwell systems gain traction.

Neri said: "We have what I call 100% direct liquid cooling. And this is a unique differentiation because we have been doing 100% direct liquid cooling for a long time. Today, there are six systems in deployment and three of them are for generative AI. As we go to the next generation of silicon, Blackwell systems will require 100% direct liquid cooling. That's a unique opportunity for us because you need not only the IP and the capabilities to cool the infrastructure, but also the manufacturing side."

Indeed, there's a reason we're watching Huang and Su keynotes and dozing off as yesterday's innovation juggernauts speak at various conferences. AI infrastructure is just more fun.

And just to bring this analogy-ridden analysis home, it's worth noting that Intel spoke at Computex too. Yes, Intel is playing from behind on AI accelerators and processors, but is playing a role in the market. The role? Midmarket player for the most cost-conscious tech buyer.

If Nvidia is Apple. And if AMD is more like Google/Samsung. Then Intel is positioned to play Motorola and the not-quite premium phone role when it comes to AI. At Computex, Intel CEO Pat Gelsinger went through the cadence of AI PC possibilities and new Xeon server chips. And then Intel said this about the company's Gaudi 3 accelerators, which should be available in the third quarter.

"A standard AI kit including eight Intel Gaudi 2 accelerators with a universal baseboard (UBB) offered to system providers at $65,000 is estimated to be one-third the cost of comparable competitive platforms. A kit including eight Intel Gaudi 3 accelerators with a UBB will list at $125,000, estimated to be two-thirds the cost of comparable competitive platforms."
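Taking Intel's stated prices at face value, the per-accelerator and implied competitor economics work out as follows. This is simple arithmetic on the quoted figures; the competitor numbers are only what Intel's one-third and two-thirds claims imply, not published prices.

```python
# Back-of-the-envelope math on Intel's stated Gaudi kit prices.
gaudi2_kit = 65_000   # eight Gaudi 2 accelerators plus universal baseboard
gaudi3_kit = 125_000  # eight Gaudi 3 accelerators plus universal baseboard

per_gaudi2 = gaudi2_kit / 8  # ~$8,125 per accelerator
per_gaudi3 = gaudi3_kit / 8  # ~$15,625 per accelerator

# Intel claims the kits cost one-third and two-thirds of comparable
# competitive platforms, implying competitor kits in this range:
implied_competitor_g2 = gaudi2_kit * 3       # ~$195,000
implied_competitor_g3 = gaudi3_kit * 3 / 2   # ~$187,500
```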

ASUS, Foxconn, Gigabyte, Inventec, Quanta and Wistron join Dell, HPE, Lenovo and Supermicro with plans to offer Intel Gaudi 3 systems.

Bottom line: AI systems will create a big revenue pie even if most of the spoils go to Nvidia. "I'm very pragmatic about these things. Today in generative AI, the market leader is Nvidia and that's where we have aligned our strategy. That's where we have aligned our offerings," said Neri. "Other systems will come in 2025 with other accelerators."



Don't forget the non-technical, human costs to generative AI projects


Don't forget the non-technical costs of generative AI projects, as there is a bevy of ongoing maintenance to consider, said Lori Walters, Vice President, Claims and Operations Data Science at The Hartford.

Speaking at an Amazon Web Services (AWS) financial services event for analysts this week, Walters provided several takeaways about generative AI efforts and how they fit into the broader picture. I'll have more takeaways from a broader range of financial services CxOs, but the Walters comments stuck out.

Simply put, genAI projects carry human costs, expertise requirements and ongoing maintenance that are often overlooked. Walters said:

"We spend a lot of time talking about the cost to build, about the training costs and the inference cost. But what we're seeing is the human capital associated with genAI is significant. Do not underestimate it. It's not just the initial build but how do you sustain these solutions. Prompt engineering is really critical, but we're finding there's a lot of work enhancing and maintaining models. The prompts with models is brittle and don't extend well. There's a maintenance cycle to re-engineer prompts. We're not talking about moving from GPT-4 to Claude. There's a lot of engineering even moving from GPT 3.5 to GPT 4.0.

The other aspect of human capital is the subject matter expert component. We have SMEs from the business that have to define what a good summary needs to look like. What's the ground truth around that? We don't have any label data and our SMEs are working with us to develop the ground truth. And then as we are producing outcomes, they're having to validate it and test it and develop accuracy metrics so we know it is safe to put in production. I think planning on that human capital is something we're not talking about."
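One way teams manage the prompt brittleness Walters describes is to treat prompts as versioned artifacts keyed to a specific model, rather than hard-coded strings that silently break on a model swap. A minimal sketch under that assumption, with all names and prompts hypothetical:

```python
# Minimal sketch of per-model prompt versioning, the kind of maintenance
# layer Walters alludes to. All task names and prompts are illustrative.

PROMPT_LIBRARY = {
    # The same task often needs a re-engineered prompt per model version.
    ("summarize_claim", "gpt-3.5"): "Summarize this insurance claim:\n{doc}",
    ("summarize_claim", "gpt-4"): (
        "You are a claims analyst. Produce a three-sentence summary "
        "covering loss type, parties, and status:\n{doc}"
    ),
}

def build_prompt(task: str, model: str, doc: str) -> str:
    """Look up the prompt validated for this (task, model) pair."""
    try:
        template = PROMPT_LIBRARY[(task, model)]
    except KeyError:
        # Fail loudly: swapping models without re-engineered prompts is
        # exactly the brittleness the quote warns about.
        raise KeyError(f"No prompt validated for {task!r} on {model!r}; "
                       f"re-engineering required before swapping models")
    return template.format(doc=doc)
```

The point of the lookup failing hard is that moving to a new model becomes an explicit engineering task with SME validation, not a config change.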

Those words of wisdom deserve a callout since the technology sector isn't really talking about the human capital involved. And certainly, we're not hearing about the prompt engineering involved in swapping models. The other notable item was that humans in the loop have ongoing validation chores.

Also see: Intuit’s Bet on Data, AI, AWS Pays Off Ahead of Generative AI Transformation | Rocket Companies’ strategy: Generative AI transformation in turbulent market

Thinking through the human costs is just one of the takeaways from Walters worth highlighting. Here are a few others from Walters' AWS talk in New York.

Building blocks that need to be in place before genAI

Walters said the generative AI journey is smoother if there are other transformational building blocks already in place.

Preprocessing data. Technologies like Optical Character Recognition (OCR) are still critical to get documents in digital form so the LLMs can read them. "A lot of the work is actually in that pre-processing," said Walters.
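That preprocessing step can be sketched as a small pipeline that runs OCR page by page and normalizes the output before it reaches an LLM. This is a generic illustration: the OCR engine here is a stand-in callable, not The Hartford's actual stack, and in practice it would be a library such as Tesseract.

```python
# Sketch of the document-preprocessing step Walters describes: paper must
# be digitized via OCR before an LLM can read it. The ocr_engine callable
# is a placeholder for a real OCR library.

from typing import Callable

def preprocess(pages: list[bytes], ocr_engine: Callable[[bytes], str]) -> str:
    """Run OCR on each page, then normalize whitespace for the LLM."""
    text = "\n".join(ocr_engine(page) for page in pages)
    return " ".join(text.split())  # collapse scanner artifacts and whitespace

# Usage with a fake engine standing in for real OCR output.
fake_ocr = lambda page: page.decode("utf-8")
document = preprocess([b"Claim  #123\n", b" Total:  $5,000 "], fake_ocr)
```

Keeping the OCR engine pluggable matches the "flexible platforms" point later in the talk: the pipeline survives swapping one OCR or model vendor for another.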

The cloud. "The cloud is a means to the end. It's not really the end, but has been a very important accelerator. We are in the middle of a very aggressive technology agenda focused on bringing the power of data and AI together to transform our end-to-end business," said Walters.

There were advantages to bringing analytics, data and technology ecosystems to the cloud. The benefits were faster product cadence and being able to spot areas that needed improvement.

Machine learning is the precursor. "We have several hundreds of models in production deployed across all of our business segments. Our business leaders have been able to see and feel not only the potential but how to put it to work," said Walters. "One of the most important investments we've made over the past few years with our move to AWS was MLOps, the machine learning equivalent of DevOps, and it allows us to automate and standardize the life cycle of a model."

Then it's AI before genAI. Walters said mature AI practices are a good start to scale into genAI. "There was an early tendency to treat genAI as something different, but you need platform, operating models and governance," she said.

Flexible platforms. "The foundational models are evolving daily so having the flexibility to get model choice, but having a modular ecosystem is critical. Plug and play is a necessity here more than we've ever seen before. The state of art today is not the state-of-the-art tomorrow," she said.

Foundation models are just one piece of a more complicated orchestra. "What we're finding is GenAI is often the smaller and maybe the easier piece. You need a platform that plays well with the rest of the ecosystem and integrates with data and AI services," said Walters.

Governance. Hartford has taken some existing governance frameworks and extended them to genAI, but the effort is in the experimental phase. Walters said she wants to automate governance, but there are challenges with the fluid regulatory environment.

Business buy-in. "Early in our journey we were focused on building buy-in on the art of the possible. We crossed that seven or eight years ago where our business leaders wanted to start investing more in machine learning and AI. From there the focus was on how do you scale," said Walters.

The willingness to experiment with discipline. She said:

"We are approaching generative AI with disciplined urgency. There's a lot of hype. There's a lot of noise. And the environment is changing minute by minute. So, we've really focused on being intentional about priorities and focus. Generative AI is just another tool in the toolkit--a very powerful tool. But it is one that we are validating that complements our existing AI capabilities well. It is allowing us to tap into unstructured data that was largely untapped by traditional models."



Smartsheet preps new pricing model June 24


Smartsheet is planning to roll out a new pricing and packaging model on June 24 to replace the four pricing tiers it has now.

Today, Smartsheet has four tiers:

  • Free: 1 user and up to 2 editors
  • Pro: $7 per user/month, maximum of 10 users, unlimited viewers
  • Business: $25 per user/month, minimum of 3 users, unlimited editors
  • Enterprise accounts

The new model will have broader access to Smartsheet features including AI and include more users in a plan. Speaking on Smartsheet's first quarter earnings call, CEO Mark Mader said the new model "pairs a greater number of licensed users with a lower price per user on business and enterprise plans."

New customers will get the new pricing model June 24, with existing annual customers transitioning in 2025. Smartsheet members will be licensed instead of today's model of paid editors and free collaborators. Going forward, Smartsheet will have provisional member access, which enables people in an enterprise to create and contribute to a workflow before being added to a subscription.

Smartsheet's model change comes as Asana and Monday.com are rolling out AI features for workforce management. For instance, Asana launched AI teammates built on its Asana Work Graph and recently reported strong first quarter results. Asana delivered revenue growth of 13% in the first quarter to $172.4 million. Monday in May delivered fourth quarter revenue of $202.6 million, up 35% from a year ago.

The previous model revolved around creators who made edits and contributed to processes. The new model is based on contributions, so the number of platform users will grow. Every customer will also have access to AI on Smartsheet.

Mader said:

"This will drive increased virality by enabling organizations to make available to employees the full breadth of the platform in a low-friction manner while allowing system admins to manage their users more effectively. While existing customers will transition to the new subscription model with their renewal dates in 2025, we anticipate demand from some organizations wanting to benefit from the new subscription model sooner and we will accommodate them as appropriate."

Mader said Smartsheet is seeing strong adoption of its AI tools and nearly half of enterprise customer plans have used Smartsheet AI for business logic and content via prompts.

The bet for Smartsheet is that the new pricing model will create a flywheel where it has more users of its genAI tools and data for insights.

Details of the pricing model will be revealed later this month.

Smartsheet said that it has piloted new pricing plans with customers. The reaction from both large enterprises and SMBs has been positive. Mader said:

"When I think about how someone responds when you say, Mark, what does it cost to license Smartsheet? The first two things out of my mouth can't be it depends, so the clarity in this new model is super, super high."

Other key items:

  • Smartsheet delivered better-than-expected fiscal first quarter results. The company reported a first quarter net loss of $8.9 million, or 6 cents a share, on revenue of $263 million, up 20% from a year ago. Non-GAAP earnings were 32 cents a share.
  • For the second quarter, Smartsheet projected revenue of $273 million to $275 million with non-GAAP earnings of 28 cents a share to 29 cents a share. For fiscal 2025, Smartsheet sees revenue of $1.116 billion to $1.12 billion and non-GAAP earnings of $1.22 a share to $1.29 a share.
  • Scale is an issue for CIOs just as much as features. CIOs are looking for work management platforms to scale across diverse business units and needs with governance. "There's not a single customer that I've seen who is just willy-nilly approaching it on a feature set dimension," said Mader. "I think CIOs are recognizing that it is a diverse environment. There will be multiple tools, but where they're placing their bigger bets, they are doing that in fewer places."
  • Smartsheet saw a slow start to the first quarter in February, but the enterprise pipeline grew and the quarter finished strong in April.

 
