
Fortinet picks up Lacework to beef up its Fortinet Security Fabric

Fortinet is acquiring Lacework as the cybersecurity platform battle heats up.

Lacework specializes in Cloud-Native Application Protection Platform (CNAPP) technology. Fortinet said it will add Lacework's AI-powered platform to its Fortinet Security Fabric. Lacework has more than 1,000 customers.

According to Fortinet, Lacework will bring an agent-based and agentless architecture for data collection, a homegrown data lake and a code security offering. By integrating Lacework's CNAPP into Fortinet's lineup, the company is betting that a more complete AI cybersecurity stack will woo enterprises looking to consolidate vendors.

Palo Alto Networks set off the platformization debate earlier this year with its bet that it could become the leading cybersecurity platform. Although the company said it has seen strong interest from customers, it's far too early to say the debate is settled.

Also see: Cybersecurity platformization: What you need to know | CrowdStrike delivers strong Q1 amid cybersecurity platform debate

Terms of the deal weren't disclosed. Constellation Research analyst Chirag Mehta said the move gives Fortinet a portfolio that will compete with Palo Alto Networks' offerings. In a post on X, Mehta noted that Fortinet will have a much broader portfolio to take on Palo Alto Networks.


Speaking on Fortinet's first quarter earnings call, CEO Ken Xie said the company is betting it can win amid the vendor consolidation.

"I think during the slowdown of the macro environment, competitors started to be more aggressive with discounts. But from all angles, we see we have much better product position, much broad like infrastructure coverage and better service, and also both on the performance angle. The product definitely has performed much better for the same function, same cost, and same time. It is more about how we can increase the coverage, increase the lease and pipeline, and also to meet the customer need in this big environment change."

 


Apple's genAI strategy: On-device processing, private cloud, own the integration and abstract the LLMs

Apple CEO Tim Cook outlined the company's generative AI strategy at WWDC. The strategy revolves around Apple Intelligence, a personal intelligence system, on-device processing of large language models and a private cloud model, along with an OpenAI partnership.

Apple's tagline was that Apple Intelligence is "AI for the rest of us."

The stakes were high going into Apple's WWDC as Wall Street and the tech sector were closely watching how the company would approach generative AI while Google, Microsoft and a bevy of other technology giants regularly launch new features. In the end, Apple's real mission with generative AI was to spur another iPhone upgrade cycle. There was enough meat in Apple's strategy to give customers an excuse to upgrade.

Cook framed Apple's approach to generative models. He said:

"We've been using artificial intelligence and machine learning for years. Recent developments in generative intelligence and large language models offer powerful capabilities that provide the opportunity to take the experience of using Apple products to new heights. As we look to build on these new capabilities, we want to ensure that the outcome reflects the principles at the core of our products. It has to be powerful enough to help with the things that matter most to you. It has to be intuitive and easy to use. It has to be deeply integrated into your product experiences. Most importantly, it has to understand you and being grounded in your personal context, like your routine, your relationships, your communications and more."

Craig Federighi, SVP of Software Engineering, said the approach to Apple Intelligence is to go horizontal and systemwide. 

"With iOS 18, iPad OS 18 and macOS Sequoia, we are embarking on a new journey to bring you intelligence that understands you. Apple intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad and Mac."

Apple Intelligence capabilities

Apple Intelligence will provide large language model (LLM) capabilities in the background systemwide for writing, prioritizing and tackling daily tasks. Apple Intelligence will also provide image capabilities with personalization tools tied to your contacts.

Personal context was a key talking point for Federighi. "Understanding this kind of personal context is essential for delivering truly helpful intelligence, but it has to be done right. You should not have to hand over all the details of your life to be warehoused and analyzed in someone's AI cloud. With Apple Intelligence, powerful intelligence goes hand in hand with powerful privacy," said Federighi.

Apple also outlined its Apple Intelligence foundation models and favorable comparisons to other LLMs when running on Apple hardware. In its developer State of the Union talk, Apple said anything running on its cloud or devices uses Apple's own models.

Not surprisingly, Apple went for the killer app of generative AI, and it's probably Genmoji, which can create emojis on the fly based on whatever you cook up.

What Apple is really doing is creating an abstraction layer that keeps the experience and device integration while leveraging LLMs underneath. See: Foundation model debate: Choices, small vs. large, commoditization

Architecture is all about private cloud

Apple's plan is to process Apple Intelligence queries on device with a semantic index. Apple said much of the AI processing will be on device, but some will go to servers in a system called Apple Private Cloud Compute.

A request will be analyzed to see if it can be handled on device. Only the data required to fulfill a request will be sent to servers running on Apple Silicon.

Federighi explained:

"We have created Private Cloud Compute. Private cloud computing allows Apple intelligence to flex and scale its computational capacity and draw on even larger server base models for more complex requests, while protecting your privacy. These models run on servers we've specially created using Apple silicon. These Apple silicon servers offer the privacy and security of your iPhone from the silicon on, draw on the security properties of the Swift programming language and run software with transparency built in.

When you make a request, Apple Intelligence analyzes whether it can be processed on device. If it needs greater computational capacity, it can draw on Private Cloud Compute and send only the data that's relevant to your task to be processed on Apple's silicon servers. Your data is never stored or made accessible to Apple. It's used exclusively to fulfill your request. And just like your iPhone, independent experts can inspect the code that runs on the servers to verify this privacy promise. In fact, Private Cloud Compute cryptographically ensures your iPhone, iPad and Mac will refuse to talk to a server unless its software has been publicly logged for inspection. This sets a brand-new standard for privacy in AI."
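Conceptually, the flow Federighi describes is a routing decision: handle the request with the on-device model when possible, otherwise forward only the task-relevant slice of data to Private Cloud Compute. The sketch below is purely illustrative; Apple has not published an API for this, so every name, threshold and filter here is a hypothetical stand-in rather than Apple's implementation.

```python
from dataclasses import dataclass

@dataclass
class Request:
    task: str
    personal_context: dict   # data already on the device (contacts, calendar, etc.)
    complexity: float        # hypothetical difficulty score between 0.0 and 1.0

ON_DEVICE_LIMIT = 0.6  # hypothetical cutoff for what the on-device model handles

# Hypothetical mapping of tasks to the only fields a cloud request would need.
RELEVANT_FIELDS = {
    "summarize_email": ["email_body"],
    "plan_pickup": ["calendar", "location"],
}

def run_on_device(request: Request) -> str:
    return f"on-device result for {request.task}"

def run_on_private_cloud(task: str, payload: dict) -> str:
    # Stand-in for the larger server-based model running on Apple silicon servers.
    return f"cloud result for {task} using only {sorted(payload)}"

def route(request: Request) -> str:
    """Prefer on-device processing; send only task-relevant data to the cloud."""
    if request.complexity <= ON_DEVICE_LIMIT:
        return run_on_device(request)
    keys = RELEVANT_FIELDS.get(request.task, [])
    payload = {k: v for k, v in request.personal_context.items() if k in keys}
    return run_on_private_cloud(request.task, payload)

# A complex request ships only the calendar and location fields, nothing else.
print(route(Request("plan_pickup", {"calendar": "...", "location": "...", "photos": "..."}, 0.8)))
```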

Siri's brain transplant

Apple Intelligence will give Siri a new brain and leverage settings data and other device information. Siri will take actions inside apps on your behalf and across apps.

Siri will move through the system to find a set of actions for various intentions across applications and know your personal context.

Apple said it will continue to roll out Siri features based on data already held. Siri has been configuring settings and asking questions for years. This repository to date has basically served as a personal context engine that can now be surfaced via Apple Intelligence.

The goal for Apple is to make Siri more intelligent, helpful and integrated with you.

Many of the features across Apple native apps can be found elsewhere. Apple Intelligence essentially looked like a spin on Google's Gemini, OpenAI's ChatGPT or Microsoft's Copilot. The big difference is the architecture used--Apple Silicon to Apple Silicon in a data center--and a horizontal approach that's more Amazon Q than app specific.

Federighi said Apple Intelligence will be free across its iPhone, iPad and Mac devices with the latest OS updates. ChatGPT will be free on Apple devices without an OpenAI account. There will be upgrade opportunities for OpenAI with new models on tap in the future.

Key points about the ChatGPT integration:

  • ChatGPT will be available in Apple's systemwide tools and native apps. 
  • IP addresses sent to OpenAI are obscured and requests aren't stored. 
  • ChatGPT's data-use policies only apply for users that connect accounts. 
  • Apple users will be able to use GPT-4o for free without creating an account. 
  • Paid features will be available to OpenAI subscribers. 

For developers, Apple said it will layer LLMs into various SDKs.

"This is the beginning of an exciting new chapter of personal intelligence, intelligence built for your most personal products. We are just getting started," said Federighi.

Apple's AI strategy came at the end of what was a series of incremental software updates across its collection of operating systems. Here's a look at what's being added to select Apple platforms.

Constellation Research's take

Constellation Research CEO Ray Wang said "the main differential Apple has is their philosophical view." 

"Privacy at the core means Apple designs AI to be mindful. The AI must work for the user first not the network. It's a different way of looking at what AI can do," said Wang. "Apple is late but they have time to do it right."

In addition, Apple's 1 billion devices in the field make a great model training set.

Constellation Research analyst Andy Thurai said that Apple's AI strategy is different because of the privacy approach, custom processors, hybrid approach and multimodal features that will go mainstream. "Apple's doubling down on its core value of privacy, even in the AI age," said Thurai in a LinkedIn post. "This is a MAJOR differentiator from other vendors who often prioritize data collection over user protection."

Constellation Research's Holger Mueller said:

"Apple's new AI capabilities are not only a few years behind what Google users have available, but now also what Microsoft users can do. Creating a ‘private’ cloud in the public cloud is the price Apple has to pay to keep the ‘fig leaf on Cook’s ‘differential privacy’ going. The OpenAi deal maybe the backdoor to bolster Apple Intelligence with super big LLM for real world awareness – which is a hint that Apple sees Google more and more like a competitor."

Vision OS 2

Apple said Apple Vision Pro will roll out to more countries, with preorders starting June 13 in China, Japan and Singapore. Customers in Australia, Canada, France, Germany and the UK can preorder starting June 28. With the rollout, Apple is betting on more sales and distribution for the 2,000 apps designed specifically for Apple Vision Pro.

The company's spatial computing efforts have a heavy dose of entertainment and video updates with Vision OS 2, but there was a bevy of business updates.

Apple said utility apps such as AirLauncher, GlanceBar, Splitscreen, Screens 5 and Widgetsmith, along with easier pairing, will boost productivity in Vision OS 2. Apple also rolled out new frameworks and APIs to build 3D apps, anchor apps to flat surfaces and enable use cases by industry.

iOS 18

iOS 18 updates included personalization, support for RCS--so Android users won't be singled out and bullied for having green bubbles--and incremental features across native apps, notably Photos, Maps and Messages.

Watch OS

Apple is adding training features to measure training load, recovery time and performance during individual workouts. These features have been in Garmin devices for years.

iPad OS

iPad OS will get a floating bar at the top with many of the customization features in iOS 18. Documents will be surfaced more easily by application. There were a few Apple Pencil enhancements worth noting. Adding space for inserting words into handwritten notes was a nice touch.

Mac OS Sequoia

Mac OS gets many of the features found in iOS and iPad OS as well as Continuity tools, such as iPhone Mirroring, that make the handoff between Apple devices more seamless. Keychain was also updated for better password management.


Event Report: IBM Think Accelerates Accessible Gen AI For Clients

Get the live analysis on IBM Think and the implications for customers. Constellation Research principal analyst and CEO R "Ray" Wang shares his analysis of the announcements from IBM Think and what they mean for customers starting off on their generative AI journey.

On ConstellationTV: https://www.youtube.com/embed/wzuBgnRHieg?si=KYyyFJNAK2POVcgO

AI infrastructure is the new innovation hotbed with smartphone-like release cadence

Don't look now, but AI-optimized servers and infrastructure may be the trendiest innovation corner of technology. And the release cycle looks like it was ripped out of the Apple and Samsung playbooks.

For those that need a refresher since the smartphone industry has been boring in recent years, here's the cadence the sector used to revolve around.

  1. Apple announces new software plans at WWDC.
  2. Apple launches new iPhone with integrated stack, new processors and updates that are billed as monumental but actually have been available in Samsung devices for years.
  3. Tech buyers gobble up whatever iProduct comes out.
  4. Rinse and repeat year after year and collect money with a dash of complementary products and services.

For you Android folks, swap in Samsung or Google for Apple.

That playbook is still being utilized, but let's face it: Smartphones are a bit of a yawner these days. The new hotspot is AI infrastructure, specifically AI servers.

Nvidia also happens to be the new Apple with an integrated stack of hardware, software and ecosystem to build AI factories and train large language models (LLMs).

At Computex, Nvidia said it will move to an annual cycle of GPUs and accelerators along with a bunch of other AI-optimized hardware. "Our company has a one-year rhythm. Our basic philosophy is very simple: build the entire data center scale, disaggregate and sell to you parts on a one-year rhythm," said Nvidia CEO Jensen Huang.


Constellation Research analyst Holger Mueller said: "Nvidia is ratcheting up the game by going from one design in 2 years to one design per year. This cadence is a formidable challenge for R&D, QA and sourcing in a supply chain that's already constrained. We will see if AI hardware is immune from new offerings stopping the sale of the current offerings."

Just a few hours after Nvidia's keynote, AMD CEO Lisa Su entered the Computex ring with her own one-year cadence and roadmap. AMD, the perennial No. 2 chipmaker in most categories, is going to rake in gobs of money being the alternative to Nvidia.

The GPU may just be the new smartphone that drives tech spending. This AI stack also has a downstream effect on Nvidia partners such as Huang fave Dell Technologies as well as Supermicro and HPE.

You'd never know it from the stock fall in the last week, but Dell has been kinda cleaning up the AI server category. Supermicro is doing well too. These OEMs will ride along with the annual GPU cadence. And now HPE is joining the parade as enterprises buy AI systems.

Jeff Clarke, Chief Operating Officer at Dell Technologies, said the backlog for AI-optimized servers was up 30% in the first quarter to $3.8 billion. The problem is the margins on those servers aren't up to snuff yet.

Lenovo is also seeing strong demand. Kirk Skaugen, President of Lenovo's Infrastructure Solutions Group, said the company's "visible qualified pipeline" was up 55% in its fiscal fourth quarter to more than $7 billion. Note that visible pipeline isn't backlog.

"In the fourth quarter, our AI server revenue was up 46%, year-to-year. On-prem and not just cloud is accelerating because we're starting to see not just large language model training, but retraining and inferencing," said Skaugen. "We'll be in time to market with the next-generation NVIDIA H200 with Blackwell. This is going to put a $250,000 server today roughly with eight GPUs, will now sell in a rack like you're saying up to probably $3 million in a rack."

Clarke noted that Dell Technologies will sell storage, services and networking around its AI-optimized servers. Liquid cooling systems will also be a hot area. He said:

"We think there's a large amount of storage that sits around these things. These models that are being trained require lots of data. That data has got to be stored and fed into the GPU at a high bandwidth, which ties in network. The opportunity around unstructured data is immense here, and we think that opportunity continues to exist. We think the opportunity around NICs and switches and building out the fabric to connect individual GPUs to one another to take each node, racks of racks across the data center to connect it, that high bandwidth fabric is absolutely there. We think the deployment of this gear in the data center is a huge opportunity."

Super Micro Computer CFO David Weigand has a similar take. "We've been working on AI for a long time, and it has driven our revenues the past two years. And now with large language models and ChatGPT, its growth has obviously expanded exponentially. And so, we think that will continue and go on," said Weigand. "We think it's going to be both higher volume and higher pricing as well, because there is no doubt about the fact that accelerated computing is here to stay."

HPE's latest financial results topped estimates and CEO Antonio Neri said enterprises are buying AI systems. HPE's plan is to differentiate with capabilities such as direct liquid cooling, one of three ways to cool systems. HPE also has traction with enterprise accounts and saw AI system revenue surge accordingly. Neri said the company's cooling systems will be a differentiator as Nvidia Blackwell systems gain traction.

Neri said: "We have what I call 100% or liquid cooling. And this is a unique differentiation because we have been doing 100% direct liquid cooling for a long time. Today, there are six systems in deployment and three of them are for generative AI. As we go to the next generation of silicon and Blackwell systems will require 100% direct liquid cooling. That's a unique opportunity for us because you need not only the IP and the capabilities to cool the infrastructure, but also the manufacturing side."

Indeed, there's a reason we're watching Huang and Su keynotes and dozing off as yesterday's innovation juggernauts speak at various conferences. AI infrastructure is just more fun.

And just to bring this analogy-ridden analysis home, it's worth noting that Intel spoke at Computex too. Yes, Intel is playing from behind on AI accelerators and processors, but is playing a role in the market. The role? Midmarket player for the most cost-conscious tech buyer.

If Nvidia is Apple and AMD is more like Google/Samsung, then Intel is positioned to play Motorola and fill the not-quite-premium phone role when it comes to AI. At Computex, Intel CEO Pat Gelsinger went through the cadence of AI PC possibilities and new Xeon server chips. And then Intel said this about the company's Gaudi 3 accelerators, which should be available in the third quarter.

"A standard AI kit including eight Intel Gaudi 2 accelerators with a universal baseboard (UBB) offered to system providers at $65,000 is estimated to be one-third the cost of comparable competitive platforms. A kit including eight Intel Gaudi 3 accelerators with a UBB will list at $125,000, estimated to be two-thirds the cost of comparable competitive platforms."

ASUS, Foxconn, Gigabyte, Inventec, Quanta and Wistron join Dell, HPE, Lenovo and Supermicro with plans to offer Intel Gaudi 3 systems.

Bottom line: AI systems will create a big revenue pie even if most of the spoils go to Nvidia. "I'm very pragmatic about these things. Today in generative AI, the market leader is Nvidia and that's where we have aligned our strategy. That's where we have aligned our offerings," said Neri. "Other systems will come in 2025 with other accelerators."


Don't forget the non-technical, human costs to generative AI projects

Don't forget the non-technical costs of generative AI projects, as there is a bevy of ongoing maintenance to consider, said Lori Walters, Vice President, Claims and Operations Data Science at The Hartford.

Speaking at an Amazon Web Services (AWS) financial services event for analysts this week, Walters provided several takeaways about generative AI efforts and how they fit into the broader picture. I'll have more takeaways from a broader range of financial services CxOs, but Walters' comments stuck out.

Simply put, genAI projects carry human costs and require expertise and ongoing maintenance that are often overlooked. Walters said:

"We spend a lot of time talking about the cost to build, about the training costs and the inference cost. But what we're seeing is the human capital associated with genAI is significant. Do not underestimate it. It's not just the initial build but how do you sustain these solutions. Prompt engineering is really critical, but we're finding there's a lot of work enhancing and maintaining models. The prompts with models is brittle and don't extend well. There's a maintenance cycle to re-engineer prompts. We're not talking about moving from GPT-4 to Claude. There's a lot of engineering even moving from GPT 3.5 to GPT 4.0.

The other aspect of human capital is the subject matter expert component. We have SMEs from the business that have to define what a good summary needs to look like. What's the ground truth around that? We don't have any label data and our SMEs are working with us to develop the ground truth. And then as we are producing outcomes, they're having to validate it and test it and develop accuracy metrics so we know it is safe to put in production. I think planning on that human capital is something we're not talking about."

Those words of wisdom deserve a callout since the technology sector isn't really talking about the human capital involved. And certainly, we're not hearing about the prompt engineering involved with swapping models. The other notable item was that humans in the loop have ongoing validation chores.
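One practical way to manage the prompt brittleness Walters describes is to treat prompts as versioned, model-specific templates rather than strings scattered through application code, so a model swap becomes a template change plus re-validation instead of a rewrite. The sketch below is a generic illustration, not The Hartford's implementation; the task names, models and templates are hypothetical.

```python
# Minimal prompt registry: each (task, model) pair gets its own versioned template,
# so re-engineering prompts for a new model doesn't mean hunting hard-coded strings.
PROMPTS = {
    ("claim_summary", "gpt-4"): (
        "v3",
        "Summarize the claim below in five bullet points for an adjuster:\n{claim_text}",
    ),
    ("claim_summary", "claude-3"): (
        "v1",
        "You are assisting an insurance adjuster. Write a five-bullet summary of:\n{claim_text}",
    ),
}

def build_prompt(task: str, model: str, **fields) -> str:
    """Look up the template for this task/model pair and fill in its fields."""
    version, template = PROMPTS[(task, model)]
    print(f"using template {task}/{model} {version}")
    return template.format(**fields)

# Swapping models only touches the registry entry; callers stay unchanged.
prompt = build_prompt("claim_summary", "gpt-4", claim_text="Water damage reported on 5/2 ...")
```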

Also see: Intuit’s Bet on Data, AI, AWS Pays Off Ahead of Generative AI Transformation | Rocket Companies’ strategy: Generative AI transformation in turbulent market

Thinking through the human costs is just one of the takeaways from Walters worth highlighting. Here are a few others from Walters' AWS talk in New York.

Building blocks that need to be in place before genAI

Walters said the generative AI journey is smoother if there are other transformational building blocks already in place.

Preprocessing data. Technologies like Optical Character Recognition (OCR) are still critical to get documents in digital form so the LLMs can read them. "A lot of the work is actually in that pre-processing," said Walters.
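As a concrete illustration of that preprocessing step, the sketch below runs OCR on a scanned page before any LLM sees it. It uses the open-source pytesseract and Pillow libraries with a placeholder summarize() call; the file name and downstream step are hypothetical stand-ins, not The Hartford's pipeline.

```python
# OCR preprocessing sketch: scanned documents must become plain text before an
# LLM can read them. Requires the Tesseract engine plus pytesseract and Pillow.
from PIL import Image
import pytesseract

def extract_text(path: str) -> str:
    """Run OCR on a scanned page image and return the recognized text."""
    return pytesseract.image_to_string(Image.open(path))

def summarize(text: str) -> str:
    # Placeholder for whatever LLM call the real pipeline makes.
    return text[:500]

if __name__ == "__main__":
    raw_text = extract_text("claim_form_scan.png")  # hypothetical scanned claim form
    print(summarize(raw_text))
```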

The cloud. "The cloud is a means to the end. It's not really the end, but has been a very important accelerator. We are in the middle of a very aggressive technology agenda focused on bringing the power of data and AI together to transform our end-to-end business," said Walters.

There were advantages to bringing analytics, data and technology ecosystems to the cloud. The benefits were faster product cadence and being able to spot areas that needed improvement.

Machine learning is the precursor. "We have several hundreds of models in production deployed across all of our business segments. Our business leaders have been able to see and feel not only the potential but how to put it to work," said Walters. "One of the most important investments we've made over the past few years with our move to AWS was MLOps, the machine learning equivalent of DevOps, and it allows us to automate and standardize the life cycle of a model."

Then it's AI before genAI. Walters said mature AI practices are a good start to scale into genAI. "There was an early tendency to treat genAI as something different, but you need platform, operating models and governance," she said.

Flexible platforms. "The foundational models are evolving daily so having the flexibility to get model choice, but having a modular ecosystem is critical. Plug and play is a necessity here more than we've ever seen before. The state of art today is not the state-of-the-art tomorrow," she said.

Foundation models are just one piece of a more complicated orchestra. "What we're finding is GenAI is often the smaller and maybe the easier piece. You need a platform that plays well with the rest of the ecosystem and integrates with data and AI services," said Walters.

Governance. The Hartford has taken some existing governance frameworks and extended them to genAI, but the effort is in the experimental phase. Walters said she wants to automate governance, but there are challenges with the fluid regulatory environment.

Business buy-in. "Early in our journey we were focused on building buy-in on the art of the possible. We crossed that seven or eight years ago where our business leaders wanted to start investing more in machine learning and AI. From there the focus was on how do you scale," said Walters.

The willingness to experiment with discipline. She said:

"We are approaching generative AI with disciplined urgency. There's a lot of hype. There's a lot of noise. And the environment is changing minute by minute. So, we've really focused on being intentional about priorities and focus. Generative AI is just another tool in the toolkit--a very powerful tool. But it is one that we are validating that complements our existing AI capabilities well. It is allowing us to tap into unstructured data that was largely untapped by traditional models."


Smartsheet preps new pricing model June 24

Smartsheet is planning to roll out a new pricing and packaging model on June 24 to replace the four pricing tiers it has now.

Today, Smartsheet has four tiers:

  • A free tier for one user and up to two editors.
  • A Pro plan for $7 per user/month with a maximum of 10 users and unlimited viewers.
  • A Business plan for $25 per user/month with a minimum of three users and unlimited editors.
  • Enterprise accounts.

The new model will have broader access to Smartsheet features including AI and include more users in a plan. Speaking on Smartsheet's first quarter earnings call, CEO Mark Mader said the new model "pairs a greater number of licensed users with a lower price per user on business and enterprise plans."

New customers will get the new pricing model June 24, with existing annual customers transitioning in 2025. Smartsheet members will be licensed instead of today's model of paid editors and free collaborators. Going forward, Smartsheet will have provisional member access, which enables people in an enterprise to create and contribute to a workflow before being added to a subscription.

Smartsheet's model change comes as Asana and Monday.com are rolling out AI features for workforce management. For instance, Asana launched AI teammates built on its Asana Work Graph and recently reported strong first quarter results. Asana delivered revenue growth of 13% in the first quarter to $172.4 million. Monday in May delivered fourth quarter revenue of $202.6 million, up 35% from a year ago.

The previous model revolved around creators who made edits and contributed to processes. The new model is based on contributions, so the number of platform users will grow. Every customer will also have access to AI on Smartsheet.

Mader said:

"This will drive increased virality by enabling organizations to make available to employees the full breadth of the platform in a low-friction manner while allowing system admins to manage their users more effectively. While existing customers will transition to the new subscription model with their renewal dates in 2025, we anticipate demand from some organizations wanting to benefit from the new subscription model sooner and we will accommodate them as appropriate."

Mader said Smartsheet is seeing strong adoption of its AI tools and nearly half of enterprise customer plans have used Smartsheet AI for business logic and content via prompts.

The bet for Smartsheet is that the new pricing model will create a flywheel where it has more users of its genAI tools and data for insights.

Details of the pricing model will be revealed later this month.

Smartsheet said that it has piloted new pricing plans with customers. The reaction from both large enterprises and SMBs has been positive. Mader said:

"When I think about how someone responds when you say, Mark, what does it cost to license Smartsheet? The first two things out of my mouth can't be it depends, so the clarity in this new model is super, super high."

Other key items:

  • Smartsheet delivered better-than-expected fiscal first quarter results. The company reported a first quarter net loss of $8.9 million, or 6 cents a share, on revenue of $263 million, up 20% from a year ago. Non-GAAP earnings were 32 cents a share.
  • For the second quarter, Smartsheet projected revenue of $273 million to $275 million with non-GAAP earnings of 28 cents a share to 29 cents a share. For fiscal 2025, Smartsheet sees revenue of $1.116 billion to $1.12 billion and non-GAAP earnings of $1.22 a share to $1.29 a share.
  • Scale is an issue for CIOs just as much as features. CIOs are looking for work management platforms to scale across diverse business units and needs with governance. "There's not a single customer that I've seen who is just willy-nilly approaching it on a feature set dimension," said Mader. "I think CIOs are recognizing that it is a diverse environment. There will be multiple tools, but where they're placing their bigger bets, they are doing that in fewer places."
  • Smartsheet saw a slow start to the first quarter in February, but the enterprise pipeline grew and the quarter finished strong in April.

 


ServiceNow's Roadmap as a Platform Company | Interview with CSO Nick Tzitzon

ServiceNow's business process reengineering: "We exist to clean up the mess."

Constellation Research founder R "Ray" Wang catches up with ServiceNow Chief Strategy and Corporate Affairs Officer Nick Tzitzon during #Knowledge2024 to discuss ServiceNow's latest advancements, efficiencies, and opportunities as a platform company.

📌 Topics include use cases, margin compression, the adoption of #AI in business, and gateways to growth in the industry.

This is one convo you don't want to miss! ⬇

On ConstellationTV: https://www.youtube.com/embed/9ENs2kKXdXw?si=ESmcXKpB_trnypnT

AI: Marketing Strategy, ESG, Servers | ConstellationTV Episode 81

Welcome to ConstellationTV episode 81! 🎉 Hear co-hosts Doug Henschen and Larry Dignan analyze the latest enterprise #technology news (#AI servers, Snowflake's new CEO, #earnings).

Then watch interviews about AI's impact in different spheres of business - Liz Miller with Tara DeZao of Pegasystems about the convergence of #marketing strategy and AI, then Larry sits down with Sustainability 50 executive Sandeep Chandna of Tech Mahindra to learn about #AI transforming #ESG initiatives.

0:00 - Introduction: Meet the hosts
1:42 - Enterprise technology news coverage
10:27 - CR #CX Convos: When Marketing Strategy Meets AI
20:49 - Sustainability 50 Interview with Sandeep Chandna
31:08 - Bloopers!

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. Tune in live at 9:00 a.m. PT / 12:00 p.m. ET every other Wednesday!

On ConstellationTV: https://www.youtube.com/embed/VAFShrTY6Fs?si=ebn6cvvW56d3Pk-X

Cisco and Nvidia: Networking partners or frenemies?

Cisco at its Cisco Live conference outlined a partnership with Nvidia to launch Nexus HyperFabric AI Clusters, a data center networking architecture for generative AI deployments.

In a release, Cisco billed HyperFabric as a breakthrough. The combination includes Cisco networking with Nvidia GPUs and AI software. The effort is aimed at enterprises looking to deploy generative AI on premises.

On the surface, this Cisco-Nvidia partnership sounds swell, but I don't expect it to end all that well. Cisco and Nvidia look like classic frenemies to me.

Nvidia's uncanny knack for staying ahead

Nvidia's roadmap unveiled at Computex featured a heavy dose of networking. Specifically, Nvidia is going after Ethernet genAI deployments. Here's what Nvidia CEO Jensen Huang said on the company's first quarter earnings call.

"Strong networking year-on-year growth was driven by InfiniBand. We experienced a modest sequential decline, which was largely due to the timing of supply, with demand well ahead of what we were able to ship. We expect networking to return to sequential growth in Q2. In the first quarter, we started shipping our new Spectrum-X Ethernet networking solution optimized for AI from the ground up.

It includes our Spectrum-4 switch, BlueField-3 DPU, and new software technologies to overcome the challenges of AI on Ethernet to deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet.

Spectrum-X is ramping up in volume with multiple customers, including a massive 100,000 GPU cluster. Spectrum-X opens a brand-new market to NVIDIA networking and enables Ethernet only data centers to accommodate large-scale AI. We expect Spectrum-X to jump to a multibillion-dollar product line within a year."

In short, Nvidia is InfiniBand today and Ethernet tomorrow. Cisco and Nvidia's HyperFabric effort, which is expected to be available in early 2025, includes the following:

  • Cisco cloud management software.
  • Cisco 6000 series switches.
  • Cisco Optics family of modules.
  • Nvidia AI Enterprise and Nvidia NIM inference microservices.
  • Nvidia GPUs, BlueField-3 SuperNIC and Nvidia reference designs.
  • VAST Data Platform.

On Cisco's May third quarter earnings conference call, CEO Chuck Robbins said the company had confidence that it would book $1 billion in AI revenue for fiscal 2025. "We believe we are well-positioned to be the key beneficiary of AI enterprise application proliferation with the breadth of our portfolio and the vast amounts of data we see," said Robbins.

Nvidia's three-year roadmap, outlined at Computex, features a lot of switches.

In a Computex press release, Nvidia noted: "NVIDIA Spectrum-X, the first Ethernet fabric built for AI, enhances network performance by 1.6x more than traditional Ethernet fabrics. It accelerates the processing, analysis and execution of AI workloads and, in turn, the development and deployment of AI solutions."

Nvidia is selling Spectrum-X to cloud providers today, but rest assured the enterprise will be a target too. For now, Nvidia needs Cisco's sales channel. Tomorrow it may not, and Nvidia's networking efforts, as well as its partnerships with Cisco rivals, could wind up being a threat.

It's also worth noting that Cisco and Nvidia are targeting on-prem deployments for generative AI. Speaking at an investor conference, Cisco CFO Scott Herren said the two parties can both grab a nice chunk of the AI pie.

Herren said:

"If you're an enterprise and you're looking for inferencing or low-level training, this (HyperFabric) is what you need, AI in a box for you.

Nvidia came to us for a couple of reasons. One is our expertise in managing enterprise networks and the significant footprint we have. But also due to the enterprise reach that we have from a sales standpoint."

Today, the Cisco-Nvidia efforts are nascent. How this partnership develops over time will be fascinating to watch.

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"Cisco and Nvidia are betting that AI will run on premises and are constructing a joint offer to allow enterprises to run AI in their corporate data centers. Nvidia has control of that market, as when it makes available GPUs, this will be an option. At the moment all GPUs go to cloud vendors who pay top dollar for them. Likely when AMD and Intel offerings will hit the road - Nvidia will make GPUs for on prem available. A setup where training is in the cloud and runtime in on premise / or localized on premises learning is also likely. At the same time the partnership puts pressure on Nvidia future foes AMD and Intel on the networking side. But Cisco will not be afraid to partner with Nvidia rivale neither - no partnerships are exclusive these days. That competition is great for enterprises as it increases choice."

 


Zoholics 2024: Zoho aims to democratize CRM, infuse AI across platform, court developers, create cybersecurity stack

Zoho outlined a more collaborative vision of customer relationship management, launched a series of AI-driven enhancements across its platform, upped its developer game and created a cybersecurity stack.

The barrage of announcements at the company's Zoholics 2024 conference in Austin is uniquely Zoho. The company is challenging the status quo in enterprise software and aiming to democratize various workflows and processes with its affordable stack.

Here's a look at the announcements from Zoholics 2024.

Zoho CRM for Everyone

Zoho previewed Zoho CRM for Everyone, which is aimed at bringing CRM to all teams involved in customer operations.

Although sales teams are the custodians of customer relationships, Zoho CRM for Everyone is aimed at coordinating all the other teams involved in delivering customer value.

These teams include solutions engineering, contract management, sales enablement, customer onboarding and advocacy. The coordination and communication of the various customer teams can improve customer journeys and mitigate risks.

Constellation Research analyst Liz Miller said Zoho CRM for Everyone recognizes that an entire enterprise is involved with driving revenue. "This is a solution that meets the modern sales and engagement strategy, reaching beyond the stagnant notion that CRM is a record or a database…it puts CRM at the center of the project called Revenue," said Miller.

Mani Vembu, Zoho Chief Operating Officer, said CRM access has historically been rationed and that has created silos that hamper customer experiences.

The components of CRM for Everyone aimed at breaking down the silos include:

  • Team Modules: Business teams can create team level data modules by themselves with IT governance. Fields, permissions and  workflow automation can be customized by teams. The modules are held in a dedicated team space with customer context and processes.
  • Requester Profile: When a team needs a deliverable from a different team they can place a request, track status and collaborate.
  • A new user experience: Zoho revamped its Zoho CRM interface for better usability across roles and functions. Users can switch between modules and use no-code and low-code experiences to manage workflows.

Zoho CRM for Everyone is available for customers requesting early access. New capabilities will be added over the next several weeks.

Constellation Research analyst Doug Henschen said:

"CRM for Everyone is clearly the big announcement here. There have been many attempts to reimagine CRM over the years. Zoho is providing blank-slate support to customize for cross-functional teams and processes across all who touch customers, starting with marketing. Despite Zoho's always-value-leading pricing, I'm sure it won't necessarily be easy or a quick win to convince companies to extend CRM far beyond sales. But Zoho is disciplined and always plays the long game. They'll stick with the "For Everyone" push, iterating and tweaking to show customers the value in supporting cross-team communication and collaboration around customer interactions." 


Indeed, the vision for CRM is notable. 

Collaboration enhancements with AI, workflow automation, industries

Zoho updated its collaboration portfolio to focus on unified project management and asynchronous collaboration across global enterprises, partners and industries. These updates have been added to Zoho Projects, Zoho Notebook, Zoho WorkDrive and Zoho Sign.

Raju Vegesna, Zoho's Chief Evangelist, noted that collaboration across multiple time zones, devices and networks is an enterprise pain point. He added that Zoho has created platform services for AI, unified search and process automation to fuel collaboration.

Here's a look at the new collaboration additions that are generally available in Zoho Projects, Zoho WorkDrive, Zoho Sign and Zoho Notebook:

  • Zoho Projects now has natural language processing capabilities from Zia, Zoho's AI engine. Zia can summarize charts and dashboards to generate task recommendations.
  • Zoho Notebook uses Zia for notetaking, summarization, task management and tagging of topics among other things. Zoho Notebook is integrated into Zoho Projects and WorkDrive.
  • Blueprint, a visual workflow automation tool, is now available in Zoho Projects, WorkDrive and Sign. Blueprint provides materials and context as processes are created.
  • Zoho WorkDrive now includes workflow automation to automate content procedures across departments and teams.
  • Zoho Sign now offers the ability to create reusable templates for orders, sales orders and various documents.

While collaboration enhancements went broad across Zoho's platform, the company also went deep with workflows developed for industries. Here's a look:

  • Construction: Zoho Projects can troubleshoot issues remotely with Zoho Lens integration. Zoho Lens is an augmented reality remote assistance tool.
  • Healthcare: Zoho WorkDrive gets new Data Loss Prevention (DLP) security controls that can be used to flag and classify sensitive data.
  • Manufacturing: Blueprint in Zoho Projects can be used to chart and manage process pipelines and automate steps.
  • Other industries, including aviation, can also build out custom tools.

Constellation ShortList™ Enterprise File Sharing and Cloud Content Management

Zoho ups developer game with Catalyst by Zoho, Zoho Apptics

At Zoholics 2024, the company also overhauled its developer tools with new pro-code services and native app analytics.

With the enhancements, Zoho is targeting application teams and professional developers looking to create custom tools. Zoho has also tightly integrated Catalyst and Apptics, which is focused on privacy.

Catalyst, which integrates with the Zoho ecosystem and third-party applications, has been upgraded with the following tools:

  • Signals, which routes events from Zoho services, third parties and custom apps to handlers using publishers and subscription rules.
  • NoSQL Database for storing structured, semi-structured and unstructured data across data types.
  • Slate, a managed front end so developers can build customized interfaces.
  • CI/CD Pipeline, which automates tests and builds continuous delivery pipelines.

Zoho Apptics, which is generally available, consolidates analytics across app usage, performance, engagement and growth in one console. The unified view is designed for everyone involved in application development and management.

Other tools in Zoho Apptics include:

  • Ability to prompt Android and iOS users for ratings and updates.
  • Tools that prioritize data privacy and security.
  • Analytics support across multiple platforms with web analytics coming soon.

Catalyst has a free tier and pay-as-you-go or subscription-based plans. Apptics has a free plan and a Pro plan starting at $62 a month if billed annually.

Zoho integrates its security offerings into one stack

Zoho launched a security tech stack that includes its secure browser Ulaa, Zoho Directory, Zoho OneAuth, and Zoho Vault.

As Zoho combined its security tools, it also upgraded key features including:

  • Ulaa now features machine learning powered phishing detection as well as crypto mining detection in addition to user privacy tools.
  • Zoho Directory has been upgraded with conditional access and routing policies. Users can also upload their own encryption keys using Bring Your Own Key. Enterprises can also authenticate enterprise Wi-Fi and VPNs with Zoho Directory Cloud RADIUS.
  • OneAuth now offers Smart Sign-in with a secure QR code. Additional tools make it easier to enforce multi-factor authentication (MFA) across organizations. Admins can also kill unauthorized sessions.
  • Zoho Vault can detect breached passwords, create compliance reports and store confidential data. Vault also offers browser extensions, mobile apps and desktop apps.

Constellation Research analyst Chirag Mehta said:

"With the rise of sophisticated cyberattacks, organizations are actively seeking solutions to enhance their proactive information security measures. Zoho's new security stack is a significant advancement in meeting these critical needs by offering robust protection against tracking, breaches, and attacks, ensuring both security and productivity for businesses. By integrating multi-factor authentication, secure password management, a privacy-first browser, and workforce identity and access management, Zoho aims to provide a comprehensive security approach. This holistic strategy helps safeguard sensitive data, build customer trust, and ultimately protect the organization's bottom line."
