Results

Dell Q2 strong as AI-optimized servers, workstations see demand surge

Dell Technologies reported better-than-expected second quarter results and said it saw strength in AI-optimized servers, storage and workstations that can run AI workloads locally.

The company reported second-quarter earnings of 63 cents a share on revenue of $22.9 billion, down 13% from a year ago, but 10% higher than the first quarter. Non-GAAP earnings for the quarter were $1.74 a share.

Wall Street was expecting Dell Technologies to report earnings of $1.14 a share on revenue of $20.85 billion.

Dell Technologies, like HPE, said it was seeing strength as cloud providers and enterprises build infrastructure for AI workloads. Dell has a partnership with Nvidia for AI-optimized infrastructure.

Jeff Clarke, Dell's chief operating officer, said the company saw "a better demand environment" and AI "is already showing it's a long-term tailwind, with continued demand growth across our portfolio."

In prepared remarks, Clarke said Dell was cautious about the quarter, but demand "improved at a faster rate than we anticipated, particularly as we moved into June and July."

Clarke added that Dell executed well and was more selective on deals and pricing.

Dell has built out its lineup of generative AI gear and has validated designs with Nvidia. Other generative AI-optimized products include Dell PowerEdge XE9680 servers and Dell Precision workstations with up to four Nvidia RTX 6000 Ada generation GPUs.

Clarke said:

"From a solutions perspective, we saw significant strength in AI enabled servers. PowerFlex and PowerStore demand grew within our storage portfolio. PowerFlex, our proprietary software defined storage solution, has now grown for eight consecutive quarters, with demand in Q2 more than doubling year-over-year. Workstation demand grew and was another bright spot that will continue to benefit from the rise of AI. Developers and data scientists can now fine-tune Gen AI models locally before deploying them at scale."

The company's Infrastructure Solutions Group had second quarter revenue of $8.5 billion, down 11% from a year ago. Storage revenue was $4.2 billion with servers and networking revenue of $4.3 billion. Operating income was $1 billion.

"In Q2 alone, we saw unprecedented strength from our PowerEdge XE9680. It's the fastest ramping new solution in Dell history and builds on the success of other GPU enabled servers we have been selling for years," said Clarke.

Although Dell didn't provide an outlook, the demand surge for AI-optimized servers appears to be ongoing. AI servers were 20% of server order revenue in the first half of the year and Dell has $2 billion of XE9680 orders in backlog with a strong sales pipeline.

Dell's Client Solutions Group had second quarter revenue of $12.9 billion, down 16% from a year ago. Commercial client revenue was $10.6 billion and consumer revenue was $2.4 billion. Operating income was $969 million.

Clarke added that many AI workloads will be on-premises or at the edge due to latency, data security and costs. Customers are focused on using generative AI for customer operations, content creation, software development and sales.


Cybersecurity platforms spar over data, generative AI, wallet share

Cybersecurity platforms are consolidating, and enterprise buyers are evaluating datasets, intelligence, integration and generative AI capabilities. What's unclear is how many cybersecurity platforms will win.

The three next-gen cybersecurity platforms--Palo Alto Networks, CrowdStrike and Zscaler--all have AI capabilities, strong platforms, data signals and heady revenue growth. The elephant in the cybersecurity room is also clear: Microsoft.

CrowdStrike's second quarter earnings highlighted the moving parts well. CrowdStrike hasn't shied away from taking a few jabs at Microsoft. CEO George Kurtz noted:

"A major auto manufacturer tried but failed to consolidate their security on Microsoft E5. This company's security team quickly realized Microsoft's complexity, multiple consoles, lack of integration, missed detections and complex deployments hampered their ability to defend themselves and consolidate. This customer is now consolidating on the Falcon platform with Falcon Complete for Endpoint, Identity and Cloud. Now with a single agent, single user interface and single platform, they have complete visibility across their endpoints, cloud and identities and the ability to stop threats in real time. By moving from expensive Microsoft E5 to CrowdStrike, organizations can save 50% plus per user per year on Microsoft licensing costs, adding up to millions of dollars of savings."

That quote landed just a few months after CrowdStrike's Investor Day, where a section of the presentation was devoted to Microsoft and how CrowdStrike wins 8 out of 10 times when an enterprise customer tests the two platforms.

Microsoft's Charlie Bell, Executive Vice President of Microsoft Security, was speaking at an investor conference the same day as CrowdStrike's earnings. "I think we're one of the major beneficiaries of the consolidation move. We see healthy growth. We're now a million organizations protected, and that number grew by 26% last year," said Bell. "The number of customers who are using more than four workloads, that number has gone up by 33%. I think there's a lot of optimization that people were doing."

Bell added that Microsoft's AI efforts go beyond ChatGPT. "We often say security is a team sport. Well, within the AI world, building a copilot is this team sport. It's not just the LLM, it's specially trained models," explained Bell. "One of the beauties of being a cloud provider is you don't just get to see one environment, you get to see lots of environment. And so there's a data asymmetry that works to our advantage. We do 65 trillion signals a day processed within our products. And the fact that we have all that data, I think, is a huge advantage."

CrowdStrike has a generative AI offering called Charlotte AI that promises to create virtual security analysts and help enterprises respond to threats faster. Charlotte AI, which will be priced in the weeks ahead, leverages CrowdStrike's data.

More: Palo Alto Networks: Takeaways from a Friday afternoon treatise

These cybersecurity platforms are arguing that data is the differentiator since it can be used to train models to read and react to incidents faster.

Palo Alto Networks will focus on "precision AI" that can't be wrong. "We have to build a lot of our own models. We have to train them. We have to collect first-party data. We have to understand the data. Today, we collect approximately 5 petabytes of data. Yes, 5 petabytes of data on behalf of our customers and analyze it for them to make sure we can separate signal from noise and take that signal and go create security outcomes for our customers," said CEO Nikesh Arora.

CrowdStrike's Kurtz was asked how much of a generative AI and data moat the company has when the big guns are all talking the same game. Kurtz said curation of the data set matters as much as the petabytes involved. He said:

"It isn't just about the most data. You'll hear that from a lot of vendors. It's really about sort of the curated data set because when we think about generative AI, it actually has to be trained. We have a very well-defined training set that's annotated based upon all the threat hunting that we've done over the last 10 years. So we believe our 10-year head start in terms of having a data set that's actually curated is going to give us a distinct advantage of helping our customers. Then it's a foundational platform component, which is made available to every other service on the platform, which is different than others. We'll see how it all unfolds, but initial customer reaction has been very positive."

Bottom line: Four major security players are looking to blend data, signals and platforms in cybersecurity. These four can take business from smaller players for multiple quarters. What'll be interesting to see is how these cybersecurity giants take business from each other.

Constellation Research's take

Liz Miller, Constellation Research analyst, said:

"AI has long been touted as a potential savior for security, especially security operations centers that have long been overwhelmed by lackluster signals setting off an avalanche of alerts that are time consuming and tedious. In this regard the big players like Palo Alto and especially Microsoft are particularly well positioned with an expansive and comprehensive portfolio to train and fine tune models. However, where the training needs to focus is on automating the workflows around the work of security.

It may not be time to count out IBM in this AI-for-cyber mix. IBM is looking at everything from protecting the data that is now being randomly splashed into enterprise business and customer graphs to powering risk analysis for incident summaries that are based on fine-tuned, high-fidelity reports. IBM's managed services, including MDR and IDPS solutions, are turnkey, and most include its “X-Force” response team, which now has AI added as an army of support.

But there is an even greater threat to security platforms and their wallet share given this AI evolution. Organizations have started to admit that they are diverting budget away from security transformation initiatives and shifting those dollars into AI initiatives that are driving revenue or saving money. Yes, AI has the potential to shift the cybersecurity posture and preparedness discussion completely and dramatically, but it also has the potential to sideline security initiatives. That is not where we should be today.”


Salesforce's Dream(force) is about Data Cloud, being your single source of truth

Here's all you need to know about Salesforce's strategy. On the company's second quarter earnings conference call, Data Cloud was mentioned 34 times, Dreamforce 31 times and generative AI 21 times.

Put the three together and you pretty much know what's coming at Dreamforce Sept. 12-14.

The bigger picture here is that Salesforce will double down on Data Cloud, which the company says is its fastest organically growing cloud, and use it to expand wallet share. For a second there, I thought Salesforce CEO Marc Benioff was channeling Snowflake CEO Frank Slootman, who has said "enterprises and institutions alike are increasingly aware they cannot have an AI strategy without a data strategy."

Benioff took a more roundabout way to get to the data strategy part. Salesforce's take is that enterprise data strategy should be to unify on Data Cloud and then leverage the integration and generative AI tools with its other clouds.

Research: How Data Catalogs Will Benefit From and Accelerate Generative AI | Constellation ShortList™ Embedded Analytics Platforms for Cloud Applications | Analytics and Business Intelligence Evolve for Cloud, Embedding, and Generative AI

Benioff said (emphasis mine):

"What you can see with Data Cloud is that customers must get their data together if they want to achieve success with AI. This is the critical first step for every single customer.

We're going to get this Data Cloud turned on as fast as we can and as easily as we can for every single one of our customers.

It not only has AI built in, but it's real time, it's automated, it's integrated with the core platform. It's not some separate Data Cloud. It's an integrated part of our platform in our metadata, in our core code, like our Sales Cloud, like our Service Cloud and, as you're about to see in our new Marketing Cloud and Commerce Cloud and, of course, our core application development capabilities all inside our Data Cloud.

And our Data Cloud is so deeply integrated as part of this core metadata architecture. It's allowing our customers to quickly action all of their data from any source without the costly integration projects necessary with stand-alone data warehouses and data lakes, which they've been forced to buy, creating more islands of information and all of these independent systems and independent teams versus having one integrated data architecture.

We're moving our customers from having islands of data to having a single source of truth for all of their data. This is our greatest dream."

According to Salesforce, Data Cloud ingested 6 trillion records in the second quarter. Data Cloud was in five of Salesforce's top 10 deals in the quarter with FedEx being the biggest win.

Clearly for Salesforce, the game is stacking multiple clouds with Data Cloud being the linchpin that drives returns. Benioff said:

"As these clouds get stacked with these customers, attrition falls, customers become more successful, they develop a single source of truth. And our job is to get all of these things running on our core and getting all of these things ignited with artificial intelligence."

Will customers buy Salesforce's Data Cloud dream?

It's not exactly a new phenomenon that an enterprise vendor wants to be your single source of truth. The more data that is housed on a platform the more lock-in is created. Oracle, SAP, Salesforce, Microsoft and a bevy of others play a similar game. Toss in Snowflake and Databricks and there's big money in being the data platform of choice.

For Salesforce, it's critical that Data Cloud integrates its various services. Data Cloud is the avenue for Salesforce to sell more clouds. And the built-in integration with multiple Salesforce clouds may be well timed given enterprises are consolidating vendors. That native Data Cloud integration with Salesforce applications is likely to drive demand regardless.

However, the idea that Salesforce will be the single source of data truth is a bit farfetched. After all, data resides in multiple lakes, warehouses, databases and repositories across clouds. The game is being able to tap into those data stores seamlessly. Salesforce knows this reality already and has partnered with Snowflake as well as the hyperscale cloud providers so customers can bring their own data.

My bet: Data Cloud will be the big Salesforce theme, but MuleSoft, which connects various systems and data, will be the secret sauce.


Salesforce Q2 better than expected, outlook raised ahead of price increases

Salesforce reported better-than-expected second quarter results and raised its outlook ahead of its August price increases.

The company reported second quarter earnings of $1.28 a share on revenue of $8.6 billion, up 11% from a year ago. Non-GAAP earnings in the quarter were $2.12 a share. Wall Street expected Salesforce to report non-GAAP earnings of $1.90 a share on revenue of $8.53 billion.

In July, Salesforce said it would raise list prices across its clouds by about 9%. Those price increases started in August and weren't captured for the second quarter, which ended July 31.

Salesforce CEO Marc Benioff said the company is seeing improving demand in the second half of the fiscal year and raised its fiscal 2024 outlook for operating margins and cash flow.

On a conference call with analysts, Benioff said the company is committed to improving margins while investing in the future. Naturally, Benioff talked about AI. “We are at the dawn of a new AI innovation cycle. Every company will undergo an AI transformation with the customer at the center,” said Benioff.

He added that Dreamforce in September will feature Data Cloud advances that make it easier to access data with one architecture. Benioff took aim at data warehouses and silos and said it’s hard for customers to integrate. “Our goal is to make it easy for every customer to turn (Data Cloud) on,” said Benioff. “Customers must get their data correct if they want to move forward with AI.”

Benioff added that Salesforce is using its own AI internally and “trying to augment ourselves using Einstein.”

The company projected third quarter revenue of $8.7 billion to $8.72 billion, up 11% from a year ago. Non-GAAP earnings for the third quarter will be $2.05 a share to $2.06 a share. For fiscal 2024, Salesforce said revenue will be $34.7 billion to $34.8 billion, up 11%, with non-GAAP earnings of $8.04 a share to $8.06 a share.

When Benioff was asked about whether Salesforce could grow wallet share, he said growth will start with AI even as sales cycles are longer than usual. “I really think Dreamforce will be a catalyst for customers to grow with us as they reignite their IT budgets,” said Benioff. International expansion and industries are also key growth avenues.

In the second quarter, Salesforce's revenue by cloud showed growth between 10% and 16%. Here's the breakdown.

  • Sales Cloud revenue was $1.89 billion, up from $1.69 billion a year ago.
  • Service Cloud revenue was $2.05 billion, up from $1.83 billion a year ago.
  • Platform and other revenue was $1.64 billion, up from $1.48 billion.
  • Marketing and Commerce Cloud revenue was $1.24 billion, up from $1.12 billion a year ago.
  • Data revenue was $1.19 billion, up from $1.02 billion. Data revenue includes Tableau and MuleSoft.

By geographic region, Asia Pacific had the fastest revenue growth at 24% from a smaller base, with growth in Europe at 11% and the Americas at 10%.


UserTesting launches AI Insights Summary, marries generative AI with experience research

UserTesting launched UserTesting AI, a set of tools that will surface customer experience insights and pain points, and AI Insights Summary, which will use GPT to put verbal and behavioral data into plain English.

AI Insights Summary, outlined at UserTesting's customer conference, is interesting because it enables UserTesting to use generative AI to feel your pain with new designs and experiences. UserTesting provides tools and research to improve product and customer experiences.

According to UserTesting, AI Insights Summary will provide summaries as well as evidence since each insight points back to source video and data.

Constellation Research analyst Liz Miller, checking in from UserTesting's THiS23 conference, said UserTesting's approach with AI Insights Summary can save companies money by limiting rework as designs are tweaked to improve experiences.

"One of the things CEO Andy MacMillan emphasized in his keynote here at THiS23 is something that a lot of us in digital experience and design know all too well: the most costly part of the design process is the cost of reworking what didn’t quite land as expected," said Miller. "AI Insight Summary is a tool that can help address that issue."

Miller said AI Insight Summary's ability to synthesize multiple streams of data and surface evidence will enable a researcher, marketer, or CX or UX team to double-click into video and clips from research panels. These drilldowns can provide a lot of feedback and nuance to improve design.

"While lots of 'conversation summary' tools exist today--and many of them have been supercharged thanks to generative AI--this AI tool has also been trained in research," she said. "It is intentionally looking for those patterns, connections and actions. Imagine connecting a summary that outlines what a customer did, WHY they did and how they FELT while they were doing it."

Indeed, AI Insight Summary can show how many users completed a purchase, became frustrated and interacted with images and guides. AI Insight Summary processes verbal, design and behavioral data and then converts it to transcripts to identify anomalies and various findings.
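As a rough illustration of that kind of pipeline, the sketch below tags sessions whose transcripts contain frustration signals and counts completed purchases. The session fields, keyword list and `summarize` helper are invented for this example and do not reflect UserTesting's actual implementation:

```python
# Toy sketch of a transcript-plus-behavior summary pipeline.
# All data, field names and the keyword heuristic are illustrative assumptions.

FRUSTRATION_WORDS = {"confusing", "stuck", "frustrating", "annoying"}

sessions = [
    {"user": "u1", "completed_purchase": True,
     "transcript": "checkout was smooth and quick"},
    {"user": "u2", "completed_purchase": False,
     "transcript": "the coupon field was confusing and I got stuck"},
    {"user": "u3", "completed_purchase": True,
     "transcript": "a bit annoying to find sizes but it worked"},
]

def summarize(sessions):
    """Count completions and flag sessions whose transcript shows frustration."""
    completed = sum(s["completed_purchase"] for s in sessions)
    frustrated = [
        s["user"] for s in sessions
        if FRUSTRATION_WORDS & set(s["transcript"].split())
    ]
    return {"completed": completed, "frustrated_users": frustrated}

print(summarize(sessions))  # {'completed': 2, 'frustrated_users': ['u2', 'u3']}
```

A production system would obviously use trained models on video, audio and clickstream data rather than keyword matching, but the shape of the output — completions plus evidence-linked friction flags — is the idea Miller describes.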

Other key points about UserTesting AI and AI Insight Summary:

  • UserTesting AI highlights areas for improvement and friction points in workflows and processes in digital experiences.
  • AI will be embedded in UserTesting AI throughout the stages of the research process with a focus on sentiment, intent analysis, interactive path flows and friction.
  • Research teams can leverage generative AI to focus on more strategic work.
  • AI Insight Summary is in beta.

Miller added that AI Insight Summary is a good indication of where experience research can go. "These are the complex processes in research and UX research that are ripe for AI innovation," said Miller. "These innovations are allowing designers and product owners to ask very different questions and establish processes where repeatable, trusted and verifiable research insights can just be built into the design lifecycle."


The Driver Controls the Radio: Generative AI, Microsoft’s CoPilot and How AI is Learning the Rules of the Road

There is a fundamental road-trip rule: The driver controls the music.

The driver, you see, controls the car. The audio system is part of the car. Ergo, the driver controls the audio system. And she who controls the audio system controls the music. Can a passenger make requests? Yes. Can the passenger occupying that passenger seat be asked to manipulate settings or get us to the right playlist? Sure. But, in the end, the driver controls the music.

Today’s cars have everything from an endless variety of curated playlists to digital whoopie cushions that can deploy a range of gas-based melodies, turning the once-simple choice of suitable driving music into a potentially overwhelming extravaganza. I rely on my co-pilot’s capacity to listen, synthesize requests from around the entire vehicle, make suggestions and, in the end, execute, not distract me from the job at hand.

As a marketer, a co-pilot for this wild ride known as customer engagement is exactly what I and so many of my CMO colleagues have said we need for our teams and for ourselves. I’m not looking for someone to control the radio…just someone to help recommend some new playlists and time-saving options. What I absolutely don’t need is more complexity…more buttons to push, more apps to sift through and more Teams messages distracting me from the road ahead. I don’t need the promise of perfection, but I would welcome a shortcut or two.

This is where too many early applications of AI (and especially generative AI) have failed the average marketer. In a time when we didn’t think being a marketer armed with data and technology could get any more complex, we added AI that can spit out commands about what time emails are opened and a dozen new subject lines we should be testing. Far too many AI tools sound like a mystical “easy button” that enters this world fully formed and matured, but in reality they are barely trained to understand language, let alone understand business or marketing.

When the headlines started screaming how AI could take a marketer’s job, some marketers just laughed. Can any of you remember the last time we asked for something—an email delivery report, a segment analysis, an image for the home page—and we got the exact right thing we wanted the first time we asked for it? While marketers have been curious about the world of possible, many are also weary of all the promises unfulfilled and the implications of applications gone wrong, especially as early examples of AI malcontent have begun to emerge.

This is why I was a bit skeptical when taking a spin through the ongoing advancements Microsoft has been making with its Dynamics 365 CoPilots, now available for Sales, Customer Service and Marketing, with a preview recently announced for Field Service. Did we really need another tool? Was AI going to make a marketer’s drive that much more enjoyable? Or could this new AI implementation take our eyes off of growth?

Almost immediately I came to an understanding that the CoPilots infused into the Microsoft universe, from Dynamics to Office, were truly there to sit in the passenger seat and work with the driver…not to distract. First things first…CoPilot is not something that a Microsoft Dynamics user HAS to use. Instead, CoPilot is something that can be toggled on. CoPilot also isn’t isolated to a single functional tool which is exceedingly important to Chief Marketing Officers in this age of driving enterprise-wide growth and engagement strategies. CoPilot thrives when cross functional data is unleashed and true insights about the customer, about opportunities and about the work at hand can be accessed.

Let’s take the act of identifying opportunities for growth, a strategy that typically starts with identifying a segment or cohort ripe for profitable engagement. While this sounds simple, and like what systems have been able to do before generative AI, the reality is that the questions marketers learned to ask were limited by their knowledge of the data structures underlying their analytics or engagement systems. Questions were structured around what was known and were actually rather difficult to craft. If we pull back the curtain on the realities of posing these types of queries before large language models and generative AI innovations, marketers would either need to rely on data and analytics teams or craft basic queries from pre-seeded templates. Questions that felt easy to ask demanded query-building expertise and the ability to craft SQL.

Segment building with CoPilot allows a marketer to simply ask a question, in their natural pattern of speech, to identify a new segment. But CoPilot can take an additional step by including deeper insights in the response, delivering a look-alike segment that also includes details on customer lifetime value or known engagement or contact preferences. It takes "can I see a segment of prospects in the Los Angeles area" to "can I see a segment of prospects within the retail industry with headquarters in the Los Angeles area who have attended webinars and also have an average deal size of over $1 million year to date."
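For illustration only, here is a minimal sketch of the kind of structured filter a marketer once had to spell out by hand (or have someone craft in SQL), which a natural-language ask like the one above now stands in for. The records, field names and `build_segment` helper are hypothetical assumptions, not Dynamics 365 APIs:

```python
# Hypothetical sketch: the structured query behind the natural-language ask
# "prospects in retail, headquartered in Los Angeles, who attended webinars
# and have an average deal size over $1 million year to date."
# All data and field names are invented for illustration.

prospects = [
    {"name": "Acme Retail", "industry": "retail", "hq_city": "Los Angeles",
     "attended_webinar": True, "avg_deal_size": 1_250_000},
    {"name": "Beta Foods", "industry": "food", "hq_city": "Los Angeles",
     "attended_webinar": True, "avg_deal_size": 2_000_000},
    {"name": "Gamma Stores", "industry": "retail", "hq_city": "Seattle",
     "attended_webinar": False, "avg_deal_size": 900_000},
]

def build_segment(records, industry, hq_city, min_avg_deal):
    """Return the subset of records matching every criterion in the ask."""
    return [
        r for r in records
        if r["industry"] == industry
        and r["hq_city"] == hq_city
        and r["attended_webinar"]
        and r["avg_deal_size"] > min_avg_deal
    ]

segment = build_segment(prospects, "retail", "Los Angeles", 1_000_000)
print([r["name"] for r in segment])  # ['Acme Retail']
```

The point of the CoPilot-style experience is that the marketer never writes this filter; the natural-language question is translated into something equivalent against the underlying customer data.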

While CoPilot has easily stepped into the spotlight in recent Microsoft Dynamics-centric announcements, what caught my eye even more was Microsoft’s move to unite both the data AND the engagement layers marketing relies on by bringing Dynamics 365 Marketing and Dynamics 365 Customer Insights into one solution, now dubbed Dynamics 365 Customer Insights. This shift brings all the engagement and experience tools that had existed across the Marketing offerings into a natively integrated, robust customer data platform (CDP) that sits at the ready to ingest, harmonize and normalize customer data and accelerate the shift to execution and engagement. It intentionally and profoundly connects the stores of data that represent knowledge of the customer with the systems of engagement that power experience delivery.

Yes…in the analogy of who controls the music, what Microsoft envisions here is better integration of car and fuel so that the AI-powered CoPilot has an even greater opportunity to connect to, synthesize and deliver recommendations and actionable intelligence that make the entire driving experience that much better. While the driver is still in total control, there is more readily available information, recommendations, and ambient experiences to make driving easier and perhaps more rewarding.

But for any driver…I mean marketer…looking out at all these AI innovations, including Microsoft’s CoPilot features in Dynamics 365 Customer Insights, now is the time to raise the question of strategy and deployment. AI, especially generative AI, still requires guardrails to ensure commercial viability and enterprise readiness. It demands a massive corpus of data to train and refine direction and outcomes. It will still require the brilliance, creativity and judgment of the marketer to turn questions and ideas into profitable strategies and executions. The goal of AI for modern marketing is to have a new breed of amazing interns/assistants/copilots, capable of a decision, content and insight velocity that can move us well beyond the limitations of human scale.

My recommendation to marketers is to start driving with AI the same way you started driving a car…you learn, you practice. You don’t necessarily jump into a Ferrari and speed onto the highway on your first-ever drive. Instead, you drive mom’s old car in circles in a massive parking lot until you build confidence, knowledge and skill. Start small, but keep scaling, and know that not every suggestion will be a win…and not every AI-derived recommendation needs to be followed. These are the early days…the days when the training wheels most definitely stay on.

But these are also the days when you just roll down the window, ask your CoPilot to find something new that you can sing along to, stick your hand out into the wind and just enjoy the ride.



HPE sees GreenLake, edge strength in Q3

Hewlett Packard Enterprise saw strong demand in the third quarter for its intelligent edge products and HPE GreenLake.

The company said HPE GreenLake annual recurring revenue (ARR) was up 48% in the third quarter compared to a year ago, and intelligent edge, also known as HPE Aruba, saw revenue surge 50%.

HPE reported third quarter revenue of $7 billion, up 1% from a year ago, with earnings of 35 cents a share. Non-GAAP earnings for the quarter were 49 cents a share. Wall Street was looking for earnings of 47 cents a share on revenue of $7 billion.

Antonio Neri, CEO of HPE, said "demand improved sequentially across all key business segments, with particular strength in our HPC & AI segment." At HPE Discover, the company outlined a bevy of GreenLake additions. The timing is notable since Constellation Research analyst Dion Hinchcliffe recently published a report outlining how CXOs are moving to private cloud models for cost savings. In a nutshell, public cloud providers haven't been passing on savings, which is encouraging enterprises to move workloads such as AI on-premises.

Also see:

HPE has been able to offset slower compute demand with edge computing offerings.

By the numbers for the third quarter:

  • HPE's intelligent edge revenue was $1.4 billion, up 50% from a year ago.
  • The company's third quarter HPC and AI revenue was up 1% from a year ago to $836 million.
  • Compute revenue was $2.6 billion, down 13% from a year ago, and storage revenue was $1.1 billion, down 5% from a year ago.

As for the outlook, HPE projected fourth quarter revenue between $7.2 billion and $7.5 billion. HPE said non-GAAP fourth quarter earnings will be 48 cents a share to 52 cents a share. For fiscal 2023, HPE is projecting revenue growth to be between 4% and 6% with non-GAAP earnings between $2.11 a share and $2.15 a share.

Tech Optimization HPE greenlake SaaS PaaS IaaS Cloud Digital Transformation Disruptive Technology Enterprise IT Enterprise Acceleration Enterprise Software Next Gen Apps IoT Blockchain CRM ERP CCaaS UCaaS Collaboration Enterprise Service Chief Information Officer

HP: Demand 'has not improved as quickly as anticipated'

HP reported a mixed third quarter with sales lower than expected. CEO Enrique Lores said, "the external environment has not improved as quickly as anticipated, and we are moderating our expectations as a result."

The company reported third quarter revenue of $13.2 billion, down 10% from a year ago. HP reported earnings of 76 cents a share with non-GAAP earnings of 86 cents a share.

Wall Street was looking for HP to post earnings of 86 cents a share on revenue of $13.4 billion. The PC market malaise is expected to continue into 2024. According to IDC, 2023 PC shipments are forecast to decline 13.7% year over year to 252 million units.

IDC is expecting the PC market to grow again in 2024 but remain below 2019 pre-pandemic levels.

HP's results highlight how the PC and printing markets are bouncing along the bottom. On a conference call with analysts, Lores' message was that HP was controlling what it could. "We remain on track to deliver our cost savings targets," said Lores. 

Looking forward, Lores said that the PC market will get a boost from systems that can train and run AI models locally. These low-latency systems will be "a significant driver of PC refresh" in 2024 and beyond. Lores added that HP will be outlining its innovation for more high-powered systems in the weeks to come.

The company said personal systems revenue was $8.9 billion, down 11% from a year ago. Consumer revenue was down 12% and commercial sales fell 11%. HP delivered a personal systems operating profit of $592 million in the third quarter.

The printing business was also down from a year ago. HP said printing revenue was $4.3 billion, down 7% from a year ago. Consumer revenue was down 28% and commercial sales fell 6% from a year ago.

As for the outlook, HP projected non-GAAP earnings between 85 cents and 97 cents a share. For fiscal 2023, HP projected non-GAAP earnings to be $3.23 a share to $3.35 a share. Lores said HP expects pricing pressure and weak demand in China. "We view this moment as an opportunity to double down on the things we control," said Lores, who added HP will look for more cost savings. "We know how to manage the business strategically."

Here's a look at what HP outlined as growth areas.

Future of Work Data to Decisions Innovation & Product-led Growth New C-Suite Tech Optimization HP Chief Information Officer Chief Experience Officer

Google Cloud Next 2023: Perspectives for the CIO

I'm currently in the front row of the analyst area at Moscone North for the opening Google Cloud Next '23 keynote. While our Larry Dignan has covered all the bases with the major announcements this week, with generative AI infused into nearly everything and much more besides, I'll be looking at all the announcements with an eye for what they mean for Chief Information Officers (CIOs).

Today, AI and cloud sit at the very top of the IT agenda. Like never before, enterprise data, intelligence, and massive compute applied in innovative ways will define the very future of our organizations. The announcements here at Google Cloud Next will tell the tale of whether Google Cloud is fully prepared to take its customers on that journey. Let's take a look at what they're saying.

If you wish, skip right to the CIO takeaways for Google Cloud Next '23.

The Google Cloud Next 2023 Keynotes: Blow-by-Blow

9:05am: Sundar Pichai arrives onstage, noting that Thomas Kurian will join him shortly, and talks about digital transformation. Fast-forward four years, and Google Cloud is one of the top enterprise companies in the world.

Sundar speaks about companies wanting a cutting-edge partner in cloud, and now a strategic partner in AI. He says Google has been working for seven years on an AI-first approach. Backdrop currently shows the PaLM-E language model, their enterprise LLM.

Sundar Pichai at Google Cloud Next 23

Pichai describes using generative AI to re-imagine the search experience, which they call the Search Generative Experience, or SGE for short. You may even have had Google Search ask you to try SGE lately. He says feedback from Google Search users has been great so far.

Google has been working for years to help companies deploy AI at scale. He knows that CIOs are on the hot seat to deliver generative AI today. Now cites how General Motors is using conversational AI in OnStar, how HCA Healthcare is using Google's medical domain LLM, Med-PaLM, to provide better care, and how United States Steel is using generative AI to summarize and extract information from repair manuals.

Sundar notes that these early use cases only scratch the surface. He lauds their Vertex AI initiative, then talks about their large inventory of different foundation models that provide rich choice in how companies use AI to get work done.

He announces that one million people are now using Duet AI in Google Workspace. They have been implementing rapid changes in the product from the outset, improving it quickly using plenty of feedback.

Then Sundar announces that general availability of Duet AI within Google Workspace is officially today, August 29th, 2023.

Now speaks about the important enterprise topics of Responsible AI, security and safety, as well as their AI principles and best practices. Google is working hard to make sure users can more easily identify when generative AI content is being shown online, watermarked if needed, including invisible watermarks that don't alter the content of images and video. Sundar says they are the first cloud provider to enable AI digital watermarking on images.

Bold and responsible is Sundar's message: "I truly believe we are embarking on a golden age of innovation," building on his previous comment that AI is one of the biggest revolutions in our lifetimes.

Thomas Kurian at Google Cloud Next 23

9:17am: Now welcomes Thomas Kurian on stage. Kurian starts off, notably, thanking the many organizations working together with Google to bring Generative AI to market.

Says Google Cloud offers optimized environments for AI with Vertex AI. Everyone can innovate with AI, he says. And every org can succeed in adopting AI.

Vertex AI and Duet AI are the big focus areas of what Thomas is speaking on. Cites a number of larger companies using Google Cloud, like Yahoo and Mahindra.

Kurian cites the brand-new GKE Enterprise, which creates a new, more integrated container platform, making it even easier for organizations to adopt best practices and principles that Google has learned from running its global Web services. GKE Enterprise brings powerful new team management features. Kurian says it's now simpler for platform admins to provision fleet resources for multiple teams. He makes no mention of Anthos, but it's part of the same story.

He then announces availability of a very powerful new A3 GPU supercomputer instance powered by NVIDIA H100 GPUs, needed for today's largest AI workloads and for Google to stay at the high end of the generative AI game.

Then Kurian introduces the Titanium Tiered Offload Architecture. Offloads are dedicated hardware that perform essential behind-the-scenes security, networking, and storage functions that were previously performed by the server CPU. This allows the CPU to focus on maximizing performance for customer workloads. 

Then comes the fifth announcement, which is big private cloud news to support the on-prem cloud conversations the industry has been having lately. The new solution is Google Distributed Cloud, for sovereign and regulatory needs (he doesn't mention performance or control, but those are also notable use cases).

Finally, sixth, he tips his hat to Google's new Cross-Cloud Interconnect, for security and speed across "all clouds and SaaS" providers.

Now NVIDIA CEO, Jensen Huang, is up. He says they’re going to put their powerful AI supercomputer, DGX Cloud -- billed as an optimized multi-node AI training factory as-a-service -- right into Google Cloud.

My take: It's very smart to show off their very close relationship with NVIDIA. Kurian says they use NVIDIA GPUs to build their next-generation products. Kurian now asks Huang how NVIDIA is doing. Huang says: "Breakthrough, cutting-edge computer science. A whole new way of delivering computing. Reinventing software. Doing this with the highly respected JAX and OpenXLA. To push the frontier of large language models. To save time, scale up, save money, and save energy. All required by cutting-edge computing." He paints a compelling picture of an AI leader who knows where they are going, so it's key for Google Cloud to be seen as a strategic partner.

Huang announces PaxML, built on top of JAX and OpenXLA, a labor of love and a groundbreaking framework for configuring and running machine learning experiments.

Says large teams at NVIDIA are building the next generation of processors and infrastructure. NVIDIA is working hard on the DGX GH200, a massive system that can handle a trillion parameters, based on a revolutionary new "superchip" called Grace Hopper.

Thomas Kurian Google Cloud Next 23 Platform

"Google is a platforms company at heart. We want to attract all the devs who love NVIDIA products to create new products," says Kurian. It's clear he wants to underscore how close NVIDIA is to Google Cloud and the special nature of a relationship that brings some of NVIDIA's latest innovations to Google's cloud and AI stacks.

Now Kurian is talking about Vertex AI, which is seeing "very rapid growth," listing a whole swath of companies building with Vertex AI. Interesting how Vertex AI offers a robust, easy-to-select-from AI "model garden." My take: Model choice is going to be a very big area for the large AI clouds to compete on, and Google basically has the edge.

Then Kurian introduces the new PaLM 2 model, a major upgrade introduced in May. It has 3x the token input length, which is fast becoming another key metric that foundation models fiercely compete on: it determines how much input and output the model can process as a whole, and therefore how complex a business or technical task it can handle. This is especially important in many of the highest-impact scientific, engineering, and medical scenarios.

Kurian now touts that Google Cloud supports over 100 AI foundation models, noting they are constantly adding many of the very latest new models and showing these off as a proof point that their AI model garden offers one of the richest sets of choices currently available.

He talks about Google Cloud's strict control, protection, and security of enterprise data in VPCs and private data for AI.

Now Nenshad Bardolliwalla, an old fellow Enterprise Irregular, is onstage talking about using first-party models, along with Anthropic's new Claude 2 model and Meta's Llama 2, right in the Vertex AI model garden.

9:40am: "If you go into the Vertex Model Garden, Llama 2 is available today." Huge cheer from the audience.

32K is the token input limit for most leading-edge AI models these days, about 80 pages of text. Nenshad then inputs the entire Department of Motor Vehicles handbook into a Llama 2 prompt.
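That 80-page figure roughly checks out under common rules of thumb. Both constants below are my own back-of-the-envelope assumptions, not numbers from the keynote:

```python
# Back-of-the-envelope check of the "32K tokens ≈ 80 pages" claim.
# Both constants are assumptions, not figures from the keynote.
TOKENS = 32_000
WORDS_PER_TOKEN = 0.75   # common rule of thumb for English text
WORDS_PER_PAGE = 300     # a typical double-spaced page

words = TOKENS * WORDS_PER_TOKEN
pages = words / WORDS_PER_PAGE
print(f"{TOKENS:,} tokens ~ {words:,.0f} words ~ {pages:.0f} pages")
```

Different tokenizers and page densities will shift the estimate, but the order of magnitude holds.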

Then he asks Llama 2 to summarize all the rules around pedestrians. Llama 2 does this in mere seconds.

Now Nenshad moves on to computer vision. Generates some images of trucks for the DMV web page. Now shows an example of using enterprise brand style and images to do style-tuning on the fly. Now images “will never go out of style.”

Shifts over to Google's new AI digital watermarking. There are real challenges with watermarks, but Google thinks they've handled them successfully in rich media, though not yet in text. Notes that pixel-based watermarking often disturbs the image. Works right in Vertex AI. He shows watermarked images where the watermark is totally invisible.
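To see why an invisible watermark need not disturb an image, here is a toy least-significant-bit scheme. This is purely my own illustration of the general idea, not how Google's watermarking actually works:

```python
# Toy least-significant-bit watermark: hide one payload bit per pixel
# in the LSB, so no pixel changes by more than 1 out of 255.
def embed(pixels, bits):
    """Return a copy of pixels with each LSB replaced by a payload bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels, n):
    """Recover the first n payload bits from the LSBs."""
    return [p & 1 for p in pixels[:n]]

image = [200, 13, 77, 154, 91, 240, 33, 18]   # toy 8-pixel grayscale image
mark = [1, 0, 1, 1, 0, 0, 1, 0]               # watermark payload

stamped = embed(image, mark)
assert extract(stamped, 8) == mark
# Each pixel moved by at most 1 intensity level, i.e. invisibly:
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

Real systems are far more robust (surviving crops, compression and re-encoding), but the core trade-off is the same: embed a signal below the threshold of perception.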

Kurian is back on stage, talking about search and their new Grounding service in Vertex AI to ensure factual results and reduce hallucinations. My take: Grounding is a very important advance in making generative AI ready for higher-maturity enterprise use cases. Also has an Embeddings service. Vertex AI Conversation also makes it easy to have threaded conversations inside business applications in multiple languages.
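The intuition behind grounding fits in a few lines of Python: check each generated sentence for support in trusted source passages and attach a citation only when support exists. This is my own toy sketch of the concept using crude word overlap, not Google's actual Grounding service:

```python
# Toy grounding check: score a generated sentence against trusted
# passages by word overlap; no supporting source = possible hallucination.
SOURCES = {
    "doc1": "PaLM 2 was introduced in May with a larger token input length.",
    "doc2": "Vertex AI Search lets enterprises query their private data.",
}

def ground(sentence, sources, threshold=0.3):
    """Return the id of the best-supporting source, or None."""
    words = set(sentence.lower().split())
    best_id, best_score = None, 0.0
    for doc_id, text in sources.items():
        overlap = len(words & set(text.lower().split())) / max(len(words), 1)
        if overlap > best_score:
            best_id, best_score = doc_id, overlap
    return best_id if best_score >= threshold else None

print(ground("PaLM 2 was introduced in May", SOURCES))  # -> doc1
print(ground("The moon is made of cheese", SOURCES))    # -> None
```

Production grounding uses semantic embeddings rather than word overlap, but the contract is the same: answers either carry a citation or get flagged.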

Now Kurian explores the new Vertex AI Extensions to connect models, search, and third-party applications. Nenshad gives the demo. Says you can build a search and conversational app in two minutes. Goes into Vertex AI and chooses 'Create a new app.' Can specify advanced capabilities to activate enterprise features and the ability to query the model. Announces Jira, Salesforce, and Confluence as sources of enterprise data for these apps, which holds the promise to transform IT and sales processes. Nenshad works on building a driver's license flow from the DMV. Can use simple text outlines to then build conversational apps in just a few minutes. My take: This will indeed accelerate building apps on top of enterprise content sources, along with very simple-to-create narrative journeys the app should support using that knowledge. Now offers citations so that data is grounded.

Nenshad now shows the app working. Says “this is the ability to bring the power of our powerful consumer-grade site directly to your enterprise apps.”

He demonstrates that Vertex AI Search and Conversation can build some amazing apps, with a lot of intelligence in them, in surprisingly short time.

9:56am: Kurian emphasizes that Vertex AI can ensure that “AI is used in everything that you do.”

Duet AI and Google Workspace

Now they're talking about the decade-long journey of developing AI features for Google Workspace.

Over 300 new features for Google Workspace launched recently. They say the product investments are paying off. Claims more and more customers are switching to Google Workspace wholesale, with 10 million paid customers today.

Will enhance Duet AI to go from conversational to contextual. Duet AI will soon look at whatever you are doing and proactively suggest improvements. My take: They'll have to be careful about being interruptive, but Duet AI can even take action on your behalf, if you want, presumably without interrupting work.

Demonstrates building a polished creative brief using Duet AI with text and relevant graphics, along with charts from the correct Google Sheet. 

Shows how people who join a Google Meet late can benefit from the Duet AI "catch up" feature, which shows late arrivals what happened in the meeting before they got there.

And Duet AI can even attend a meeting on your behalf, which Larry Dignan explored today on Constellation Insights. The Attend for Me feature will be very interesting, depending on how interactive it is.

Now directly addressing the AI elephant in the room, Kurian says data never leaves users, departments, or organizations: "Your data is your data. Google Cloud is uniquely positioned to ensure their models don't learn from your data." A very important strategic point; data safety and sovereignty will be a very important capability to guarantee for many organizations, though likely hard to prove.

10:07am: Duet AI in Apigee can make it easy to design, create, and publish your APIs.

Duet AI can refactor code from legacy to modern languages. The screen fills with legacy database C++ code, which they migrate right to the Go language onstage. It indeed takes only seconds to convert this important database connector, and the result uses a cloud-managed Cloud SQL database. Duet AI is trained on Google Cloud-specific products and best practices.

Duet AI can understand the structure and meaning of your enterprise data deeply as well, not just generate code. It will also pull out specific functionality in a company's code base "without compromising quality." Duet AI can modernize and migrate code in a highly contextually aware way. Definitely a major shift in performance, and likely quality, for overhauling and modernizing legacy code bases, especially to become more cloud-ready, even cloud-native.

Kevin Mandia from Mandiant comes out to explore cybersecurity capabilities, including secure-by-design. Cites how Google Cloud's security vulnerability rate is significantly better than two other major hyperscalers'. Again, critical for the CISO to sign off on activating powerful generative AI capabilities from their cloud providers.

Key Google Cloud Next Takeaways for CIOs

So what are the key takeaways from the Google Cloud Next keynote for those in the CIO role:

  • Google has true enterprise-grade AI. Vertex AI and Duet AI should be regarded as leading generative AI capabilities, competitive with the latest large token sizes and able to tackle non-trivial business use cases. Each offering is worthy of serious enterprise-wide consideration for both AI platform and app development as well as end-user AI enablement within Google Workspace and inside third-party business apps.
  • Google Cloud has a unique AI value proposition. Differentiation in Google Cloud's generative AI capabilities for enterprises specifically lies in a) best-in-class foundation model choice with their model garden, including the very latest competitively significant models, b) vital new "grounding" features to ensure AI results are factual and as accurate as possible, c) versatility to securely run AI workloads in virtually all the ways that enterprises require, from on-prem to public multicloud, and d) sophisticated safety, privacy, and IP protection features, including born-cloud cybersecurity, strict Responsible AI compliance, and advanced digital watermarking features.
  • Enterprise data is fundamentally safe with AI in Google Cloud. Critical for enterprise usage, Vertex AI and Duet AI always sandbox enterprise data within an enterprise's virtual private cloud in Google Cloud in a highly secure way. They promise that their models never train permanently on enterprise data; they must, in the moment, analyze enterprise data to answer queries, but always in a private, temporary way. Organizations are still advised to trust but verify this.
  • Google Cloud has a very strong AI ecosystem play. Google Cloud can demonstrate many proof points that it is building one of the leading ecosystems of leading AI partners and technology providers, including those from leading foundation/ large language models and cutting-edge advanced AI hardware. The NVIDIA partnership is particularly important and will ensure Google Cloud can bring the latest and most powerful new AI technologies to bear long-term for their AI customers.
  • Google's AI offerings are designed for rapid, pervasive, and strategic value. With Vertex AI and Duet AI, Google Cloud provided strong evidence it is delivering a major step toward enabling organizations to quickly put AI everywhere they need it. They can do it extraordinarily easily and safely, particularly given the versatile and easy-to-use app generation capabilities demonstrated today.
  • Google Cloud's total AI offering is among best-in-class for the enterprise. In this analyst's assessment, CIOs can be assured Google Cloud's AI capabilities are at the current state of the art in enterprise-grade AI. Google Cloud also demonstrated that it has a vision, a plan, and many strategic partnerships to ensure it will remain so for the foreseeable future (which, given the technology's fast-moving pace, may not extend that far).
  • There's safety in a leading enterprise vendor, and AI has real risks. While CIOs can theoretically achieve some performance and innovation advantages by also leveraging the intense innovation taking place today in open source AI, it would come at significant risk. This is perhaps the most central Google Cloud value proposition: It's usually better to wait for Google Cloud to apply its scale and expertise, including its Responsible AI compliance and its security and privacy reviews, as it continues to seek to offer the most model choice by figuring out how to incorporate the often less-safe public AI technologies into the Vertex AI fold.
Chief Information Officer

Google Cloud Next everything announced: Infusing generative AI everywhere

Google Cloud launched a series of updates, products and services designed to embed artificial intelligence and generative AI throughout its platform via Vertex AI, which is focused on builders, and Duet AI for front-end use cases.

The themes from Google Cloud at Google Cloud Next in San Francisco are use cases beyond IT, making it easier for developers to create with generative AI and large language models (LLMs) and driving usage throughout its services.

Google Cloud CEO Thomas Kurian said the game plan is to enable better framing of models, faster storage and infrastructure and tools to make AI more efficient and distributed all the way to the edge. Kurian added that it's critical to provide services that can address multiple use cases. Kurian also outlined customer wins and partnerships with GE Appliances, MSCI, SAP, Bayer, Culture AM, GM, HCA and others. 

During a keynote, Kurian cited customer wins and projects. A few include:

  • Yahoo is migrating 500 million mailboxes and 550PB of data to Google Cloud.
  • Mahindra used Google Cloud for a traffic surge when it sold 100,000 SUVs in 30 minutes during its online car buying launch. 
  • Fox Sports is using Google Cloud to find clips in natural language as well as its models. 

Google Workspace’s generative AI overhaul: Is ‘Attend for me’ the killer app? | At Google I/O 2023, Google Cloud launches Duet AI, Vertex AI enhancements | Generative AI features starting to launch, next comes potential sticker shock

"There are a lot of solutions being deployed in different ways across industries," said Kurian, who added content creation is a use case, as is training models for specific tasks and automating multiple functions from back office and production to customer service.

Ray Wang, CEO of Constellation Research, said:

“Every enterprise board is asking their technology teams the same question, ‘When will we be taking advantage of Generative AI to create exponential gains or find massive operational efficiencies?’ Customers are looking for vendors that can deliver not just generative AI but overall, AI capabilities. When we talk to senior level executives, they are all trying to figure out if they will have enough data to get to a precision level that their stakeholders will trust. So far, Google has shown that they are taking a much more thoughtful approach from chip to apps on AI than some other competitors.”

Here's a look at everything Google Cloud outlined at Google Cloud Next.

Infrastructure

Google Cloud's big themes on infrastructure are platform-integrated AI assistance, optimizing workloads and building and running container-based applications. To that end, Google Cloud said Duet AI is now available across Cloud Console and IDEs. There's also code generation and chat assistance for developers, operations, security and data and low-code offerings.

The company also outlined the following for container-based applications:

  • Google Kubernetes Engine (GKE) Enterprise.
  • Cloud Run Multi-Container Support.
  • Cloud Tensor Processing Units (TPUs) for GKE.

Google Cloud also launched new versions of its TPUs (TPUv5e) and A3 supercomputer based on Nvidia H100 GPUs, purpose-built virtual machines and new storage products--Parallelstore, Cloud Storage FUSE. Those announcements are designed for customers looking for infrastructure built for AI deployments.

Cloud TPU v5e supports both medium-scale training and inference workloads.

For traditional enterprises, Google outlined a series of new offerings--Titanium, Hyperdisk, Cross-Cloud networking and new integrated services on Google Distributed Cloud.

Databases

Google Cloud announced an AI version of its AlloyDB database. With a series of launches, Google Cloud is looking to leverage its databases to make it easier for enterprises to run data where it is, provide a unified data foundation and create generative AI apps.

How Data Catalogs Will Benefit From and Accelerate Generative AI

The breakdown:

  • AlloyDB AI, which will support vector search, in-database embedding, full integration with Vertex AI and open source generative AI tools.
  • AlloyDB Omni, which is in preview. AlloyDB Omni is a downloadable edition of AlloyDB that can run on multiple clouds such as Google Cloud, AWS and Azure, on-premises and on a laptop. Omni delivers 2x faster transactional performance and up to 100x faster analytical queries compared to standard PostgreSQL, which is becoming more popular in the enterprise.
  • Duet AI in databases to provide assistive database management and automation for migrations.
  • Spanner Data Boost, which will offer workload isolated processing of operational data without impacting production systems.
  • Memorystore for Redis Cluster, an open-source compatible scale-out database.
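The vector search that AlloyDB AI adds can be pictured with a tiny pure-Python sketch. This is my own conceptual illustration, not AlloyDB's actual SQL interface: rows carry an embedding column, and a query embedding is ranked against them by cosine similarity.

```python
import math

# Toy in-database vector search: each "row" has an embedding;
# a query embedding is ranked by cosine similarity.
rows = {
    "alloydb": [0.9, 0.1, 0.0],
    "spanner": [0.2, 0.8, 0.1],
    "redis":   [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query, k=2):
    """Return the k row ids most similar to the query embedding."""
    return sorted(rows, key=lambda r: cosine(query, rows[r]), reverse=True)[:k]

print(search([1.0, 0.0, 0.0]))  # -> ['alloydb', 'spanner']
```

Real deployments add an approximate-nearest-neighbor index so that ranking stays fast at millions of rows, but the similarity ranking itself is exactly this.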

AI

Google Cloud made moves to create an integrated portfolio of open foundational models and tuning options. AWS hit similar themes lately as cloud giants see the ability to curate and offer foundational models as table stakes for enterprises.

Google Cloud outlined the following:

  • Foundation model improvements and expanded tuning for PaLM (text and chat), Imagen, Codey and Text Embeddings.
  • Meta's Llama 2 and Anthropic's Claude 2 will be available in the Vertex AI Model Garden. Google Cloud has more than 100 models in its Vertex AI Model Garden. Also see: Meta's Llama 2 and what that means for GenerativeAI
  • Med-PaLM is now available for healthcare LLM use cases. 
  • Grounding for PaLM API and Vertex AI Search. Grounding was a key theme for Google Cloud executives because enterprises need high quality output when they layer in their data for specific use cases.
  • Vertex AI Search and Conversation general availability, which includes major updates for generative search, image search and prompting with LLMs.
  • Colab Enterprise on Vertex AI, an enterprise-focused notebook experience with collaboration tools.

Analytics

Google Cloud outlined a series of data analytics tools that aim to enable enterprises to interconnect data, bring AI to your data and boost productivity. The themes from Google Cloud rhyme with industry developments from Databricks, a partner, along with MongoDB, Salesforce and a bevy of others.

Databricks Data + AI Summit: LakehouseIQ, Lakehouse AI and everything announced | MongoDB launches Atlas Vector Search, Atlas Stream Processing to enable AI, LLM workloads

Key items include:

  • Open Lakehouse, an AI data platform that aims to work across all data formats, adding Hudi and Delta as well as fully managed Iceberg tables. There will also be cross-cloud joins and views in BigQuery Omni and one dashboard in Dataplex for data and AI artifacts.
  • BigQuery ML, which will bring generative AI to enterprise data by using Vertex AI foundation models directly on data in BigQuery. The BigQuery ML inference engine will run predictions on Vertex AI models and on imports from TensorFlow, XGBoost and ONNX.
  • BQ Embeddings and vector indexes, including support for embeddings and vector indexes in BigQuery and synchronization with Vertex AI Feature Store.
  • BigQuery Studio, which will get a unified interface for data engineering, analytics and machine learning workloads.
  • Duet AI in Looker and BigQuery for analysis, code generation and data workload optimization.

Security Cloud

Google Cloud moved to add Duet AI throughout its security offerings. The breakdown includes:

  • Duet AI in Mandiant Threat Intelligence, which will use generative AI to improve threat assessments and create threat actor profiles.
  • Duet AI in Chronicle Security Operations, which adds expertise to users.
  • Duet AI in Security Command Center to bolster risk assessments and recommend remediation.
  • Mandiant Hunt for Chronicle to combine front line intelligence with data.
  • Platform security advancements for detection, network security and digital sovereignty.

Workspace

The big theme here was making Google Workspace AI-first and embedding Duet AI features throughout the platform.

Among the key items:

  • Duet AI add-on, available Sept. 29. This new SKU will be available on a trial basis with pricing to be detailed later.
  • Duet AI side panel, which will provide generative AI collaboration tools across the Workspace apps.
  • Google Meet with Duet AI to improve visuals, sound and lighting as well as meeting management tools.
  • Duet AI in Chat to provide updates and suggestions across Workspace apps.
  • Zero trust and digital sovereignty controls automatically classify and label data for compliance and encryption.

Constellation Research’s take

Constellation Research analyst Doug Henschen said:

“I’m mostly eager to see the demos and previews move into early trials and general availability. I’m sure early adopters will find out what works and what doesn’t, and they’ll make some unexpected discoveries about gen AI that vendors didn’t foresee. Gen AI certainly has the potential to change analytics and BI as we know it very quickly, but it's time for reality to catch up with the promises.

Duet AI for both BigQuery and Looker is a potential game changer as it promises to make things easier for analysts and business users alike with natural-language-to-SQL generation, auto recommendations based on query context, and chat interactions with your data. Google execs say they are 'radically rebuilding Looker' with capabilities such as auto-generated slide presentations potentially replacing dashboards and promising a 'massive change in how Looker is used, and by whom.' I have yet to see generally available products, but there’s a palpable sense that the gen AI capabilities promised by Google, Microsoft and others may finally make analytics and BI broadly accessible and understandable to business users.

Openness and gen AI advances are the two big themes on the analytics front. To improve openness to third-party sources and clouds, Google BigQuery now supports Delta, Hudi and Iceberg table formats while BigQuery Omni is gaining cross-cloud joins and materialized views. On gen AI -- beyond the addition of Duet AI to both BigQuery and Looker -- Google is integrating BigQuery with Vertex AI, via a new BigQuery Studio interface, so there’s a single experience for data analysts, data scientists and data engineers. The integration between BigQuery and Vertex AI will also expose Vertex Foundation models directly to data in BigQuery for custom model training. Finally, Google is bringing Vertex AI into the Dataplex data catalog to provide unified access and metadata management over all data, models and related assets. This promises to improve data and model access and governance for all constituents and should help to accelerate the development of gen AI capabilities.

Microsoft partnered with OpenAI to accelerate what it could do with AI, but in doing so it picked a fight with a formidable competitor in Google. Google initially had to react to Microsoft’s announcements earlier this year, but the company had a deep well of AI assets and expertise to draw on, and I still see it as the leader among all three clouds in the depth and breadth of its AI capabilities, now including gen AI.”

Data to Decisions Tech Optimization Digital Safety, Privacy & Cybersecurity Innovation & Product-led Growth Future of Work Next-Generation Customer Experience Google Google Cloud SaaS PaaS IaaS Cloud Digital Transformation Disruptive Technology Enterprise IT Enterprise Acceleration Enterprise Software Next Gen Apps IoT Blockchain CRM ERP CCaaS UCaaS Collaboration Enterprise Service AI GenerativeAI ML Machine Learning LLMs Agentic AI Analytics Automation Chief Information Officer Chief Technology Officer Chief Information Security Officer Chief Data Officer Chief Executive Officer Chief AI Officer Chief Analytics Officer Chief Product Officer