Results

HPE sees GreenLake, edge strength in Q3

Hewlett Packard Enterprise saw strong demand in the third quarter for its intelligent edge products and HPE GreenLake.

The company said HPE GreenLake ARR was up 48% in the third quarter compared to a year ago, and Intelligent Edge, also known as HPE Aruba, saw revenue surge 50%.

HPE reported third quarter revenue of $7 billion, up 1% from a year ago, with earnings of 35 cents a share. Non-GAAP earnings for the quarter were 49 cents a share. Wall Street was looking for earnings of 47 cents a share on revenue of $7 billion.

Antonio Neri, CEO of HPE, said "demand improved sequentially across all key business segments, with particular strength in our HPC & AI segment." At HPE Discover, the company outlined a bevy of GreenLake additions. The timing is notable since Constellation Research analyst Dion Hinchcliffe recently published a report outlining how CXOs are moving to private cloud models for cost savings. In a nutshell, public cloud providers haven't been passing on savings, which is encouraging enterprises to move workloads such as AI on premises.

HPE has been able to offset slower compute demand with edge computing offerings.

By the numbers for the third quarter:

  • HPE's intelligent edge revenue was $1.4 billion, up 50% from a year ago.
  • The company's third quarter HPC and AI revenue was up 1% from a year ago to $836 million.
  • Compute revenue was $2.6 billion, down 13% from a year ago, and storage revenue was $1.1 billion, down 5% from a year ago.

As for the outlook, HPE projected fourth quarter revenue between $7.2 billion and $7.5 billion. HPE said non-GAAP fourth quarter earnings will be 48 cents a share to 52 cents a share. For fiscal 2023, HPE is projecting revenue growth to be between 4% and 6% with non-GAAP earnings between $2.11 a share and $2.15 a share.

HP: Demand 'has not improved as quickly as anticipated'

HP reported a mixed third quarter with sales lower than expected. CEO Enrique Lores said, "the external environment has not improved as quickly as anticipated, and we are moderating our expectations as a result."

The company reported third quarter revenue of $13.2 billion, down 10% from a year ago. HP reported earnings of 76 cents a share with non-GAAP earnings of 86 cents a share.

Wall Street was looking for HP to post earnings of 86 cents a share on revenue of $13.4 billion. The PC market malaise is expected to continue into 2024. According to IDC, 2023 PC shipments are forecast to decline 13.7% year over year to 252 million units.

IDC is expecting the PC market to grow again in 2024 but remain below 2019 pre-pandemic levels.

HP's results highlight how the PC and printing markets are bouncing along the bottom. On a conference call with analysts, Lores' message was that HP was controlling what it could. "We remain on track to deliver our cost savings targets," said Lores. 

Looking forward, Lores said that the PC market will get a boost from systems that can run and train AI models locally. These low-latency systems will be "a significant driver of PC refresh" in 2024 and beyond. Lores added that HP will be outlining its innovations for more high-powered systems in the weeks to come.

The company said personal systems revenue was $8.9 billion, down 11% from a year ago. Consumer revenue was down 12% and commercial sales fell 11%. HP delivered a personal systems operating profit of $592 million in the third quarter.

The printing business was also down from a year ago. HP said printing revenue was $4.3 billion, down 7% from a year ago. Consumer revenue was down 28% and commercial sales fell 6% from a year ago.

As for the outlook, HP projected non-GAAP earnings between 85 cents and 97 cents a share. For fiscal 2023, HP projected non-GAAP earnings of $3.23 a share to $3.35 a share. Lores said HP expects pricing pressure and weak demand in China. "We view this moment as an opportunity to double down on the things we control," said Lores, who added HP will look for more cost savings. "We know how to manage the business strategically."

Here's a look at what HP outlined as growth areas.

Google Cloud Next 2023: Perspectives for the CIO

I'm currently at the front row of the analyst area at Moscone North for the opening Google Cloud Next '23 keynote. While our Larry Dignan has covered all the bases with the major announcements this week, with generative AI infused into nearly everything and much more besides, I'll be looking at the announcements with an eye for what they mean for Chief Information Officers (CIOs).

Today AI and cloud are at the very top of the IT agenda. Like never before, enterprise data, intelligence, and massive compute applied in innovative ways will define the very future of our organizations. The announcements here at Google Cloud Next will tell the tale of whether Google Cloud is fully prepared to take its customers on that journey. Let's take a look at what they're saying.

If you wish, skip right to the CIO takeaways for Google Cloud Next '23.

The Google Cloud Next 2023 Keynotes: Blow-by-Blow

9:05am: Sundar Pichai arrives onstage, noting that Thomas Kurian will join him shortly, and opens by talking about digital transformation. Fast-forward four years, he says, and Google Cloud is one of the top enterprise companies in the world.

Sundar speaks about companies wanting a cutting-edge partner in cloud, and now a strategic partner in AI. He says Google has been pursuing an AI-first approach for seven years. The backdrop currently shows the PaLM-E language model, their enterprise LLM.

Sundar Pichai at Google Cloud Next 23

Pichai describes using generative AI to re-imagine the search experience, which they call the Search Generative Experience, or SGE for short. You may even have had Google Search ask you to try SGE lately. He says feedback from Google Search users has been great so far.

Google has been working for years to help companies deploy AI at scale, and he knows that CIOs are in the hot seat to deliver generative AI today. He now cites how General Motors is using conversational AI in OnStar, how HCA Healthcare is using Google's medical domain LLM, Med-PaLM, to provide better care, and how United States Steel is using generative AI to summarize and extract information from repair manuals.

Sundar notes that these early use cases only scratch the surface. He lauds their Vertex AI initiative, then talks about their large inventory of different foundation models, which provides rich choice in how companies use AI to get work done.

He announces that one million people are now using Duet AI in Google Workspace. The team has been implementing rapid changes in the product from the outset, improving fast on the strength of plenty of feedback.

Then Sundar announces that Duet AI within Google Workspace is generally available as of today, August 29th, 2023.

Now he speaks about the important enterprise topics of Responsible AI, security, and safety, as well as Google's AI principles and best practices. Google is working hard to make sure users can more easily identify when generative AI content is being shown online, watermarked if needed, including invisible watermarks that don't alter the content of images and video. Sundar says they are the first cloud provider to enable AI digital watermarking on images.

Bold and responsible is Sundar's message: “I truly believe we are embarking on a golden age of innovation,” building on his previous comment that AI is one of the biggest revolutions in our lifetimes.

Thomas Kurian at Google Cloud Next 23

9:17am: Now he welcomes Thomas Kurian on stage. Kurian starts off, notably, by thanking the many organizations working together with Google to bring generative AI to market.

Kurian says Google Cloud offers optimized environments for AI with Vertex AI. Everyone can innovate with AI, he says, and every org can succeed in adopting AI.

Vertex AI and Duet AI are the big focus areas of Thomas's talk. He cites a number of larger companies using Google Cloud, like Yahoo and Mahindra.

Kurian cites the brand-new GKE Enterprise, which creates a new, more integrated container platform, making it even easier for organizations to adopt the best practices and principles Google has learned from running its global Web services. GKE Enterprise brings powerful new team management features; Kurian says it's now simpler for platform admins to provision fleet resources for multiple teams. He makes no mention of Anthos, but it's part of the same story.

He then announces new availability of a very powerful A3 GPU supercomputer instance powered by NVIDIA H100 GPUs, needed for today's largest AI workloads and for Google to stay at the high end of the generative AI game.

Then Kurian introduces the Titanium Tiered Offload Architecture. Offloads are dedicated hardware that perform essential behind-the-scenes security, networking, and storage functions that were previously performed by the server CPU. This allows the CPU to focus on maximizing performance for customer workloads. 

Then comes the fifth announcement, which is big private cloud news to support the on-prem cloud conversations the industry has been having lately. The new solution is Google Distributed Cloud, for sovereign and regulatory needs (he doesn't mention performance or control, but those are also notable use cases).

Finally, sixth, he tips his hat to Google's new Cross-Cloud Interconnect, for security and speed across “all clouds and SaaS” providers.

Now NVIDIA CEO, Jensen Huang, is up. He says they’re going to put their powerful AI supercomputer, DGX Cloud -- billed as an optimized multi-node AI training factory as-a-service -- right into Google Cloud.

My take: It's very smart to show off their very close relationship with NVIDIA. Kurian says they use NVIDIA GPUs to build their next generation products. Kurian now asks Huang how NVIDIA is doing. Huang: “Breakthrough, cutting-edge computer science. A whole new way of delivering computing. Reinventing software. Doing this with the highly respected JAX and OpenXLA. To push the frontier of large language models. To save time, scale up, save money, and save energy. All required by cutting-edge computing.” He paints a compelling picture of an AI leader who knows where they are going, so it's key for Google Cloud to be seen as a strategic partner.

Huang announces PaxML, built on top of JAX and OpenXLA: a labor of love and a groundbreaking framework for configuring and running machine learning experiments on top of JAX.

He says large teams at NVIDIA are building the next generation of processors and infrastructure. NVIDIA is working hard on the DGX GH200, a massive system that can handle a trillion parameters, based on a revolutionary new "superchip" called Grace Hopper.

Thomas Kurian at Google Cloud Next 23

“Google is a platforms company at the heart of it. We want to attract all the devs who love NVIDIA products to create new products,” says Kurian. It's clear he wants to underscore how close NVIDIA is to Google Cloud, and the special nature of a relationship that brings some of NVIDIA's latest innovations to Google's cloud and AI stacks.

Now Kurian is talking about Vertex AI, which is seeing “very rapid growth,” listing a whole swath of companies building with it. It's interesting how Vertex AI offers a robust, easy-to-choose-from AI “model garden.” My take: Model choice is going to be a very big area for the large AI clouds to compete on, and Google arguably has the edge.

Then Kurian introduces the new PaLM 2 model, a major upgrade introduced in May. It has 3x the token input length, which is fast becoming another key metric that foundation models fiercely compete on: it determines how much input and output the model can process as a whole, and therefore how complex a business or technical task it can handle. This is especially important in many of the highest-impact scientific, engineering, and medical scenarios.

Kurian now touts that Google Cloud supports over 100 AI foundation models, noting they are constantly adding many of the very latest models, and showing these off as a proof point that their AI model garden has one of the richest sets of choices currently available.

He talks about Google Cloud's strict control, protection, and security of enterprise data in VPCs and private data for AI.

Now Nenshad Bardoliwalla, an old fellow Enterprise Irregular, is onstage talking about using first-party models, along with Anthropic's new Claude 2 model and Meta's Llama 2, right in the Vertex AI Model Garden.

9:40am: “If you go into the Vertex Model Garden, Llama 2 is available today.” Huge cheer from the audience.

32K is the token input limit for most leading-edge AI models these days, or about 80 pages of text. Nenshad then inputs the entire Department of Motor Vehicles handbook into a Llama 2 prompt.

Then he asks Llama 2 to summarize all the rules around pedestrians. Llama 2 does this in mere seconds.
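
The 32K-tokens-to-80-pages conversion pencils out with common rules of thumb. Here's a quick back-of-envelope check; the tokens-per-word and words-per-page ratios below are typical heuristics, not figures from the keynote:

```python
# Sanity-check the "32K tokens ~= 80 pages" claim.
# Assumptions (rules of thumb, not keynote figures):
#   - roughly 0.75 English words per token
#   - roughly 300 words per single-spaced page
TOKENS = 32_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

words = TOKENS * WORDS_PER_TOKEN   # 24,000 words
pages = words / WORDS_PER_PAGE     # 80 pages

print(f"{TOKENS:,} tokens ~= {pages:.0f} pages")
```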

Now Nenshad moves on to computer vision. Generates some images of trucks for the DMV web page. Now shows an example of using enterprise brand style and images to do style-tuning on the fly. Now images “will never go out of style.”

He shifts over to Google's new AI digital watermarking. There are real challenges with watermarks, but Google thinks it has handled them successfully in rich media, though not yet in text. He notes that pixel-based watermarking often disturbs the image; Google's approach works right in Vertex AI. He shows watermarked images where the watermark is totally invisible.

Kurian is back on stage talking about search and the new Grounding service in Vertex AI to ensure factual results and reduce hallucinations. My take: Grounding is a very important advance in making generative AI ready for higher-maturity enterprise use cases. There is also an Embeddings service. Vertex AI Conversation also makes it easy to have threaded conversations inside business applications in multiple languages.
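
To make the grounding idea concrete, here is a toy sketch of what a grounding step does: keep only generated claims that can be matched to a retrieved source snippet, and attach the supporting snippet as a citation. The word-overlap scoring and all data below are hypothetical stand-ins for the semantic matching a real grounding service would use:

```python
# Toy sketch of "grounding": filter generated sentences so only claims
# supported by retrieved source snippets survive, each with a citation.
# Word overlap is a crude stand-in for real semantic matching.

def word_overlap(a: str, b: str) -> float:
    """Fraction of words in `a` that also appear in `b` (case-insensitive)."""
    wa = set(a.lower().split())
    wb = set(b.lower().split())
    return len(wa & wb) / len(wa) if wa else 0.0

def ground(sentences, snippets, threshold=0.5):
    """Keep only sentences supported by at least one snippet, with a citation."""
    results = []
    for s in sentences:
        best = max(snippets, key=lambda sn: word_overlap(s, sn))
        if word_overlap(s, best) >= threshold:
            results.append((s, best))   # (claim, supporting citation)
    return results

snippets = ["pedestrians always have the right of way at marked crosswalks"]
answers = ["pedestrians have the right of way at marked crosswalks",
           "drivers may ignore pedestrians at night"]
grounded = ground(answers, snippets)   # the unsupported second claim is dropped
```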

Now Kurian explores the new Vertex AI Extensions to connect models, search, and third-party applications. Nenshad gives the demo. He says you can build a search and conversational app in two minutes: go into Vertex AI, choose ‘Create a new app,’ and specify advanced capabilities to activate enterprise features and the ability to query the model. He announces Jira, Salesforce, and Confluence as sources of enterprise data for these apps, which holds the promise of transforming IT and sales processes. Nenshad works on building a driver's license flow for the DMV, using simple text outlines to build conversational apps in just a few minutes. My take: This will indeed accelerate building apps on top of enterprise content sources, along with very simple-to-create narrative journeys the app should support using that knowledge. The apps now offer citations so that data is grounded.

Nenshad now shows the app working. Says “this is the ability to bring the power of our powerful consumer-grade site directly to your enterprise apps.”

He demonstrates that Vertex AI Search and Conversation can build some amazing apps, with a lot of intelligence in them, in surprisingly short time.

9:56am: Kurian emphasizes that Vertex AI can ensure that “AI is used in everything that you do.”

Duet AI and Google Workspace

Now they're talking about the decade-long journey of developing AI features in Google Workspace.

Over 300 new features for Google Workspace have launched recently, and Google says the product investments are paying off. It claims more and more customers are switching to Google Workspace wholesale, with 10 million paid customers today.

Google will enhance Duet AI to go from conversational to contextual: Duet AI will soon look at whatever you are doing and proactively suggest improvements. My take: Google will have to be careful about being interruptive, but it says Duet AI can even take action on your behalf, if you want, presumably without interrupting your work.

Demonstrates building a polished creative brief using Duet AI with text and relevant graphics, along with charts from the correct Google Sheet. 

He shows a feature for people who join a Google Meet late: the Duet AI “catch up” capability shows late arrivals what happened in the meeting before they got there.

And Duet AI can even attend a meeting on your behalf, which Larry Dignan explored today on Constellation Insights. The “Attend for me” feature will be very interesting, depending on how interactive it is.

Now directly addressing the AI elephant in the room, Kurian says data never leaves users, departments, or organizations: “Your data is your data. Google Cloud is uniquely positioned to ensure their models don't learn from your data.” A very important strategic point; data safety and sovereignty will be a very important capability to guarantee for many organizations, though likely hard to prove.

10:07am: Duet AI in Apigee can make it easy to design, create, and publish your APIs.

Duet AI can refactor code from legacy to modern languages. The screen fills with legacy database C++ code, and they migrate it to the Go language onstage; the result uses Cloud SQL. It indeed takes seconds to convert this important database connector to a cloud-managed database. Duet AI is trained on Google Cloud-specific products and best practices.

Duet AI can understand the structure and meaning of your enterprise data deeply as well, not just generate code. It will also pull out specific functionality in a company's code base “without compromising quality.” Duet AI can modernize and migrate code in a highly contextually aware way. This is definitely a major shift in performance, and likely quality, for overhauling and modernizing legacy code bases, especially to become more cloud-ready, even cloud-native.

Kevin Mandia from Mandiant comes out to explore cybersecurity capabilities, including secure-by-design. He cites how Google Cloud's security vulnerability rates are significantly better than those of two other major hyperscalers. Again, this is critical for the CISO to sign off on activating powerful generative AI capabilities from their cloud providers.

Key Google Cloud Next Takeaways for CIOs

So what are the key takeaways from the Google Cloud Next keynote for those in the CIO role:

  • Google has true enterprise-grade AI. Vertex AI and Duet AI should be regarded as leading generative AI capabilities, competitive with the latest large token sizes needed to tackle non-trivial business use cases. Each offering is worthy of serious enterprise-wide consideration, both for AI platform and app development and for end-user AI enablement within Google Workspace and inside third-party business apps.
  • Google Cloud has a unique AI value proposition. Differentiation in Google Cloud's generative AI capabilities for enterprises specifically lies in a) best-in-class foundation model choice with their model garden, including the very latest competitively significant models, b) vital new “grounding” features to ensure AI results are factual and as accurate as possible, c) versatility to securely run AI workloads in virtually all the ways that enterprises require, from on-prem to public multicloud, and d) sophisticated safety, privacy, and IP protection features, including born-cloud cybersecurity, strict Responsible AI compliance, and advanced digital watermarking features.
  • Enterprise data is fundamentally safe with AI in Google Cloud. Critical for enterprise usage, Vertex AI and Duet AI always sandbox enterprise data within an enterprise's virtual private cloud inside Google Cloud in a highly secure way. Google promises that its models never train themselves permanently on enterprise data; they must, in the moment, analyze enterprise data to answer queries, but always in a private, temporary way. Organizations are still advised to trust but verify, however.
  • Google Cloud has a very strong AI ecosystem play. Google Cloud can demonstrate many proof points that it is building one of the leading ecosystems of leading AI partners and technology providers, including those from leading foundation/ large language models and cutting-edge advanced AI hardware. The NVIDIA partnership is particularly important and will ensure Google Cloud can bring the latest and most powerful new AI technologies to bear long-term for their AI customers.
  • Google's AI offerings are designed for rapid, pervasive, and strategic value. With Vertex AI and Duet AI, Google Cloud provided strong evidence it is delivering a major step towards enabling organizations to quickly put AI everywhere they need it, and to do so extraordinarily easily and safely, particularly given the versatile and easy-to-use app generation capabilities demonstrated today.
  • Google Cloud's total AI offering is among best-in-class for the enterprise. In this analyst's assessment, CIOs can be assured Google Cloud's AI capabilities are the current state of the art in enterprise-grade AI. Google Cloud also demonstrated that it has a vision, a plan, and many strategic partnerships to ensure it will remain so for the foreseeable future (which, however, may not be that far off given the tech's fast-moving pace).
  • There's safety in a leading enterprise vendor, and AI has real risks. While CIOs can theoretically achieve some performance and innovation advantages by also leveraging the intense innovation taking place today in the AI open source space, it comes with significant risk. This is perhaps the most central Google Cloud value proposition: It's usually better to let Google Cloud use its scale and expertise to apply its Responsible AI compliance, security, and privacy reviews as it works to incorporate the often less-safe public AI technologies into the Vertex AI fold while continuing to offer the most model choice.

Google Cloud Next everything announced: Infusing generative AI everywhere

Google Cloud launched a series of updates, products and services designed to embed artificial intelligence and generative AI throughout its platform via Vertex AI, which is focused on builders, and Duet AI for front-end use cases.

The themes from Google Cloud at Google Cloud Next in San Francisco are use cases beyond IT, making it easier for developers to create with generative AI and large language models (LLMs) and driving usage throughout its services.

Google Cloud CEO Thomas Kurian said the game plan is to enable better framing of models, faster storage and infrastructure and tools to make AI more efficient and distributed all the way to the edge. Kurian added that it's critical to provide services that can address multiple use cases. Kurian also outlined customer wins and partnerships with GE Appliances, MSCI, SAP, Bayer, Culture AM, GM, HCA and others. 

During a keynote, Kurian cited customer wins and projects. A few include:

  • Yahoo is migrating 500 million mailboxes and 550PB of data to Google Cloud.
  • Mahindra used Google Cloud for a traffic surge when it sold 100,000 SUVs in 30 minutes during its online car buying launch. 
  • Fox Sports is using Google Cloud to find clips in natural language as well as its models. 

"There are a lot of solutions being deployed in different ways across industries," said Kurian, who added content creation is a use case, as is training models for specific tasks and automating multiple functions from back office and production to customer service.

Ray Wang, CEO of Constellation Research, said:

“Every enterprise board is asking their technology teams the same question, ‘When will we be taking advantage of Generative AI to create exponential gains or find massive operational efficiencies?’ Customers are looking for vendors that can deliver not just generative AI but overall, AI capabilities. When we talk to senior level executives, they are all trying to figure out if they will have enough data to get to a precision level that their stakeholders will trust. So far, Google has shown that they are taking a much more thoughtful approach from chip to apps on AI than some other competitors.”

Here's a look at everything Google Cloud outlined at Google Cloud Next.

Infrastructure

Google Cloud's big themes on infrastructure are platform-integrated AI assistance, optimizing workloads and building and running container-based applications. To that end, Google Cloud said Duet AI is now available across Cloud Console and IDEs. There's also code generation and chat assistance for developers, operations, security and data and low-code offerings.

The company also outlined the following for container-based applications:

  • Google Kubernetes Engine (GKE) Enterprise.
  • Cloud Run Multi-Container Support.
  • Cloud Tensor Processing Units (TPUs) for GKE.

Google Cloud also launched new versions of its TPUs (TPU v5e), an A3 supercomputer based on Nvidia H100 GPUs, purpose-built virtual machines, and new storage products (Parallelstore and Cloud Storage FUSE). Those announcements are designed for customers looking for infrastructure built for AI deployments.

Cloud TPU v5e supports both medium-scale training and inference workloads.

For traditional enterprises, Google outlined a series of new offerings--Titanium, Hyperdisk, Cross-Cloud networking and new integrated services on Google Distributed Cloud.

Databases

Google Cloud announced an AI version of its AlloyDB database. With a series of launches, Google Cloud is looking to leverage its databases to make it easier for enterprises to run data where it is, provide a unified data foundation and create generative AI apps.

The breakdown:

  • AlloyDB AI, which will support vector search, in-database embedding, full integration with Vertex AI and open source generative AI tools.
  • AlloyDB Omni, which is in preview. AlloyDB Omni is a downloadable edition of AlloyDB that can run on multiple clouds such as Google Cloud, AWS and Azure, on premises and on a laptop. Omni delivers 2x faster transactional performance and up to 100x faster analytical queries compared to standard PostgreSQL, which is becoming more popular in the enterprise.
  • Duet AI in databases to provide assistive database management and automation for migrations.
  • Spanner Data Boost, which will offer workload-isolated processing of operational data without impacting production systems.
  • Memorystore for Redis Cluster, an open-source compatible scale-out database.
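
The vector search capability listed above for AlloyDB AI boils down to nearest-neighbor lookup over embedding vectors. Here is a minimal sketch of that core operation, with made-up 3-dimensional vectors standing in for real learned embeddings; production systems use approximate indexes rather than this brute-force scan:

```python
# Toy nearest-neighbor search over embedding vectors, the operation behind
# database vector search features. Vectors here are fabricated for
# illustration; real embeddings come from a model and have hundreds of
# dimensions.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "warranty terms": [0.7, 0.2, 0.3],
}

def search(query_vec, k=2):
    """Return the k document keys most similar to the query vector."""
    return sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)[:k]

top = search([0.85, 0.15, 0.05])   # query vector resembling "refund policy"
```

In a database like AlloyDB the same ranking is expressed as a SQL `ORDER BY` over a distance operator; the principle is identical.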

AI

Google Cloud made moves to create an integrated portfolio of open foundational models and tuning options. AWS hit similar themes lately as cloud giants see the ability to curate and offer foundational models as table stakes for enterprises.

Google Cloud outlined the following:

  • Foundation model improvements and expanded tuning for PaLM (text and chat), Imagen, Codey and Text Embeddings.
  • Meta's Llama 2 and Anthropic's Claude 2 will be available in the Vertex AI Model Garden. Google Cloud has more than 100 models in its Vertex AI Model Garden.
  • Med-PaLM is now available for healthcare LLM use cases. 
  • Grounding for PaLM API and Vertex AI Search. Grounding was a key theme for Google Cloud executives because enterprises need high quality output when they layer in their data for specific use cases.
  • Vertex AI Search and Conversation general availability, which includes major updates for generative search, image search and prompting with LLMs.
  • Colab Enterprise on Vertex AI, an enterprise-focused notebook experience with collaboration tools.

Analytics

Google Cloud outlined a series of data analytics tools that aim to enable enterprises to interconnect data, bring AI to your data and boost productivity. The themes from Google Cloud rhyme with industry developments from Databricks, a partner, along with MongoDB, Salesforce and a bevy of others.

Key items include:

  • Open Lakehouse, an AI data platform that aims to work across all data formats, adding Hudi and Delta support as well as fully managed Iceberg tables. There will also be cross-cloud joins and materialized views in BigQuery Omni and one dashboard in Dataplex for data and AI artifacts.
  • BigQuery ML, which will bring generative AI to enterprise data by using Vertex foundation models directly on data in BigQuery. The BigQuery ML inference engine will run predictions on Vertex models and on imports from TensorFlow, XGBoost and ONNX.
  • BQ Embeddings and vector indexes, including support for embeddings and vector indexes in BigQuery and synchronization with Vertex AI Feature Store.
  • BigQuery Studio, which will get a unified interface for data engineering, analytics and machine learning workloads.
  • Duet AI in Looker and BigQuery for analysis, code generation and data workload optimization.

Security Cloud

Google Cloud moved to add Duet AI throughout its security offerings. The breakdown includes:

  • Duet AI in Mandiant Threat Intelligence, which will use generative AI to improve threat assessments and create threat actor profiles.
  • Duet AI in Chronicle Security Operations, which adds expertise to users.
  • Duet AI in Security Command Center to bolster risk assessments and recommend remediation.
  • Mandiant Hunt for Chronicle to combine front line intelligence with data.
  • Platform security advancements for detection, network security and digital sovereignty.

Workspace

The big theme here was making Google Workspace AI-first and embedding Duet AI features throughout the platform.

Among the key items:

  • Duet AI add-on available Sept. 29. This new SKU will be available on a trial basis with pricing to be detailed later.
  • Duet AI side panel, which will provide generative AI collaboration tools across the Workspace apps.
  • Google Meet with Duet AI to improve visuals, sound and lighting as well as meeting management tools.
  • Duet AI in Chat to provide updates and suggestions across Workspace apps.
  • Zero trust and digital sovereignty controls automatically classify and label data for compliance and encryption.

Constellation Research’s take

Constellation Research analyst Doug Henschen said:

“I’m mostly eager to see the demos and previews move into early trials and general availability. I’m sure early adopters will find out what works and what doesn’t, and they’ll make some unexpected discoveries about gen AI that vendors didn’t foresee. Gen AI certainly has the potential to change analytics and BI as we know it very quickly, but it's time for reality to catch up with the promises.

Duet AI for both BigQuery and Looker is a potential game changer as it promises to make things easier for analysts and business users alike with natural-language-to-SQL generation, auto recommendations based on query context, and chat interactions with your data. Google execs say they are 'radically rebuilding Looker' with capabilities such as auto-generated slide presentations potentially replacing dashboards and promising a 'massive change in how Looker is used, and by whom.' I have yet to see generally available products, but there’s a palpable sense that the gen AI capabilities promised by Google, Microsoft and others may finally make analytics and BI broadly accessible and understandable to business users.

Openness and gen AI advances are the two big themes on the analytics front. To improve openness to third-party sources and clouds, Google BigQuery now supports Delta, Hudi and Iceberg table formats while BigQuery Omni is gaining cross-cloud joins and materialized views. On gen AI -- beyond the addition of Duet AI to both BigQuery and Looker -- Google is integrating BigQuery with Vertex AI, via a new BigQuery Studio interface, so there’s a single experience for data analysts, data scientists and data engineers. The integration between BigQuery and Vertex AI will also expose Vertex Foundation models directly to data in BigQuery for custom model training. Finally, Google is bringing Vertex AI into the Dataplex data catalog to provide unified access and metadata management over all data, models and related assets. This promises to improve data and model access and governance for all constituents and should help to accelerate the development of gen AI capabilities.

Microsoft partnered with OpenAI to accelerate what it could do with AI, but in doing so it picked a fight with a formidable competitor in Google. Google initially had to react to Microsoft’s announcements earlier this year, but the company had a deep well of AI assets and expertise to draw on and I still see it as the leader among all three clouds in the depth and breadth of its AI capabilities, now including gen AI.”

Data to Decisions Tech Optimization Digital Safety, Privacy & Cybersecurity Innovation & Product-led Growth Future of Work Next-Generation Customer Experience Google Google Cloud SaaS PaaS IaaS Cloud Digital Transformation Disruptive Technology Enterprise IT Enterprise Acceleration Enterprise Software Next Gen Apps IoT Blockchain CRM ERP CCaaS UCaaS Collaboration Enterprise Service AI GenerativeAI ML Machine Learning LLMs Agentic AI Analytics Automation Chief Information Officer Chief Technology Officer Chief Information Security Officer Chief Data Officer Chief Executive Officer Chief AI Officer Chief Analytics Officer Chief Product Officer

Google Workspace’s generative AI overhaul: Is ‘Attend for me’ the killer app?


Google previewed a Google Meet feature called "Attend for me" that could turn out to be a killer app for those of us slammed with meetings and begging for a digital stand-in. The feature could also entice enterprises to consider Google Meet and Workspace vs. the likes of Microsoft Teams, Zoom, Cisco's WebEx and a bevy of other collaboration apps.

Attend for me is a feature that leverages the combination of Duet AI and Google Workspace to send a digital representative to a meeting. Your digital version will attend the meeting, take notes and give you a recap with video snippets all via generative AI. Simply put, Google Workspace will attend a meeting, so you don't have to.

Also see: Google Cloud Next everything announced: Infusing generative AI everywhere | At Google I/O 2023, Google Cloud launches Duet AI, Vertex AI enhancements | Generative AI features starting to launch, next comes potential sticker shock

Outlined at Google Cloud Next in San Francisco, Attend for me was part of a Google Workspace breakout. Google Workspace, which has 3 billion active users and 10 million paying subscribers, will give customers a bevy of AI-first productivity tools via Duet AI including:

  • A new Duet AI add-on, available Sept. 29. Customers can sign up for the Duet AI add-on for a no-cost trial to use generative AI across Workspace. Google Cloud didn't disclose what the add-on will cost, but it's a safe bet that it's in the ballpark of $30 per user per month--the going rate for generative AI add-ons.
  • Duet AI side panel, a generative AI sidekick across Workspace apps to help you write, organize, visualize and connect.
  • Google Meet studio features, which will transform low-quality images into studio-worthy images, improve audio, video and lighting, and add real-time teleprompting and automatically translated captions.
  • Duet AI in Google Chat, which will give you proactive suggestions and UI shortcuts across Workspace.
  • Classification and labeling in Drive to prevent data loss and roll out data sovereignty and client-side encryption controls.

Kristina Behr, Vice President of Product Management at Google Workspace, said the Duet AI integration is about giving customers "a coach and a source of information across Workspace."

Behr acknowledged that Google Meet wasn't quite ready for the big pandemic rush to remote work, but said it has since closed feature gaps such as attendance tracking, breakout rooms, full HD, noise cancellation and insights for Meet usage.

Closing feature gaps is one thing, but Attend for me could be a reason to consider Google Meet. A demo during the Google Cloud Next keynote highlighted how Duet AI can create presentations and creative briefs on the fly based on documents in Google Drive.

Details beyond the demo are scarce about Attend for me as a feature. Google executives haven't taken the feature for a spin internally at scale. However, it's clear that Attend for me can fill a need. After all, Attend for me is a lot cheaper than cloning yourself to attend stacked meetings day after day.

Attend for me is also likely to spur corporate culture conversations. Here are thoughts on how Attend for me will play out.

  • Employees who are burdened with too many meetings will likely flock to a feature like Attend for me (and Google Workspace by extension). Attend for me goes beyond generative AI-powered note taking and summarization and gets you out of meetings.
  • Enterprise administrators will have to determine whether to enable Attend for me functionality. Why wouldn't companies go for it? Manager backlash. It's clear that you'll show up in person for a C-suite meeting or team leadership catchup. But what about all those meetings that middle managers call?
  • There will be a pecking order that could cause some corporate strife. What happens when someone calls a meeting of 10 people and 9 of them opt for Attend for me?
  • Attend for me could be a novel feature that becomes the norm quickly. For instance, Zoom is using generative AI to provide real-time summaries of what has been discussed in a meeting. That’s not too far from sending a digital representative. (Also see: Zoom sees enterprise traction, cites Contact Center, Team Chat wins)

The presence of Attend for me as a feature may force people to think through whether a meeting is really required. That outcome alone may be enough return on investment to justify adding Duet AI. You'd pay $30 a month for fewer meetings, wouldn't you?
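The return-on-investment question above lends itself to a quick back-of-the-envelope check. A minimal sketch: the $30 per-user monthly price is the article's ballpark figure, while the $75 fully loaded hourly employee cost and the function itself are illustrative assumptions, not disclosed pricing or methodology.

```python
# Back-of-the-envelope check: how many meeting hours must "Attend for me"
# save per month before a $30/user/month Duet AI add-on pays for itself?
# The $30 figure is the article's ballpark; the hourly rate is assumed.

def breakeven_hours(addon_cost_per_month: float, hourly_rate: float) -> float:
    """Meeting hours that must be saved per month to cover the add-on cost."""
    return addon_cost_per_month / hourly_rate

# Assume a $75/hour fully loaded employee cost (salary plus overhead).
hours_needed = breakeven_hours(30.0, 75.0)
print(f"Break-even: {hours_needed:.1f} hours of meetings saved per month")
```

Under those assumptions, skipping a single half-hour meeting per month more than covers the add-on, which is why the meetings-avoided framing is so compelling.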

Future of Work Data to Decisions Digital Safety, Privacy & Cybersecurity Innovation & Product-led Growth Tech Optimization New C-Suite Marketing Transformation Next-Generation Customer Experience Google Cloud Google SaaS PaaS IaaS Cloud Digital Transformation Disruptive Technology Enterprise IT Enterprise Acceleration Enterprise Software Next Gen Apps IoT Blockchain CRM ERP CCaaS UCaaS Collaboration Enterprise Service AI GenerativeAI ML Machine Learning LLMs Agentic AI Analytics Automation Chief Information Officer Chief Technology Officer Chief Information Security Officer Chief Data Officer Chief Executive Officer Chief AI Officer Chief Analytics Officer Chief Product Officer

OpenAI launches ChatGPT Enterprise


OpenAI is launching an enterprise version of ChatGPT in a move that competes with partner Microsoft.

In a blog post, OpenAI said it will launch ChatGPT Enterprise with the aim of signing up as many corporate customers as possible. ChatGPT Enterprise will provide faster processing and security and privacy controls for corporate data.

The move is notable given Microsoft has laid out plans to commercialize OpenAI's ChatGPT. OpenAI has plans to be an ingredient brand while growing out its own customer base. Salesforce is another example of a company that has built out similar services and added GPT to its branding mix.

Pricing for ChatGPT Enterprise wasn't disclosed; OpenAI offers a contact sales button without detail. OpenAI says ChatGPT has been adopted in more than 80% of Fortune 500 companies. These early users would be under individual ChatGPT Plus plans, which run $20 a month, or free plans.

To woo enterprises, OpenAI made the following points about ChatGPT Enterprise:

  • OpenAI won't train models on your business data, conversations or usage.
  • ChatGPT Enterprise is SOC 2 compliant, and conversations are encrypted.
  • The service provides a new admin console to manage users.
  • ChatGPT Enterprise includes unlimited usage of GPT-4, 32k token context windows, processing of larger inputs and files, and advanced data analysis.
  • Custom ChatGPT instances are available with templates for chat and workflows. Pricing will also include free credits to use OpenAI's APIs.

OpenAI added that it has more features on its roadmap, including tools to customize models with company data, a self-serve ChatGPT Business offering for small teams, and power tools and functions for specific roles within a company.

Why enterprises will want Nvidia competition soon

Why enterprises will want Nvidia competition soon

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly.

What's good for Nvidia shareholders may give enterprise technology buyers pause. Nvidia needs competition.

Following Nvidia's blowout second quarter results, surging margins and crazy demand, it's clear that the company has little competition and a lot of pricing power. Nvidia's second quarter gross margin was 70.1%, up from 43.5% a year ago. Third quarter gross margins will creep up to 71.5% to 72.5%.

The competition won't begin to show up until the fourth quarter when AMD launches its AMD Instinct MI300A and MI300X GPUs and AWS re:Invent likely features new Trainium and Inferentia chips.

Sure, there are a few enterprises buying Nvidia systems to train models on-premises, but most will buy computing power through the cloud. Either way, Nvidia is getting paid enough that it will double its fourth quarter revenue in two quarters, assuming it hits its third quarter sales guidance of $16 billion.

Here are the moving parts enterprises need to consider as they explore and scale generative AI efforts.

  • Model training isn't cheap and requires GPUs. Nvidia has pricing power on the infrastructure side.
  • Today, that pricing power is absorbed because hyperscalers are building out and consumer Internet companies and enterprises see productivity returns. More than 50% of Nvidia’s data center revenue in the second quarter derived from cloud service providers.
  • First movers will pay up for competitive advantage.
  • Nvidia has worked on its supply chain network for a decade or more and now appears to be able to meet demand.
  • Enterprises--especially those building their own infrastructure--will need to determine whether to go with Nvidia or consider options that will be available in a few months.

To hear Nvidia CEO Jensen Huang tell it, there's only one choice. Not surprisingly, that choice is Nvidia. "The world has something along the lines of about $1 trillion worth of data centers installed, in the cloud, in enterprise and otherwise. And that $1 trillion of data centers is in the process of transitioning into accelerated computing and generative AI. We're seeing two simultaneous platform shifts at the same time," said Huang.

Huang continued:

"What makes NVIDIA special are: one, architecture. NVIDIA accelerates everything from data processing, training, inference, every AI model, real-time speech to computer vision, and giant recommenders to vector databases. The performance and versatility of our architecture translates to the lowest data center TCO and best energy efficiency."

It's hard to argue with Huang now.

Going forward, the question will be whether everyone will need to train models with Nvidia or whether something else will do. Amazon CEO Andy Jassy will play the Nvidia game, but has also noted there will be workloads for Amazon's own processors.

"Customers are excited by Amazon EC2 P5 instances powered by NVIDIA H100 GPUs to train large models and develop generative AI applications. However, to date, there's only been one viable option in the market for everybody and supply has been scarce,” said Jassy on Amazon's earnings conference call. “We're optimistic that a lot of large language model training and inference will be run on AWS' Trainium and Inferentia chips in the future."

You can also expect AMD to pick up training workloads. AMD's value to the industry in x86 processors was being a counterweight to Intel. It will play the same role in GPUs and be formidable. (Also see: AMD makes its case for generative AI workloads vs. Nvidia)

AMD Instinct MI300A and MI300X GPUs are sampling to HPC, Cloud and AI customers with production in the fourth quarter. "I think there will be multiple winners. And we will be first to say that there are multiple winners," said AMD CEO Lisa Su.

Data to Decisions Tech Optimization Innovation & Product-led Growth Future of Work Next-Generation Customer Experience Digital Safety, Privacy & Cybersecurity nvidia Big Data AI GenerativeAI ML Machine Learning LLMs Agentic AI Analytics Automation Disruptive Technology Chief Information Officer Chief Technology Officer Chief Information Security Officer Chief Data Officer Chief Executive Officer Chief AI Officer Chief Analytics Officer Chief Product Officer

Workday: Strong Q2, plans generative AI launches at Workday Rising


Workday raised its fiscal 2024 subscription revenue outlook as the company said it has more than 65 million users under contract. The company also said it will preview generative AI tools at Workday Rising.

The HCM and financials software company reported fiscal second quarter net income of 30 cents a share on revenue of $1.79 billion, up 16.3% from a year ago. Non-GAAP earnings were $1.43 a share in the second quarter.

Wall Street was expecting Workday to report second quarter non-GAAP earnings of $1.26 a share on revenue of $1.59 billion.

Workday's operating cash flow in the second quarter was $425.3 million, up from $114.4 million a year ago. 

Key points:

  • Workday said it has more than 5,000 core Workday Financial Management and Workday HCM customers.
  • Retail and hospitality joined financial services as industries generating $1 billion in annual recurring revenue for the company.
  • Workday ended the quarter with cash, cash equivalents and marketable securities of $6.66 billion.

As for the outlook, Workday raised its fiscal 2024 subscription revenue guidance to $6.57 billion to $6.59 billion, up about 18% from the previous year. Third quarter subscription revenue will be about $1.68 billion. The company added that it is raising its non-GAAP operating margin target to 23.5%.

On a conference call, Carl Eschenbach, co-CEO of Workday, said the company is seeing deal scrutiny and has been focused on building out its management bench. Workday appointed Emma Chalwin, a former Salesforce executive, as chief marketing officer and last quarter hired Zane Rowe, formerly of VMware, as CFO.

Eschenbach also said that Workday has been focused on adding Financials customers and expanding its footprint. The US represents 75% of Workday's revenue. The company has been honing its sales, partner and go-to-market ground game.

Workday co-founder and co-CEO Aneel Bhusri said the company has more than 3,000 customers sharing data with its machine learning models. Bhusri added that Workday will outline generative AI developments next month at Workday Rising.

Bhusri said Workday will preview copilot use cases as well as content generation and document understanding. "We believe that the enhanced AI and generative AI will enhance our win rates," he said. 

The co-CEO added that Workday plans to "offer generous usage based entitlements" for customers that opt in to generative AI functionality. That approach could resonate with enterprise buyers, who are about to get hit with a bevy of generative AI upsells and add-ons.

Bhusri said Workday isn't necessarily looking to charge for generative AI add-ons. Why? Because customers are sharing anonymized data in return for insights. Workday can use that data to train models. 

"The data is valuable to train LLMs and domain specific LLMs. We turn around and make our products more competitive," said Bhusri, who added that Workday is likely to create new products based on models. 

Constellation Research CEO Ray Wang said Workday's approach to data and generative AI makes sense. 

"Customers know that their ERP and HR systems securely store a lot of the training data that will power future AI innovations. It's good to see that Workday recognizes that this is the customer's data being used to create derivative insights and allow customers to remain loyal to Workday."

Future of Work Data to Decisions Innovation & Product-led Growth Tech Optimization Next-Generation Customer Experience Digital Safety, Privacy & Cybersecurity workday ML Machine Learning LLMs Agentic AI Generative AI AI Analytics Automation business Marketing SaaS PaaS IaaS Digital Transformation Disruptive Technology Enterprise IT Enterprise Acceleration Enterprise Software Next Gen Apps IoT Blockchain CRM ERP finance Healthcare Customer Service Content Management Collaboration Chief People Officer Chief Information Officer Chief Executive Officer Chief Technology Officer Chief AI Officer Chief Data Officer Chief Analytics Officer Chief Information Security Officer Chief Product Officer

Palo Alto Networks: Takeaways from a Friday afternoon treatise

Palo Alto Networks held a fourth quarter earnings conference call on a Friday after the market close and wound up outlining a three- to five-year vision. Give Palo Alto Networks CEO Nikesh Arora props for theatrics and pep.

The earnings turned out swell--especially when you consider that investors assume the worst when a company delivers news on a Friday afternoon. On Aug. 18, shortly after 4:30 p.m. EDT, Arora kicked off the call.

"We apologize to people who are inconvenienced but as we had mentioned in our press release, we wanted to give ample time to analysts to have one-on-one calls with us over the weekend, and we have a sales conference that kicks off on Sunday. We want to make sure all our information was disclosed out there. So again, we apologize for the unique Friday afternoon earnings call. But clearly, we have enjoyed the attention."

We'll skip the actual fourth quarter results, but they were better than expected. Instead, it's worth focusing on the bevy of big-picture items that made tuning in past Friday Happy Hour worthwhile.

Cybersecurity vendors are under scrutiny. For years, security companies have had blank checks from enterprises. Who wants to be seen skimping on security? Higher interest rates have changed the dynamics.

"CFOs are scrutinizing deals, which means you have to be better prepared to answer their question and show the business value that you bring to them with your cybersecurity products," said Arora.

And that business value is what exactly? For Palo Alto Networks it's having a platform approach that can consolidate vendors, contracts, licenses and maintenance. Arora said:

"We can usually walk in and say, here, you can consolidate the following five, it doesn't cost you anymore, but you get a better outcome, and you get a modernized security infrastructure. So from that perspective, that strategy of ours is resonating. But there is more scrutiny. There are deals that go through multiple levels. There are some that get pushed. There are some that get canceled."

Cybersecurity total addressable market expands. Arora argued that commerce, digital transformation, cloud computing, IoT, AI and every new advance cooked up only creates more threat vectors. Securing these advances is going to require integration. "We at Palo Alto Networks as well as, to some degree, different players in the industry, started to look at the various parts of these markets and say, like, these things need to start getting integrated because you can't deliver great security outcomes without these things getting integrated," he said.

Platforms matter. Palo Alto Networks isn't alone with its platform approach to security. Across the large technology vendors, it's all about platform. Platforms win because customers don't have time to integrate everything. Arora said:

"It seems obvious now, but five years ago, we had customers who had more cybersecurity vendors than they had IT vendors. And it was a customer's responsibility to take these vendors, deploy them across their infrastructure, make them work together to deliver security outcomes."

Bleeding edge because there's no choice. Arora said "you have to stay at the bleeding edge because you don't need your customers to be at the bleeding edge." "It is our responsibility as a security company to make sure we take all the innovation, we distill it, we make it work in an integrated fashion and deliver it to our customers at the fastest pace possible because the bad actors are not waiting," he added.

Evergreen innovation. This concept from Palo Alto Networks is worth cribbing for the enterprise technology supply chain. "We want to maintain this notion of being an evergreen innovation company," said Arora. "You always have to be scanning the market, understanding where the world is going, where technology is going to see what potential security risks are going to get created in the adoption of that technology, in the deployment of that technology to make sure we're ahead of the curve, and we start delivering security by design."

AI can't be wrong in security. Palo Alto Networks will focus on "precision AI" that can't be wrong. "We have to build a lot of our own models. We have to train them. We have to collect first-party data. We have to understand the data. Today, we collect approximately 5 petabytes of data. Yes, 5 petabytes of data on behalf of our customers and analyze it for them to make sure we can separate signal from noise and take that signal and go create security outcomes for our customers," said Arora.

The takeaway: Palo Alto Networks plans to embed precision AI throughout its entire product line, whether that's copilots, UI enhancements or insights.

Digital Safety, Privacy & Cybersecurity Security Zero Trust Chief Information Officer Chief Privacy Officer Chief Information Security Officer

Enterprise Tech News, 2023 ShortList Recap | ConstellationTV Episode 64

On ConstellationTV episode 64, co-hosts and analysts Liz Miller and Holger Mueller talk tech news trends, then relay the releases of their new and updated Q3 ShortLists naming the leading vendor solutions in a wide range of coverage areas.

00:00 - Introduction
01:28 - Tech News Updates - cloud spending, AI trends and more
13:03 - ShortList Updates from Liz Miller
18:11 - ShortList Updates from Holger Mueller
24:37 - Bloopers

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. The show airs live at 9:00 a.m. PT/12:00 p.m. ET every other Wednesday.

Subscribe to our YouTube Channel: https://youtube.com/@UCs0vwq63PfnDZp1fMO0uDJg

Watch this episode of ConstellationTV: https://www.youtube.com/embed/8xquU-0zbv0?si=nL0CkofeJ-TUuxHQ