
Google Cloud Next everything announced: Infusing generative AI everywhere

Google Cloud launched a series of updates, products and services designed to embed artificial intelligence and generative AI throughout its platform via Vertex AI, which is focused on builders, and Duet AI for front-end use cases.

The themes from Google Cloud Next in San Francisco were extending use cases beyond IT, making it easier for developers to build with generative AI and large language models (LLMs), and driving usage throughout its services.

Google Cloud CEO Thomas Kurian said the game plan is to enable better framing of models, faster storage and infrastructure, and tools to make AI more efficient and distributed all the way to the edge. Kurian added that it's critical to provide services that can address multiple use cases. He also outlined customer wins and partnerships with GE Appliances, MSCI, SAP, Bayer, Culture Amp, GM, HCA and others.

During a keynote, Kurian cited customer wins and projects. A few include:

  • Yahoo is migrating 500 million mailboxes and 550PB of data to Google Cloud.
  • Mahindra used Google Cloud to handle a traffic surge when it sold 100,000 SUVs in 30 minutes during its online car-buying launch.
  • Fox Sports is using Google Cloud to find video clips via natural language queries as well as to build its models.


"There are a lot of solutions being deployed in different ways across industries," said Kurian, who added content creation is a use case, as is training models for specific tasks and automating multiple functions from back office and production to customer service.

Ray Wang, CEO of Constellation Research, said:

“Every enterprise board is asking their technology teams the same question, ‘When will we be taking advantage of Generative AI to create exponential gains or find massive operational efficiencies?’ Customers are looking for vendors that can deliver not just generative AI but overall, AI capabilities. When we talk to senior level executives, they are all trying to figure out if they will have enough data to get to a precision level that their stakeholders will trust. So far, Google has shown that they are taking a much more thoughtful approach from chip to apps on AI than some other competitors.”

Here's a look at everything Google Cloud outlined at Google Cloud Next.

Infrastructure

Google Cloud's big themes on infrastructure are platform-integrated AI assistance, optimizing workloads, and building and running container-based applications. To that end, Google Cloud said Duet AI is now available across the Cloud Console and IDEs. There's also code generation and chat assistance for developers, operations, security, data and low-code offerings.

The company also outlined the following for container-based applications:

  • Google Kubernetes Engine (GKE) Enterprise.
  • Cloud Run Multi-Container Support.
  • Cloud Tensor Processing Units (TPUs) for GKE.

Google Cloud also launched new versions of its TPUs (TPUv5e) and A3 supercomputer based on Nvidia H100 GPUs, purpose-built virtual machines and new storage products--Parallelstore, Cloud Storage FUSE. Those announcements are designed for customers looking for infrastructure built for AI deployments.

Cloud TPU v5e supports both medium-scale training and inference workloads.

For traditional enterprises, Google outlined a series of new offerings--Titanium, Hyperdisk, Cross-Cloud networking and new integrated services on Google Distributed Cloud.

Databases

Google Cloud announced an AI-enabled version of its AlloyDB database. With a series of launches, Google Cloud is looking to leverage its databases to make it easier for enterprises to run data where it is, provide a unified data foundation and create generative AI apps.


The breakdown:

  • AlloyDB AI, which will support vector search, in-database embedding, full integration with Vertex AI and open source generative AI tools.
  • AlloyDB Omni, which is in preview. AlloyDB Omni is a downloadable edition of AlloyDB that can run on multiple clouds such as Google Cloud, AWS and Azure, as well as on-premises and on a laptop. Omni delivers 2x faster transactional performance and up to 100x faster analytical queries compared to standard PostgreSQL, which is becoming more popular in the enterprise.
  • Duet AI in databases to provide assistive database management and automation for migrations.
  • Spanner Data Boost, which will offer workload-isolated processing of operational data without impacting production systems.
  • Memorystore for Redis Cluster, an open-source compatible scale-out database.
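Conceptually, the vector search that AlloyDB AI adds works by storing an embedding vector next to each row and answering queries via nearest-neighbor similarity. A minimal Python sketch of the idea--toy vectors and cosine similarity standing in for AlloyDB's actual embedding and indexing machinery:

```python
import math

# Toy "table": each row pairs a document with a pre-computed embedding.
# In AlloyDB AI the embeddings would live in a vector column; these values
# are made up for illustration.
rows = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.8, 0.1]),
    ("warranty terms", [0.7, 0.2, 0.1]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vector_search(query_vec, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(rows, key=lambda r: cosine(query_vec, r[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# A query embedding close to "refund policy" ranks that row first.
print(vector_search([0.85, 0.15, 0.0]))  # ['refund policy', 'warranty terms']
```

In practice the embeddings would come from a model such as those in Vertex AI, and the similarity search would run inside the database rather than in application code.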

AI

Google Cloud made moves to create an integrated portfolio of open foundation models and tuning options. AWS has hit similar themes lately as cloud giants see the ability to curate and offer foundation models as table stakes for enterprises.

Google Cloud outlined the following:

  • Foundation model improvements and expanded tuning for PaLM (text and chat), Imagen, Codey and Text Embeddings.
  • Meta's Llama 2 and Anthropic's Claude 2 will be available in the Vertex AI Model Garden, which now includes more than 100 models.
  • Med-PaLM is now available for healthcare LLM use cases. 
  • Grounding for PaLM API and Vertex AI Search. Grounding was a key theme for Google Cloud executives because enterprises need high quality output when they layer in their data for specific use cases.
  • Vertex AI Search and Conversation general availability, which includes major updates for generative search, image search and prompting with LLMs.
  • Colab Enterprise on Vertex AI, an enterprise-focused notebook experience with collaboration tools.
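Grounding, in this context, means constraining a model's answer to retrieved enterprise data rather than whatever the base model happens to recall. A hypothetical sketch of the pattern in Python--the document store, retrieval logic and prompt template are invented for illustration, not Vertex AI's API:

```python
# Toy grounding flow: retrieve relevant snippets, then build a prompt that
# instructs the model to answer only from those snippets.
documents = {
    "hr-001": "Employees accrue 1.5 vacation days per month.",
    "hr-002": "Remote work requires manager approval.",
    "it-001": "Password resets are self-service via the IT portal.",
}

def retrieve(question, corpus):
    """Naive keyword-overlap retrieval standing in for vector search."""
    terms = set(question.lower().split())
    scored = [(len(terms & set(text.lower().split())), doc_id, text)
              for doc_id, text in corpus.items()]
    best = max(scored)  # highest overlap wins
    return best[1], best[2]

def grounded_prompt(question):
    """Assemble a prompt that pins the answer to retrieved context."""
    doc_id, context = retrieve(question, documents)
    return (f"Answer using ONLY the context below. Cite the source id.\n"
            f"[{doc_id}] {context}\n"
            f"Question: {question}")

prompt = grounded_prompt("How many vacation days do employees accrue?")
print(prompt.splitlines()[1])  # [hr-001] Employees accrue 1.5 vacation days per month.
```

The point of grounding is exactly what the retrieved line shows: the model's input carries the enterprise's own data, so high-quality output depends on retrieval rather than the base model's memory.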

Analytics

Google Cloud outlined a series of data analytics tools that aim to enable enterprises to interconnect data, bring AI to their data and boost productivity. The themes from Google Cloud rhyme with industry developments from Databricks, a partner, along with MongoDB, Salesforce and a bevy of others.


Key items include:

  • Open Lakehouse, an AI data platform that aims to work across all data formats, adding Hudi and Delta as well as fully managed Iceberg tables. There will also be cross-cloud joins and views in BigQuery Omni and a single view in Dataplex of data and AI artifacts.
  • BigQuery ML, which will bring generative AI to enterprise data by using Vertex AI foundation models directly on data in BigQuery. The BigQuery ML inference engine will run predictions on Vertex AI models as well as imports from TensorFlow, XGBoost and ONNX.
  • BigQuery embeddings and vector indexes, including support for embeddings and vector indexes in BigQuery and synchronization with Vertex AI Feature Store.
  • BigQuery Studio, which will get a unified interface for data engineering, analytics and machine learning workloads.
  • Duet AI in Looker and BigQuery for analysis, code generation and data workload optimization.

Security Cloud

Google Cloud moved to add Duet AI throughout its security offerings. The breakdown includes:

  • Duet AI in Mandiant Threat Intelligence, which will use generative AI to improve threat assessments and create threat actor profiles.
  • Duet AI in Chronicle Security Operations, which adds expertise to users.
  • Duet AI in Security Command Center to bolster risk assessments and recommend remediation.
  • Mandiant Hunt for Chronicle to combine front line intelligence with data.
  • Platform security advancements for detection, network security and digital sovereignty.

Workspace

The big theme here was making Google Workspace AI-first and embedding Duet AI features throughout the platform.

Among the key items:

  • Duet AI add-on available Sept. 29. This new SKU will be available on a trial basis with pricing to be detailed later.
  • Duet AI side panel, which will provide generative AI collaboration tools across the Workspace apps.
  • Google Meet with Duet AI to improve visuals, sound and lighting as well as meeting management tools.
  • Duet AI in Chat to provide updates and suggestions across Workspace apps.
  • Zero trust and digital sovereignty controls that automatically classify and label data for compliance and encryption.

Constellation Research’s take

Constellation Research analyst Doug Henschen said:

“I’m mostly eager to see the demos and previews move into early trials and general availability. I’m sure early adopters will find out what works and what doesn’t, and they make some unexpected discoveries about gen AI that vendors didn’t foresee. Gen AI certainly has the potential to change analytics and BI as we know it very quickly, but it's time for reality to catch up with the promises.

Duet AI for both BigQuery and Looker is a potential game changer as it promises to make things easier for analysts and business users alike with natural-language-to-SQL generation, auto recommendations based on query context, and chat interactions with your data. Google execs say they are 'radically rebuilding Looker' with capabilities such as auto-generated slide presentations potentially replacing dashboards and promising a 'massive change in how Looker is used, and by whom.' I have yet to see generally available products, but there’s a palpable sense that the gen AI capabilities promised by Google, Microsoft and others may finally make analytics and BI broadly accessible and understandable to business users.

Openness and gen AI advances are the two big themes on the analytics front. To improve openness to third-party sources and clouds, Google BigQuery now supports Delta, Hudi and Iceberg table formats while Big Query Omni is gaining cross-cloud joins and materialized views. On gen AI -- beyond the addition of Duet AI to both BigQuery and Looker -- Google is integrating Big Query with Vertex AI, via a new BigQuery Studio interface, so there’s a single experience for data analysts, data scientists and data engineers. The integration between BigQuery and Vertex AI will also expose Vertex Foundation models directly to data in BigQuery for custom model training. Finally, Google is bringing Vertex AI into the Dataplex data catalog to provide unified access and metadata management over all data, models and related assets. This promises to improve data and model access and governance for all constituents and should help to accelerate the development of gen AI capabilities.

Microsoft partnered with Open AI to accelerate what it could do with AI, but in doing so it picked a fight with a formidable competitor in Google. Google initially had to react to Microsoft’s announcements earlier this year, but the company had a deep well of AI assets and expertise to draw on and I still see it as the leader among all three clouds in the depth and breadth of its AI capabilities, now including gen AI.”


Google Workspace’s generative AI overhaul: Is ‘Attend for me’ the killer app?

Google previewed a Google Meet feature called "Attend for me" that could turn out to be a killer app for those of us slammed with meetings and begging for a digital stand-in. The feature could also entice enterprises to consider Google Meet and Workspace vs. the likes of Microsoft Teams, Zoom, Cisco's WebEx and a bevy of other collaboration apps.

Attend for me is a feature that leverages the combination of Duet AI and Google Workspace to send a digital representative to a meeting. Your digital version will attend the meeting, take notes and give you a recap with video snippets all via generative AI. Simply put, Google Workspace will attend a meeting, so you don't have to.


Outlined at Google Cloud Next in San Francisco, Attend for me was part of a Google Workspace breakout session. Google Workspace, which has 3 billion active users and 10 million paying subscribers, will give customers a bevy of AI-first productivity tools via Duet AI, including:

  • A new Duet AI add on available Sept. 29. Customers can sign up for the Duet AI add-on for a no-cost trial to use generative AI across Workspace. Google Cloud didn't disclose what this Duet AI add-on would cost, but it's a safe bet that it's in the ballpark of $30 per user per month--the going rate for generative AI add-ons.
  • Duet AI side panel, a generative AI sidekick across Workspace apps to help you write, organize, visualize and connect.
  • Google Meet studio features, which will transform low-quality video into studio-worthy visuals, improve audio and lighting, and add real-time teleprompting and automatically translated captions.
  • Duet AI in Google Chat, which will give you proactive suggestions and UI shortcuts across Workspace.
  • Classification and labeling in Drive to prevent data loss and roll out data sovereignty and client-side encryption controls.

Kristina Behr, Vice President of Product Management at Google Workspace, said the Duet AI integration is about giving customers "a coach and a source of information across Workspace."

Behr acknowledged that Google Meet wasn't quite ready for the big pandemic rush to remote work, but has closed feature gaps such as attendance tracking, breakout rooms, full HD, noise cancellation and insights for Meet usage.

Closing feature gaps is one thing but Attend for me could be a reason to consider Google Meet. A demo during the Google Cloud Next keynote highlighted how Duet AI can create presentations and creative briefs on the fly based on documents in Google Drive.

Details beyond the demo are scarce about Attend for me as a feature. Google executives haven't taken the feature for a spin internally at scale. However, it's clear that Attend for me can fill a need. After all, Attend for me is a lot cheaper than cloning yourself to attend stacked meetings day after day.

Attend for me is also likely to spur corporate culture conversations. Here are thoughts on how Attend for me will play out.

  • Employees who are burdened with too many meetings will likely flock to a feature like Attend for me (and Google Workspace by extension). Attend for me goes beyond generative AI-powered note taking and summarization and gets you out of meetings.
  • Enterprise administrators will have to determine whether to enable Attend for me functionality. Why wouldn't companies go for it? Manager backlash. It's clear that you'll show up in person for a C-suite meeting or team leadership catchup. But what about all those meetings that middle managers convene?
  • There will be a pecking order that could cause some corporate strife. What happens when someone calls a meeting of 10 people and 9 of them opt for Attend for me?
  • Attend for me could be a novel feature that becomes the norm quickly. For instance, Zoom is using generative AI to provide real-time summaries of what has been discussed in a meeting. That’s not too far from sending a digital representative.

The presence of Attend for me as a feature may force people to think through whether a meeting is really required. That outcome alone may be enough return on investment to justify adding Duet AI. You'd pay $30 a month for fewer meetings, wouldn't you?
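The back-of-envelope math on that $30 price is straightforward. A minimal sketch, assuming (hypothetically) a fully loaded employee cost of $60 an hour; the figures are illustrative, not Google's:

```python
# Hypothetical break-even calculation for a $30/user/month AI add-on.
# The loaded hourly cost is an assumption for illustration, not a Google figure.
price_per_month = 30.0     # assumed add-on price per user per month
loaded_hourly_cost = 60.0  # assumed fully loaded employee cost per hour

breakeven_hours = price_per_month / loaded_hourly_cost
print(f"Breaks even after {breakeven_hours:.1f} hours saved per month")
# One skipped 30-minute meeting per month already covers the cost.
```

Under those assumptions, a single avoided meeting pays for the seat, which is why the "fewer meetings" pitch lands.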


OpenAI launches ChatGPT Enterprise

OpenAI is launching an enterprise version of ChatGPT in a move that competes with partner Microsoft.

In a blog post, OpenAI said it will launch ChatGPT Enterprise with the aim of signing up as many corporate customers as possible. ChatGPT Enterprise will provide faster processing as well as security and privacy controls for corporate data.

The move is notable given Microsoft has laid out plans to commercialize OpenAI's ChatGPT. OpenAI has plans to be an ingredient brand while growing out its own customer base. Salesforce is another example of a company that has built out similar services and added GPT to its branding mix.

Pricing for ChatGPT Enterprise wasn't disclosed; OpenAI has a contact sales button without detail. OpenAI argues that ChatGPT has been adopted in more than 80% of Fortune 500 companies. Those early users would be on individual ChatGPT Plus plans, which run $20 a month, or free plans.

To woo enterprises, OpenAI made the following points about ChatGPT Enterprise:

  • OpenAI won't train models on your business data, conversations or usage.
  • ChatGPT Enterprise is SOC 2 compliant, and conversations are encrypted.
  • The service provides a new admin console to manage users.
  • ChatGPT Enterprise includes unlimited usage of GPT-4, 32k token context windows, processing of larger inputs and files, and advanced data analysis.
  • Custom ChatGPT instances are available with templates for chat and workflows. Pricing will also include free credits to use OpenAI's APIs.
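To put the 32k context in perspective: at roughly four characters per token--a common rule of thumb, not an OpenAI specification--a 32k-token window holds on the order of 125,000 characters of input. A quick estimator:

```python
# Rough token budgeting using the ~4 characters/token heuristic.
# Real tokenization varies by content; this is a ballpark, not OpenAI's math.
CONTEXT_TOKENS = 32_000
CHARS_PER_TOKEN = 4  # common rule of thumb

def fits_in_context(text, reserve_for_reply=1_000):
    """Estimate whether `text` fits the window, reserving tokens for the reply."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_TOKENS - reserve_for_reply

print(fits_in_context("x" * 100_000))  # roughly 25k tokens: fits
print(fits_in_context("x" * 200_000))  # roughly 50k tokens: does not fit
```

That headroom is what makes "processing of larger inputs and files" practical--long contracts or reports can be passed in whole rather than chunked.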

OpenAI added that it has more features on the roadmap, including customization tools that use company data, a self-serve ChatGPT Business offering for small teams, and more powerful tools and functions for roles within a company.

Why enterprises will want Nvidia competition soon

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly.

What's good for Nvidia shareholders may give enterprise technology buyers pause. Nvidia needs competition.

Following Nvidia's blowout second quarter results, surging margins and crazy demand, it's clear that the company has little competition and a lot of pricing power. Nvidia's second quarter gross margin was 70.1%, up from 43.5% a year ago. Third quarter gross margins will creep up to 71.5% to 72.5%.

The competition won't begin to show up until the fourth quarter when AMD launches its AMD Instinct MI300A and MI300X GPUs and AWS re:Invent likely features new Trainium and Inferentia chips.

Sure, there are a few enterprises buying Nvidia systems to train models on-premises, but most will buy computing power through the cloud. Either way, Nvidia is getting paid enough that it will double its fourth quarter revenue in two quarters, assuming it hits its third quarter sales guidance of $16 billion.

Here are the moving parts enterprises need to consider as they explore and scale generative AI efforts.

  • Model training isn't cheap and requires GPUs. Nvidia has pricing power on the infrastructure side.
  • Today, that pricing power is absorbed because hyperscalers are building out and consumer Internet and enterprises see productivity returns. More than 50% of Nvidia’s data center revenue in the second quarter derived from cloud service providers.
  • First movers will pay up for competitive advantage.
  • Nvidia has worked on its supply chain network for a decade or more and now appears to be able to meet demand.
  • Enterprises--especially those building their own infrastructure--will need to determine whether to go with Nvidia or consider options that'll be available in a few months.

To hear Nvidia CEO Jensen Huang tell it, there's only one choice. Not surprisingly, that choice is Nvidia. "The world has something along the lines of about $1 trillion worth of data centers installed, in the cloud, in enterprise and otherwise. And that $1 trillion of data centers is in the process of transitioning into accelerated computing and generative AI. We're seeing two simultaneous platform shifts at the same time," said Huang.

Huang continued:

"What makes NVIDIA special are: one, architecture. NVIDIA accelerates everything from data processing, training, inference, every AI model, real-time speech to computer vision, and giant recommenders to vector databases. The performance and versatility of our architecture translates to the lowest data center TCO and best energy efficiency."

It's hard to argue with Huang now.

Going forward, the question will be whether everyone will need to train models with Nvidia or whether something else will do. Amazon CEO Andy Jassy said AWS will play the Nvidia game, but also noted there will be workloads for its own processors.

"Customers are excited by Amazon EC2 P5 instances powered by NVIDIA H100 GPUs to train large models and develop generative AI applications. However, to date, there's only been one viable option in the market for everybody and supply has been scarce,” said Jassy on Amazon's earnings conference call. “We're optimistic that a lot of large language model training and inference will be run on AWS' Trainium and Inferentia chips in the future."

You can also expect AMD to pick up training workloads. AMD's value to the industry in x86 processors was being a counterweight to Intel. It will play the same role in GPUs and be formidable.

AMD Instinct MI300A and MI300X GPUs are sampling to HPC, Cloud and AI customers with production in the fourth quarter. "I think there will be multiple winners. And we will be first to say that there are multiple winners," said AMD CEO Lisa Su.


Workday: Strong Q2, plans generative AI launches at Workday Rising

Workday raised its fiscal 2024 subscription revenue outlook as the company said it has more than 65 million users under contract. The company also said it will preview generative AI tools at Workday Rising.

The HCM and financials software company reported fiscal second quarter net income of 30 cents a share on revenue of $1.79 billion, up 16.3% from a year ago. Non-GAAP earnings were $1.43 a share in the second quarter.

Wall Street was expecting Workday to report second quarter non-GAAP earnings of $1.26 a share on revenue of $1.59 billion.

Workday's operating cash flow in the second quarter was $425.3 million, up from $114.4 million a year ago. 

Key points:

  • Workday said it has more than 5,000 core Workday Financial Management and Workday HCM customers.
  • Retail and hospitality crossed $1 billion in annual recurring revenue, joining financial services.
  • Workday ended the quarter with cash, cash equivalents and marketable securities of $6.66 billion.

As for the outlook, Workday raised its fiscal 2024 subscription revenue guidance to $6.57 billion to $6.59 billion, up about 18% from the previous year. Third quarter subscription revenue will be about $1.68 billion. The company added that it is raising its non-GAAP operating margin target to 23.5%.

On a conference call, Carl Eschenbach, co-CEO, Workday, said the company is seeing deal scrutiny and has been focused on building out its management bench. Workday appointed Emma Chalwin, a former Salesforce executive, to chief marketing officer and last quarter hired Zane Rowe, formerly of VMware, as CFO.

Eschenbach also said that Workday has been focused on adding Financials customers and expanding its footprint. The US represents 75% of Workday's revenue. The company has been honing its sales, partner and go-to-market ground game.

Workday's Aneel Bhusri, co-founder, co-CEO, said it has more than 3,000 customers sharing data with its machine learning models. Bhusri added that Workday will outline generative AI developments next month at Workday Rising.

Bhusri said Workday will preview copilot use cases as well as content generation and document understanding. "We believe that the enhanced AI and generative AI will enhance our win rates," he said. 

The co-CEO added that Workday plans to "offer generous usage based entitlements" for customers that opt in to generative AI functionality. That approach could resonate with enterprise buyers, who are about to get hit with a bevy of generative AI upsells and add-ons.

Bhusri said Workday isn't necessarily looking to charge for generative AI add-ons. Why? Because customers are sharing anonymized data in return for insights. Workday can use that data to train models. 

"The data is valuable to train LLMs and domain specific LLMs. We turn around and make our products more competitive," said Bhusri, who added that Workday is likely to create new products based on models. 

Constellation Research CEO Ray Wang said Workday's approach to data and generative AI makes sense. 

"Customers know that their ERP and HR systems securely store a lot of the training data that will power future AI innovations. It's good to see that Workday recognizes that this is the customer's data being used to create derivative insights and allow customers to remain loyal to Workday."


Palo Alto Networks: Takeaways from a Friday afternoon treatise

Palo Alto Networks held a fourth quarter earnings conference call on a Friday after market close and wound up outlining a three- to five-year vision. Give Palo Alto Networks CEO Nikesh Arora props for theatrics and pep.

The earnings turned out swell--especially when you consider that companies usually save their worst news for Friday afternoons. On Aug. 18, shortly after 4:30 p.m. EDT, Arora kicked the call off.

"We apologize to people who are inconvenienced but as we had mentioned in our press release, we wanted to give ample time to analysts to have one-on-one calls with us over the weekend, and we have a sales conference that kicks off on Sunday. We want to make sure all our information was disclosed out there. So again, we apologize for the unique Friday afternoon earnings call. But clearly, we have enjoyed the attention."

We'll skip the actual fourth quarter results, but they were better than expected. Instead, it's worth focusing on a bevy of big-picture items that made tuning in past Friday happy hour worth it. Here's a look at the big picture.

Cybersecurity vendors are under scrutiny. For years, security companies have had blank checks from enterprises. Who wants to be seen skimping on security? Higher interest rates have changed the dynamics.

"CFOs are scrutinizing deals, which means you have to be better prepared to answer their question and show the business value that you bring to them with your cybersecurity products," said Arora.

And that business value is what exactly? For Palo Alto Networks it's having a platform approach that can consolidate vendors, contracts, licenses and maintenance. Arora said:

"We can usually walk in and say, here, you can consolidate the following five, it doesn't cost you anymore, but you get a better outcome, and you get a modernized security infrastructure. So from that perspective, that strategy of ours is resonating. But there is more scrutiny. There are deals that go through multiple levels. There are some that get pushed. There are some that get canceled."

Cybersecurity total addressable market expands. Arora argued that commerce, digital transformation, cloud computing, IoT, AI and every new advance cooked up only creates more threat vectors. Securing these advances is going to require integration. "We at Palo Alto Networks as well as, to some degree, different plays in the industry, started to look at the various parts of these markets and say, like, these things need to start getting integrated because you can't deliver great security outcomes without these things getting integrated," he said.

Platforms matter. Palo Alto Networks isn't alone with its platform approach to security. Across the large technology vendors, it's all about platform. Platforms win because customers don't have time to integrate everything. Arora said:

"It seems obvious now, but five years ago, we had customers who had more cybersecurity vendors than they had IT vendors. And it was a customer's responsibility to take these vendors, deploy them across their infrastructure, make them work together to deliver security outcomes."

Bleeding edge because there's no choice. Arora said "you have to stay at the bleeding edge because you don't need your customers to be at the bleeding edge." "It is our responsibility as a security company to make sure we take all the innovation, we distill it, we make it work in an integrated fashion and deliver it to our customers at the fastest pace possible because the bad actors are not waiting," he added.

Evergreen innovation. This concept from Palo Alto Networks is worth cribbing for the enterprise technology supply chain. "We want to maintain this notion of being an evergreen innovation company," said Arora. "You always have to be scanning the market understanding, where the world is going, where technology is going to see what potential security risks are going to get created in the adoption of that technology, in the deployment of that technology to make sure we're ahead of the curve, and we start delivering security by design."

AI can't be wrong in security. Palo Alto Networks will focus on "precision AI" that can't be wrong. "We have to build a lot of our own models. We have to train them. We have to collect first-party data. We have to understand the data. Today, we collect approximately 5 petabytes of data. Yes, 5 petabytes of data on behalf of our customers and analyze it for them to make sure we can separate signal from noise and take that signal and go create security outcomes for our customers," said Arora.

The takeaway: Palo Alto Networks plans to embed precision AI throughout its entire product line, whether that's copilots, UI enhancements or insights.


    Enterprise Tech News, 2023 ShortList Recap | ConstellationTV Episode 64

    On ConstellationTV episode 64, co-hosts and analysts Liz Miller and Holger Mueller talk tech news trends, then each recaps the release of their new and updated Q3 ShortLists naming the leading vendor solutions across a wide range of coverage areas.

    00:00 - Introduction
    01:28 - Tech News Updates - cloud spending, AI trends and more
    13:03 - ShortList Updates from Liz Miller
    18:11 - ShortList Updates from Holger Mueller
    24:37 - Bloopers
     

    ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. The show airs live at 9:00 a.m. PT/ 12:00 p.m. ET every other Wednesday.

    Subscribe to our YouTube Channel: https://youtube.com/@UCs0vwq63PfnDZp1fMO0uDJg

    Watch episode 64 on YouTube: https://www.youtube.com/watch?v=8xquU-0zbv0

    Nvidia has pricing power: Q2 results surge, projects Q3 revenue of $16 billion

    There's generative AI money to be made--at least if you're selling the GPUs that power the compute. Nvidia posted another ridiculous quarter and delivered an outlook that looks like a misprint.

    Nvidia established itself as an AI growth darling last quarter. The second quarter will only bolster that position. The company reported second quarter revenue of $13.51 billion, up 101% from a year ago. Earnings were $2.48 a share and non-GAAP second quarter earnings were $2.70 a share.

    Wall Street was expecting Nvidia to report second quarter earnings of $2.09 a share on revenue of $11.22 billion.

    CEO Jensen Huang said Nvidia is benefiting from its GPUs as well as its Mellanox networking and switch gear. Infrastructure sales are booming as cloud providers and enterprises ramp up for generative AI workloads. Nvidia is seeing gains from its H100 AI infrastructure.

    Indeed, data center revenue was $10.32 billion, up 171% from a year ago. Nvidia outlined a series of partnerships in the quarter with the likes of Accenture, ServiceNow, VMware and Snowflake.

    Nvidia is also enjoying pricing power since rivals such as AMD are just ramping up AI efforts. For instance, Nvidia's second quarter gross margin was 70.1%, up from 43.5% a year ago.

    The outlook for Nvidia also blew away estimates. Nvidia said third quarter revenue will be about $16 billion. Gross margins are expected to improve to 71.5% to 72.5% in the third quarter.
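A quick sanity check on the growth figures above: the reported revenue and margin numbers are internally consistent. The helper below is an illustrative sketch (the function name and structure are my own, not from any reporting toolkit).

```python
def yoy_growth(current: float, year_ago: float) -> float:
    """Year-over-year growth as a percentage."""
    return (current / year_ago - 1) * 100

# Q2 revenue of $13.51B, reported as up 101% from a year ago,
# implies a year-ago base of roughly $6.72B.
year_ago_revenue = 13.51 / 2.01
print(round(year_ago_revenue, 2))                   # ~6.72 ($B)
print(round(yoy_growth(13.51, year_ago_revenue)))   # 101 (%)

# Gross margin went from 43.5% a year ago to 70.1% --
# an expansion of 26.6 percentage points, not 43.5%.
margin_expansion = 70.1 - 43.5
print(round(margin_expansion, 1))                   # 26.6 (points)
```

The margin comparison is the one to watch: a move from 43.5% to 70.1% is best expressed in percentage points, since quoting it as a percentage change would roughly double the number.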

    In prepared remarks, CFO Colette Kress said:

    "Data Center revenue was a record, up 171% from a year ago and up 141% sequentially, led by cloud service providers and large consumer internet companies. Strong demand for the NVIDIA HGX platform based on our Hopper and Ampere GPU architectures was primarily driven by the development of large language models and generative AI. Data Center Compute grew 195% from a year ago and 157% sequentially, largely reflecting the strong ramp of our Hopper-based HGX platform. Networking was up 94% from a year ago and up 85% sequentially, primarily on strong growth in InfiniBand infrastructure to support our HGX platform." 

    On a conference call, Kress said Nvidia's supply chain and manufacturing relationships have been built up over the last decade and those partnerships are paying off now. She said:

    "There is tremendous demand. Our supply chain partners have been phenomenal in supporting our needs. We have lined up additional capacity and components. We expect supplies to increase each quarter through next year."

    Kress noted that demand was strong among cloud providers, consumer Internet companies and enterprises, who are looking at on-premises generative AI workloads. "Virtually any industry can benefit from generative AI," she said.

    By the numbers:

    • Nvidia said it has authorized an additional $25 billion in stock repurchases. 
    • Gaming revenue was $2.49 billion, up 22% from a year ago. 
    • Automotive revenue was $253 million, up 15% from a year ago. 
    • Pro visualization revenue was $379 million, down 24% from a year ago. 



    Snowflake Q2 better than expected

    Snowflake's second quarter results were better than expected as the company grew its customer base 25%. The company ended the quarter with 402 customers generating more than $1 million in revenue. 

    Snowflake reported a second quarter net loss of $226.87 million, or 69 cents a share, on revenue of $674 million. The company reported non-GAAP earnings of 22 cents a share. Wall Street was expecting Snowflake to report non-GAAP second quarter earnings of 10 cents a share on revenue of $662.28 million. 

    Second quarter product revenue was $640.2 million, up 37% from a year ago. 

    CEO Frank Slootman said enterprise data is in the center of AI and machine learning efforts. "Enterprises and institutions alike are increasingly aware they cannot have an AI strategy without a data strategy," he said. 

    The quarter was a stabilization quarter after the first quarter spurred growth worries. 

    As for the outlook, Snowflake projected product revenue growth of 28% to 29% for the third quarter, or about $670 million to $675 million. For fiscal 2024, Snowflake is projecting product revenue of $2.6 billion, up 34% from a year ago. 
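The guidance range and growth rate quoted above line up: backing the year-ago quarter out of the projection lands on the same base from both ends of the range. A small illustrative sketch (the helper name is mine):

```python
def implied_year_ago(projected: float, growth_pct: float) -> float:
    """Back out the year-ago base implied by a projection and its growth rate."""
    return projected / (1 + growth_pct / 100)

# $670M at 28% growth and $675M at 29% growth both imply
# year-ago Q3 product revenue of roughly $523M.
low = implied_year_ago(670, 28)
high = implied_year_ago(675, 29)
print(round(low, 1), round(high, 1))   # ~523.4 ~523.3
```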

    Here's a look at the projections for Snowflake and how they fit into the long-term model. 


    R "Ray" Wang on the Actual Risks of Generative AI

    AI attacks on the security of corporations and countries will increase and grow more complex, and we need to prepare now.

    Watch R "Ray" Wang on the actual risks of generative AI: https://www.youtube.com/watch?v=nZdnw62NV_Y