Results

Nvidia reports strong Q2, Hopper demand remains strong, Blackwell samples shipping


Nvidia said its Blackwell processor samples are shipping to customers and demand for its Hopper platform remains strong. The company reported better-than-expected second quarter earnings as data center revenue continued to carry the team.

The company reported second quarter net income of $16.6 billion, or 67 cents a share, on revenue of $30.04 billion, up 122% from a year ago. Non-GAAP earnings for the second quarter were 68 cents a share. Wall Street was expecting Nvidia to report second quarter non-GAAP earnings of 64 cents a share on revenue of $28.73 billion.

Data center revenue in the second quarter was $26.3 billion, up 154% from a year ago.

As for the outlook, Nvidia projected third quarter revenue of about $32.5 billion with non-GAAP gross margins about 74.4% to 75%. For the third quarter, analysts were modeling earnings of 71 cents a share on revenue of $31.75 billion.

CEO Jensen Huang said:

"Hopper demand remains strong. Blackwell samples are shipping to our partners and customers. Spectrum-X Ethernet for AI and NVIDIA AI Enterprise software are two new product categories achieving significant scale, demonstrating that NVIDIA is a full-stack and data center-scale platform. Across the entire stack and ecosystem, we are helping frontier model makers to consumer internet services, and now enterprises."

Going into the quarter, Constellation Research CEO Ray Wang noted that Nvidia is the bellwether for AI and that spending on genAI infrastructure is still strong. “This buying cycle will continue for about two more quarters and then we’ll see a pullback,” said Wang. “That said, Nvidia is priced for perfection right now.”

Key themes from CFO Colette Kress:

  • “Data Center revenue was a record, up 154% from a year ago and up 16% sequentially. The strong sequential and year-on-year growth was driven by demand for our Hopper GPU computing platform for training and inferencing of large language models, recommendation engines, and generative AI applications. Sequential growth was driven by consumer internet and enterprise companies.  Cloud service providers represented roughly 45% of our Data Center revenue, and more than 50% stemmed from consumer internet and enterprise companies.”
  • “Networking revenue was $3.7 billion, up 114% from a year ago driven by InfiniBand and Ethernet for AI revenue, which includes Spectrum-X end-to-end ethernet platform.”
  • “We shipped customer samples of our Blackwell architecture in the second quarter. We executed a change to the Blackwell GPU mask to improve production yield. Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal 2026.  In the fourth quarter, we expect to ship several billion dollars in Blackwell revenue.”
  • “Inventory was $6.7 billion with 81 days sales of inventory (DSI). Purchase commitments and obligations for inventory and manufacturing capacity were $27.8 billion, including new commitments for Blackwell capacity and components. Prepaid supply agreements were $4.7 billion.”
  • The company has authorized another $50 billion to buy back shares.

Speaking on the earnings call, Huang made the following points:

  • Hopper demand remains strong even as Blackwell looms because the upgrade path is simple and companies are moving from traditional data centers. 
  • "Generative AI is a fundamental new form of computer science. It's affecting how every layer of computing is done, from CPU to GPU, from human engineered algorithms to machine learning algorithms, and the type of applications you could now develop and produce."
  • "We have to continue to drive the generational performance up quite significantly so we can drive down the energy consumed and drive down the cost necessary to do it."
  • "A generative AI company spends the vast majority of their invested capital into infrastructure so that they could use an AI to help them create products. And so these companies need it now."

Nvidia’s software game

Nvidia this week outlined a series of announcements that may highlight the company’s real moat: The ability to leverage its software ecosystem to boost performance. In addition, Nvidia is looking to make it easier for enterprises to bring generative AI projects from pilot to production.

Among the key highlights:

According to Nvidia, NIM Agent Blueprints are designed as a jump start, meant to be modified and enhanced with what the company calls a "data-driven generative AI flywheel." NIM Agent Blueprints are free for developers to download and can be deployed via Nvidia AI Enterprise.

Nvidia's Justin Boitano, vice president of enterprise AI software products, said: "The first wave of generative AI was really the infusion of AI into internet scale services driven by makers of foundation models and expanded into productivity tools. The next wave is really starting now, and it represents a bigger business process transformation that's going to affect how teams work across an enterprise. AI is going to help teams reason through complex business decisions."

Constellation Research analyst Holger Mueller said: "Nvidia's NIM Agent Blueprints offering should enable enterprises to uptake AI faster to power their next generation applications. As expected the AI game is becoming a little more about the software (than the hardware)."

Previously:


CrowdStrike Q2 better-than-expected, sees subscription revenue hit due to outage


CrowdStrike said it will take a $30 million subscription revenue impact for the third quarter and each of the remaining fiscal quarters in the fiscal year due to its recent outage. Fiscal 2025 revenue guidance “includes an estimated impact in the high-single digit millions to professional services revenue in the second half of fiscal year 2025.”

The security vendor said third quarter revenue will be between $979.2 million and $984.7 million with non-GAAP earnings of 80 cents a share to 81 cents a share. Fiscal 2025 revenue will be between $3.89 billion and $3.9 billion with non-GAAP earnings of $3.61 a share to $3.65 a share.

The outlook was the big theme as investors were looking for some quantification due to the outage. For the second quarter, CrowdStrike reported net income of $47 million, or 19 cents a share, on revenue of $963.9 million, up 32% from a year ago. Non-GAAP earnings were $1.04 a share in the second quarter.

Wall Street was expecting CrowdStrike to report second quarter non-GAAP earnings of 97 cents a share on revenue of $958.32 million.

Going into the results, all eyes were on CrowdStrike’s orders and potential liability due to its outage. The headlines haven’t been kind to CrowdStrike and rivals Palo Alto Networks and SentinelOne said they have seen a pickup in interest following the outage.

CrowdStrike outage fallout:

In a LinkedIn post, CrowdStrike CEO George Kurtz said:

"Our LogScale Next-Gen SIEM, identity protection, and cloud security hypergrowth businesses together surpassed $1 billion in ending ARR, growing more than 85% YoY. These results showcase CrowdStrike as cybersecurity’s AI-native consolidation platform of choice. Through our Falcon Flex licensing model, customers are adopting more of the Falcon platform than ever before."

That comment speaks to platformization and next-gen SIEM vendors consolidating, said Constellation Research analyst Chirag Mehta. 

The company said that the July 19 outage landed in the last two weeks of the quarter and it delayed deals. Most of those deals remain in CrowdStrike's pipeline. 

Speaking on a conference call, Kurtz said CrowdStrike has implemented new controls to prevent an outage. He apologized to customers and said:

"We've already implemented the following actions to build a more resilient Falcon platform. This ongoing work focuses on enhancing security and resiliency over the short, medium and long term."

Specifically, Kurtz said the company has done the following:

  • Enhanced content visibility and controls, with new content control configurations released. 
  • Content QA enhancements, including a new content validator and interpreter to prevent erroneous content. 
  • An improved content release process that mirrors what CrowdStrike does with sensors.

"The July 19 incident starts a new chapter for CrowdStrike," said Kurtz. "We've immediately addressed learnings from the incident, and will continue to apply and evolve these lessons into our future."

Two key quotes from Kurtz on the CrowdStrike earnings call are worth noting because they address durability of the business as well as the Microsoft relationship.

  • "Obviously, there's a lot of noise in the marketplace and we can only control what we can control, and I think the best way for me to articulate that is to just recount some of the conversations. I had two customer calls this morning and most of them start out the same. They talk about our response, how transparent we were, and how we dealt with the problem. We talked about some of the mitigating steps that we've taken and it generally ends with we want to do more with CrowdStrike. I had one this morning, which was a customer that had an impact, we talked through it. They were satisfied with the controls we put in place, and in fact, on the call, they basically said we won the next-gen SIEM project they had, which we won against another next-gen SIEM competitor. So this is what we're seeing."
  • "We'll continue to work with Microsoft as part of the ecosystem as they look to provide further enhancements around kernel access, but just to be clear, there are thousands of software kernel drivers that are out there that go well beyond security, like VPN, virtualization software, IT management software, backup software and a lot more. So we are one part of the ecosystem, and we're certainly a player that's going to help and work with Microsoft as they think about adding other mechanisms to allow the ecosystem to flourish."
     

 


Salesforce Q2 revenue growth 8%, touts Agentforce, CFO Weaver to step down


Salesforce reported better-than-expected second-quarter results, but revenue growth was just 8%. CFO Amy Weaver will also step down.

The company reported second quarter net income of $1.43 billion, or $1.47 a share, on revenue of $9.32 billion. Adjusted earnings were $2.56 a share.

Wall Street was expecting Salesforce to report earnings of $2.36 a share on revenue of $9.23 billion. Wall Street is expecting Salesforce to report single digit revenue growth rates for the foreseeable future.

As for the outlook, Salesforce said third quarter revenue will be $9.31 billion to $9.36 billion, up 7%. Full year revenue will be $37.7 billion to $38 billion.

Salesforce CEO Marc Benioff said the company is operating with discipline and boosting operating margins. “With our new Agentforce AI platform, we’re reimagining enterprise software for a new world where humans with autonomous Agents drive customer success together,” said Benioff.

Weaver will remain CFO of Salesforce until a successor is found, the company said.

By the numbers for the second quarter:

  • Sales Cloud revenue was up 10% from a year ago.
  • Service Cloud revenue was up 11% from a year ago.
  • Platform and other was up 10% from a year ago with marketing and commerce up 7%.
  • Integration and analytics was up 14%.

Key points from Benioff on the conference call:

  • Multi-cloud deals accounted for close to 80% of new business in the quarter.
  • There were 25 trillion Einstein transactions across Salesforce clouds. 
  • AI is top of mind, but customers are overthinking it. 

Benioff elaborated on the do-it-yourself genAI trend. He said:

"I think that there's a lot of misconceptions about AI with my customers. I have been very disappointed with the huge amount of money that so many of these customers have wasted on AI. They are trying to DIY their AI. It's not unlike when we first saw cloud emerge. Customers feel like have to roll their own, build it themselves, get in the weeds, and try to figure to figure it out. This is a moment where every customer is needs to realize you don't need the DIY your AI. You can use a platform like Salesforce to get the highest efficacy of artificial intelligence."

The bet on Agentforce

Speaking on the earnings call, Benioff said genAI is positioning the company for the future and Agentforce is going to drive growth:

"Dreamforce is really becoming Agentforce," said Benioff. "I think this is going to be a moment that everyone is going to have to see in person understand what is going on. We're going to show Agentforce and how we've reimagined enterprise software for this new world of autonomous AI."

Benioff said:

"We're building the agents for Workday, and we're going to be building custom agents for so many of you as well. Agentforce is a development platform, as well as this incredible capability to radically, extend your sales and service organizations. We're all going to have Agentforce and we're going to have them at scale automating the entire workflow on their own."

Constellation Research analyst Martin Schneider said Agentforce represents a new foray for Salesforce. In a blog post, he said:

"The announcement hints at a demarcation point in Salesforce's AI strategy. To reduce it to simple terms, the company has now entered the automated and autonomous phase; as it has evolved from Einstein insights, to generative AI copilots, to these new agents. And to be clear, Salesforce is intently pointing out that these are NOT "bots." These new agents are more outcome oriented, and are designed to handle far more complex and personalized interactions that your typical web chatbot. These agents should, in theory, have far more data points to work with to make this a reality."

Jabs at Microsoft

Benioff said Agentforce can win over enterprises--especially those disappointed with Microsoft Copilots. He said:

"With our new Agentforce platform, we're going to make quantum leap forward in AI. I want you to have your hands on this technology to really understand this. This is not about copilots. So many customers are so disappointed at what they bought from Microsoft with Copilots because they're not getting the accuracy of the response that they want. Microsoft has disappointed so many customers with AI.  These agents are autonomous. They're able to act with accuracy. They're able to come right out of the box.

By the end of the fiscal year, we will have thousands of customers on this platform. The early trials have been remarkable."


Supermicro's really bad, horrible week


Supermicro said it won't be able to file its annual report on time "without unreasonable effort or expense" because it needs to assess its "internal controls over financial reporting."

That disclosure lands shortly after Hindenburg Research issued a scathing report about Supermicro.

Needless to say, Supermicro's stock hasn't fared well since investors are more likely to sell first and ask accounting questions later. Supermicro shares are trading at $405, down about 25% on the day. Supermicro peaked at $1,229 following earnings surprises due to the generative AI buildout. Simply put, Supermicro looked like one of the few winners of genAI trickledown economics given most of the gains have gone to Nvidia. Supermicro is planning on a 10-for-1 stock split.

The company did say that its fourth quarter results and outlook weren't affected by the annual report delay. Supermicro said its fiscal 2025 revenue will be between $26 billion and $30 billion, up from $15 billion in fiscal 2024.

On Supermicro's fourth quarter conference call, CEO Charles Liang said the company was looking to scale aggressively via direct liquid cooled systems to hyperscalers and enterprise customers. Supermicro wants to be a complete AI data center provider.

"Our Taiwan capacity is getting bigger and Malaysia capacity will be ready. So, we're fully ready for large-scale datacenter customer, but we will be selective. So that's why we foresee only $26 billion to $30 billion. If we try to be more aggressive on a large scale, our growth can be even faster than that. But we try to grow in both ways enterprise and large-scale datacenter kind of try to balance so to maintain our healthy profitability."

Liang did face some skepticism about Supermicro's expansion. One analyst noted that the company doubled revenue in fiscal 2024, but had negative cash flow of $2.6 billion. The question is whether Supermicro has to burn another $2.6 billion or so to double revenue in fiscal 2025.

In response to the cash burn question, Liang said the company may not burn that much cash but could if it chases market share.

What remains to be seen is whether Supermicro's annual report delay is a sign of accounting issues similar to what the company saw in 2018 or just a blip. It's also possible that Supermicro is part of the unraveling of enthusiasm over the genAI infrastructure buildout.


Salesforce AI Agents, Talent Acquisition, Post-Breach Resilience | ConstellationTV Episode 87


In ConstellationTV episode 87, new co-hosts Martin Schneider and Larry Dignan unpack recent enterprise tech news, including Salesforce's introduction of two new AI agents, a sales development agent and a sales coaching agent, aimed at enhancing lead qualification and sales coaching through multi-channel communication and AI-driven insights.

Then Larry interviews SuperNova finalist Cari Bohley from Peraton about the importance of AI in talent management, highlighting Peraton's use of AI to improve retention and career development.

Finally, Constellation analyst Chirag Mehta unpacks the impact of AI on cybersecurity, emphasizing the need for post-breach resilience and the challenges of securing generative AI models.

00:00 - Meet the Hosts
00:21 - Enterprise Tech News updates (Salesforce AI, earnings, market trends)
08:08 - Challenges and Surprises in AI Talent Acquisition | Interview with Cari Bohley
24:10 - Cybersecurity Trends for the Fourth Quarter | Interview with Chirag Mehta
34:14 - Bloopers!

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. Tune in live at 9:00 a.m. PT/12:00 p.m. ET every other Wednesday!

__

ConstellationTV Episode 87 transcript (News segment): Disclaimer - This transcript is not edited and may contain errors

Well, one of the most interesting things that popped over the last week was Salesforce really advancing a lot of their AI vision. Interesting that they're announcing this ahead of Dreamforce, but they've really kind of taken the next step, you know, they started out with kind of Einstein analytics, and they went to their Gen AI stuff with the Einstein one co pilots. Now they're bringing out what they're calling agents. AI agents, right? They've got two that they're going to be announcing and bringing out at Dreamforce. One is a sales development agent, so an SDR, virtual SDR, think of that. And then the other one's kind of a sales coaching. Really interesting for these two choices, because you've really you're looking at, on the one hand, with the with the sales Dev, a lot of kind of repetitive, you know, question based concepts of like finding out, you know, how do you qualify leads? Is there a budget? Are you a decision maker? All these types of things, but also being able to do that in a really interesting way, across channels, right? So where most SDRs, the human ones, are making phone calls and maybe just sending out a bunch of, you know, blast and hope emails. You know, these can be done through WhatsApp and through other channels, and, you know, maybe even slack over time, right? Because it's a Salesforce property, right? And you're, you're looking at this ability to actually go out there and asynchronously and in interesting ways, qualify leads and then route them using Salesforce workflow to the appropriate sales rep, right? So instead of kind of batching it, doing it with a lot of like, hey during business hours, we can call a bunch of people. This can be 24/7, asynchronous, multi channel. Really interesting there, right?

And then on the coaching side, what you're seeing is a really interesting play where you know, sales coaches, sales methodologies, all these types of things, and even guided selling. Ai started to get into that, right? But when you actually have this kind of one on one coach, really following everything you do, because a human coach really can't understand every call you make, every email you send. You know, what is your cadence? You know how what is the white space between when you engage, when you don't engage, when you should be engaging, right? It really can understand that better than any human and at scale. So I think they really pick two really smart and really strong, you know, first agent choices there to go out with. And the great thing is, is, again, with Salesforce as a platform, you're going to be able to kind of build these on your own, leveraging their tool set and make your own agents for different purposes, jobs to be done, all that kind of stuff. So it's just stage one. It's interesting that they're announcing it well ahead of Dreamforce, but I think they're taking a really interesting first mover advantage among a lot of the CRM players out there. So it's pretty interesting.

What's interesting about that to me is this is sort of the second time in like a week and a half where I've heard talk about Gen AI as, like a behavior coach and walk you through difficult conversations and coach you up for sales and things like that. I mean, I assume there's, there'll be ROI metrics for those, those kind of coaching, sales, coaching agents. But yeah, it's interesting stuff. Yeah, the stuff I'm looking at when it comes to Salesforce is, are they going to grow and their earnings are about to land on a day that brings Salesforce, Nvidia, CrowdStrike and a few others. Why they don't pace these out better? It's beyond me. But, you know, there's, there's been some grumbling about Salesforce trying to, you know, sell multiple clouds and all that kind of stuff. So I'm really going to just look closely for the growth. You know, Salesforce is sort of in that land of single digit growth right now, and it's. Really unclear to me, what, you know, what's going to juice those sales? I mean, I assume it would be data cloud, but you know, it's, it's, it's a little tricky to see how that company's going to accelerate growth without an acquisition, which also gets the activist investors all Well, sure, yeah. I think one of the interesting things they pointed on a briefing the other day was they're seeing, you know, the customers that they have that are utilizing AI well, are seeing like 30% revenue growth, like there's they're hitting that high double digit growth. So again, it comes down to value and value receipt. So if they can build an ROI model out there that, like the their AI vision, combined with their ability to really either keep flat your human costs, but also be more productive. There's a lot of ROI in that that they can be providing downstream and proving that out Well, finally, in a way where, like, commodified sales automation rarely could do in a really, like non dotted line, way right, like really a straight line. 
So it could be an interesting tipping point for them to really see some growth again, because they're actually going to be able to show some value, big if, but it could be huge for them.

The other earnings I'm watching is CrowdStrike. When they report it's basically, what are these lawsuits going to cost? It sounds like a lot of it was not insured, per se, but the damages that could, you know, you could hit CrowdStrike with, are kind of limited, right? There's the one, you know, the big Delta, CrowdStrike, Microsoft spat. You know, been reading the docs going back and forth on that, but you know, there's gonna be a bunch of those lawsuits popping up. So that's the thing to look for there. The other one, naturally, the whole markets waiting for it, and that's Nvidia's earnings, right? It's basically all about Nvidia. Nvidia is probably the only company to date that's raked in a ton of dough on Gen AI, sure. And, you know, they do have some delays, and they're in the latest chips and all that. But I'm not sure it's gonna matter too much. And, you know, AMD made this acquisition the other day, building out, you know, their integrated system designs and things like that. So, you know, the competition's coming up to Nvidia, but right now, I think it's kind of free sailing for them.

The problem is just that they're priced for perfection. Well, of course, and that's and that's an issue. But what you're seeing is interesting. Like, when you actually look at, like you said, AMD, ZT Systems, you're also seeing Rebellions and Sapeon in Korea merging, again. Like, everyone is going to take a run at Nvidia and like, how do you, you know, how do you become the value provider, right? Of like, where's the commoditization of AI powering, you know, of like, where is that like? Because, you know, like, think, let's, like, take a look back at Amazon, how many years ago, right, when it was, like, commodity commodifying cloud, right, when people were doing a lot of that themselves, right? You know, like, think about what Salesforce could have been if it had an Amazon 20 years ago, right? So who's going to go and take that, that, that incredible disruption role there, right? Because, like you said, right now, it's so expensive, and look, they've been reaping the benefits, but someone's going to disrupt it, right? Exactly, all right.

Well, thanks for the news, Martin. And then coming up next, we have a talk with SuperNova Award finalist, Cari Bohley, she's vice president of talent management at a company called Peraton, and it's really a cool company to do all the scientific engineering. DoD is a customer, and it was kind of built by merger, and now they're basically combining the talent pool and trying to, you know, find the right skill sets for what they're doing. So it looks pretty cool there. And then to end the show, we're going to talk to Chirag Mehta about some of the cybersecurity stuff you need to think about for the fourth quarter. 

View the full ConstellationTV Episode 87 transcript here


Nvidia shows H200 systems generally available, highlights Blackwell MLPerf results


Nvidia said its H200-powered systems are generally available, with CoreWeave the first cloud service provider to offer them and server makers including Asus, Dell, HPE, QCT and Supermicro shipping systems.

The general availability comes as Nvidia released MLPerf Inference v4.1 benchmarks for its Hopper architecture and H200 systems and its first Blackwell platform submission.

MLPerf Inference 4.1 measured the Nvidia platform across multiple models including Llama 2 70B, Mistral 8x7B Mixture of Experts (MoE), Stable Diffusion and other models for recommendation, natural language processing, object detection, image classification and medical image segmentation.

"We did our first-ever Blackwell submission to MLPerf Inference. Not only was this the first Blackwell submission ever to MLPerf it was also our first FP4 submission," said Dave Salvator, Director of Accelerated Computing Products at Nvidia.

Key benchmark points:

  • Nvidia Blackwell set new per-GPU performance records on Llama 2 70B with 10,756 tokens/second on a single GPU in the server scenario, four times the performance of the previous generation. Offline performance was 3.7 times higher.

  • Nvidia Blackwell and FP4 on MLPerf Inference delivered high performance and accuracy on the Quasar Quantization System.
  • HGX H200 and NVSwitch delivered 1.5x higher throughput compared to H100 on Llama 2 70B.

  • Nvidia also emphasized that it is using software to improve performance by up to 27% on H100 systems in the field as well as H200. Nvidia said it has improved HGX H200 performance by up to 35% in one month on Llama 3.1 405B via software improvements.
  • Jetson AGX Orin, Nvidia's edge platform for transformer models, saw a 6x boost in performance due to software optimization.
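The relative figures above can be sanity-checked with simple arithmetic. A rough sketch (the derived baseline is an estimate inferred from the cited numbers, not an official MLPerf submission):

```python
# Back-of-the-envelope check on the cited MLPerf Inference v4.1 numbers.
# Derived figures are estimates from the article, not official results.

blackwell_server_tps = 10_756   # Llama 2 70B tokens/second, single Blackwell GPU
server_speedup = 4.0            # cited gain vs. previous generation
offline_speedup = 3.7           # cited offline-scenario gain

# Implied previous-generation (Hopper) server throughput per GPU.
implied_hopper_server_tps = blackwell_server_tps / server_speedup
print(f"Implied Hopper server throughput: ~{implied_hopper_server_tps:,.0f} tokens/s")

# The separately cited gains compound multiplicatively: 1.5x for HGX H200
# over H100, times up to 27% from software tuning alone.
h200_vs_h100 = 1.5
software_gain = 1.27
print(f"H200 plus software vs. baseline H100: ~{h200_vs_h100 * software_gain:.2f}x")
```

Compounding hardware and software gains this way is an approximation, since the cited percentages come from different models and scenarios.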

"The amount of software enabling and ecosystem enabling it takes to deliver great AI performance is really significant," said Salvator. "Yes, we make amazing hardware and architectures, but the software enabling is what really makes the big difference in terms of being able to realize the potential of the hardware. In any given generation of our GPUs and total platform we typically get between 2.5x more performance with ongoing software tuning."


Verifiable Credentials and the Story Behind a Credential


Verifiable credentials are one of the most important elements of digital identity today.

What exactly does a verifiable credential verify?

And while we’re on the subject, what is a credential anyway?

Let’s start with existing analogue credentials. Thanks to the vagaries of the English language, “credential” can be a verb or a noun.

Credentialing

The noun credential usually refers to “a qualification, achievement, quality or aspect of a person’s background, especially when used to indicate their suitability for something” (Ref: Oxford Languages).

There’s a subtle implication in the everyday use of the word. A credential is generally associated with criteria for its particular quality and suitability. These criteria are not always visible, especially to laypeople, but we sort of know they’re there!

Consider professional credentials. A would-be accountant, for instance, must obtain a particular degree by passing certain tests set by a university. In addition, that degree must be approved by a professional accounting body.

So, in this sense, every credential is an abstraction which represents that the holder has satisfied certain rules. Every important credential has meaning and context.

As a verb, “credential” means to provide someone with credentials.  This might seem obvious, but I think it’s the more important sense of the word.

A credentialing process is a formal (rules-based) sequence of events, which has usually been designed to approve the holder for undertaking specific activities. There is a tight relationship between the credentialing process and the intended use of the credential.

Examples include the onboarding of new employees, enrolment in university courses, admission to professional associations (including recognition of international qualifications), approval of journalists to attend special events such as political conventions, security clearances, and nations’ citizenship requirements.

Credentialing processes are famously conservative. They are the sovereign concern of nations, and mission critical to academic institutions and professional societies. Right or wrong, professional credentials are notoriously provincial and difficult to have recognized between jurisdictions. Credentialing bodies zealously represent their communities of interest and reserve the right to set rules as they see fit.

Going from physical to digital credentials

Traditionally, many credentials are physically manifested as cards, membership tokens and other badges, used by the holder to prove their status to others. These items provide a number of cues  that a credential is genuine, the issuer is legitimate, and the credential hasn’t been modified. Some credential cards include photographs which help to show that the credential is in the right hands when presented.

By the way, the plastic card itself is sometimes called a “credential”. It is more useful to think of it as a carrier or container of credentials, especially as we shift from analogue to digital. 

Yet in the move to digital, most credentials in the abstract sense retain their essential meaning.  

For example, a government-authorized Medicare provider can sign off on healthcare treatments to be reimbursed. That provider should be able to assert precisely the same authority in digital health workflows.

Credit cards as credentials

A credit card is a token which signifies that the holder is a paid-up member of a payment scheme. The principal data carried by a credit card is a specially formatted number (known as the Primary Account Number or PAN) which encodes membership of the scheme, identifying the cardholder, the scheme and the issuing bank. Note that a credit card is a container that usually carries just one credential.
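
The PAN's structure can be sketched in a few lines of code. The leading digits (the issuer identification number) identify the scheme and issuing bank, and the final digit is a Luhn check digit that catches transcription errors. A minimal illustration (the sample number below is a standard industry test PAN, not a real account):

```python
def luhn_valid(pan: str) -> bool:
    """Validate a Primary Account Number using the Luhn checksum."""
    digits = [int(d) for d in pan]
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] = digits[i] * 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

# The leading "4" indicates the Visa scheme; 4111 1111 1111 1111 is a
# well-known test number.
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```

Note the Luhn digit is an integrity check, not a security feature; it guards against typos, not fraud, which is why the CVC and later cryptographic measures were needed.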

Credit card numbering has remained unchanged for decades. With the introduction of e-commerce, shoppers were able to use their card numbers online, thanks to Mail Order / Telephone Order (MOTO) rules. These had been established years before e-commerce to allow merchants to accept plaintext card numbers in card-not-present (CNP) settings.

To combat CNP fraud, the Card Verification Code (CVC) was introduced: an additional number on the back of the credit card that was not captured by merchants’ card-imprinting machines, whose paper imprints were vulnerable to dumpster-diving identity thieves.

The CVC is a classic example of security metadata — an extra signal used to confirm the data that really matters (in this case, the credit card number). Credit card call centre operators had access to back-office lists of PANs and matching CVCs; if a caller could quote the CVC correctly, it was assumed they had the physical card in their hands. 

Enter cryptography

Verifiable credentials (VCs) are the strongest mechanism today for asserting important personal attributes, such as driver licences, professional qualifications, vaccinations, proof of age, payment card numbers and so on. VCs are central to the next generation European Union Digital Identity (EUDI), the ISO 18013-5 standard mobile driver licences (mDLs) and the latest digital wallets.

Several new VC data structure standards are under development, including the World Wide Web Consortium (W3C) VC data model and ISO 18013-5 mdocs. Older forms of VC include cellular network SIM cards and chip payment cards.

All forms of VC include the following:

  • information about a particular “Subject” (usually a person, also referred to as the credential holder) such as a licence number, account number or other ID
  • the digital signature of the issuer
  • usually a public key of the Subject, used to verify signed presentations of the VC made from a cryptographic container or wallet
  • metadata about the credential, such as its validity period and the type of container or digital wallet it is carried in, and
  • metadata about the issuer, such as a company legal name, corporate registration number, Ts&Cs for credential usage etc.
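
As a rough sketch, those elements can be pictured together as a single signed document. The field names below are illustrative, loosely inspired by the W3C VC data model rather than copied from it, and the signature value is a placeholder:

```python
# Illustrative sketch of a verifiable credential's parts. Field names are
# hypothetical, loosely modeled on the W3C VC data model; not a normative example.
credential = {
    "issuer": {
        "name": "Example Licensing Authority",   # issuer metadata: legal name
        "registrationNumber": "REG-12345",       # hypothetical corporate registration
    },
    "credentialSubject": {
        "licenceNumber": "DL-0001",              # the Subject's ID
        "publicKey": "z6Mk...example",           # used to verify signed presentations
    },
    "validFrom": "2024-01-01",                   # credential metadata
    "validUntil": "2029-01-01",
    "proof": {
        "type": "DataIntegrityProof",
        "signatureValue": "<issuer-signature-over-the-fields-above>",
    },
}

# A relying party checks the issuer's signature first, then the validity window.
assert "proof" in credential and "credentialSubject" in credential
```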

The digital signature of the issuer preserves the provenance of a verifiable credential: anyone relying on the VC can be assured of its origin and be confident that the credential details have not been altered.

When a VC is presented from a cryptographically capable wallet, a message or transaction incorporating the credential can also be digitally signed using a private key unique to the credential and carried in the container or wallet. This assures the receiver that the credential as presented was in the right hands.

This verifiable presentation proves the proper custody and control of the credential and is just as important as verifiability of a credential’s origin.
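
The two layers of assurance (the issuer's signature over the credential, and the holder's signature over the presentation) can be sketched as follows. Production wallets use asymmetric signatures over standardized encodings; this toy sketch uses Python's standard-library `hmac` purely to show the pattern, and all keys and field names are hypothetical:

```python
import hashlib
import hmac
import json

def sign(key: bytes, message: bytes) -> str:
    # Stand-in for a real digital signature; production VCs use asymmetric keys.
    return hmac.new(key, message, hashlib.sha256).hexdigest()

ISSUER_KEY = b"issuer-secret-demo"   # hypothetical keys, for illustration only
HOLDER_KEY = b"holder-secret-demo"

# 1. The Issuer signs the credential: this preserves provenance.
credential = {"subject": "DL-0001", "validUntil": "2029-01-01"}
credential_bytes = json.dumps(credential, sort_keys=True).encode()
issuer_sig = sign(ISSUER_KEY, credential_bytes)

# 2. The holder signs the presentation (credential plus context such as a
#    nonce): this proves custody and control at presentation time.
presentation = {"credential": credential, "issuerSig": issuer_sig, "nonce": "abc123"}
presentation_bytes = json.dumps(presentation, sort_keys=True).encode()
holder_sig = sign(HOLDER_KEY, presentation_bytes)

# A verifier checks both signatures: origin (issuer) and custody (holder).
assert hmac.compare_digest(issuer_sig, sign(ISSUER_KEY, credential_bytes))
assert hmac.compare_digest(holder_sig, sign(HOLDER_KEY, presentation_bytes))
```

The nonce matters: binding each presentation to fresh context is what stops a captured presentation from being replayed later.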

Telling the story behind the credential

Provenance and secure custody are unique assurances provided by verifiable credentials, but I think the greater power of this technology lies in the depth of the metadata it provides.

VCs deliver rich ‘fine print’ about all sorts of things including the credential, the issuer, the wallet or container, and the way in which it was presented, all of which is reliably bound together through digital signatures.

So, whenever you use a VC to access a resource or sign a piece of work, you leave behind an indelible mark that codifies the history of your credential.

As mentioned, a credential is issued through a formal process, and is recognized by a community of interest as signifying the suitability of its holder for something.

For a person to hold a verifiable credential in a personal cryptographic wallet, a series of specific steps must have taken place.

First and foremost, the Issuer will satisfy itself that the Subject meets all the credentialing requirements. A VC usually includes a public key unique to the Subject and their wallet; this binding to a physical device means the Issuer can be sure that it hands out its credentials only to the correct individuals. It also allows the Issuer to specify the precise type of device(s) used to carry its credentials, all the way down to smartphone model and biometric performance if those things matter under the Issuer’s security policy.

Virtual credit cards in digital wallets

Continuing our look at credit cards as credentials, the provisioning of virtual credit cards to mobile wallets illustrates the degree of control that a VC issuer has over the end-to-end process.

Typically, a virtual credit card is provisioned to a digital wallet via a mobile banking app running on the same device. Banks control how their apps are activated. Almost anyone can download a banking app from an app store, but only a genuine customer can get the app to do anything, following their bank’s prescribed activation steps (which might include entering account-specific details, calling a contact centre, or even visiting a branch for additional checks). Only then will the bank send secure instructions to the device to load a virtual card. The customer will need to unlock their phone (by biometric or PIN) to complete the load.

Behind the scenes, any bank offering mobile phone credit cards must have also made prior arrangements with the phone manufacturer to gain access to the hardware. Apple and Google (the major digital wallet platforms) undertake rigorous due diligence so that only legitimate banks are granted this all-important power.

All this history is coded as metadata into the verifiable credential. When a merchant point-of-sale system receives a signed payment instruction from a digital wallet, we can all be sure that:

  • the digital wallet has been unlocked by someone who controls the phone
  • the credit card is genuine and was issued by the bank indicated in the credential
  • the card was loaded to the wallet by a customer who was approved to use the mobile banking app and was authenticated to do so (making it highly likely that the mobile phone customer and the cardholder are the same person)
  • the cardholder is a registered customer of the bank and has passed that bank’s KYC processes.

The VC can include the type of phone it is carried in; it is even possible for the VC to record whether the virtual card was issued remotely or in person.

Minimalist VCs

The acute problem with online authentication today, often given the catch-all label “identity theft”, arises from the use of plaintext credentials and identifiers.

There are countless scenarios where a counterparty needs to know you have a particular credential, but if the only evidence you can provide is a plaintext number, then businesses and individuals alike are sitting ducks because so many identifiers have been stolen in data breaches and traded on black markets.

The simplest, lowest risk solution is to conserve the important IDs we are all familiar with, but harden them in digital form, so they cannot fall into criminal hands.

That might sound complicated, but we have done it before!

The payment card industry transitioned from magnetic stripe to chip for the same reason: to eliminate plaintext data.  Chip cards present cardholder data through digitally signed verifiable messages — making them one of the earliest examples of verifiable credentials.

Digital wallets use the same technology as chip cards and are rapidly taking over from plastic. The Reserve Bank of Australia reports that well over one third of card payments by Australian consumers are now made through mobile wallets. Similar rates of digital wallet take-up have been seen post-COVID around the world.

Through the course of this technology upgrade, the meaning and business context of credit cards were unchanged. The conservation of credentialing processes was key to the chip revolution.

Minding your business!

In any digital transformation, it is not the new technology that creates the most cost, delay and risk; rather, it’s the business process changes. The greatest benefit of verifiable credentials is that they can conserve the meaning of the IDs we are all familiar with, and all the underlying rules.

The real power of VCs lies not in what they change but what they leave the same!

A minimalist verifiable credential carrying a government ID means nothing more and nothing less than the fact that the holder has been issued that ID. By keeping things simple, a VC avoids disturbing familiar trusted ways of dealing with people and businesses. 

Minimalist VCs could be issued by governments almost immediately, to carry for example social security numbers, national IDs, Medicare numbers, immigration status and/or health information, as applicable. Digital workflows created using these VCs are faster, more accurate, more durable, lower cost, and much, much harder to impersonate, counterfeit or tamper with.

Powerful digital wallets are being rapidly embraced by consumers; modern web services are able to receive credentials from standards-based devices. We are ready to transform all important IDs from plaintext to verifiable credentials. Most people now could present any important verified data with a click in an app, with the same convenience, speed and safety as showing a payment card. With no change to backend processes and credentialing, we would cut deep into identity crime and defuse the black market in stolen data. 

 


SentinelOne CEO: 'We're seeing distinct rise in customer interest' after CrowdStrike

SentinelOne CEO Tomer Weingarten said the company is seeing "a distinct rise in customer interest and appreciation" for its Singularity Platform following the CrowdStrike outage.

Weingarten outlined multiple thoughts about the cybersecurity industry following the CrowdStrike outages. SentinelOne, a smaller rival to CrowdStrike and Palo Alto Networks, said its second quarter demand was broad based. The company reported a second quarter net loss of $69.18 million, or 22 cents a share, on revenue of $199 million. Non-GAAP earnings in the second quarter were a penny a share.

In the shareholder letter, Weingarten said:

"Little has changed in recent months from a macroeconomic perspective, yet a lot has changed in the cybersecurity industry. These are unprecedented times—the frequency, complexity, and costs of cyberattacks are reaching new heights. At the same time, the performance shortcomings of other market offerings are becoming visible to the public."

CrowdStrike outage fallout:

Obviously referring to CrowdStrike's outage along with Microsoft, Weingarten said:

"The latest global IT outage highlights the significance of platform architectures, process controls, and building resilient security operations. The scale and disruption caused by this incident is a stark reminder of the risks posed by vendor concentration. The cost of protection should never exceed the consequences of a breach.

A key lesson our industry has learned is the importance of product architecture. Understandably, customers and partners are now looking for better platform architectures and building more resilient cyber-defenses. Some of the largest enterprises in the world are now evaluating and appreciating the Singularity platform’s breadth and superiority relative to the competitive offerings – and they are positively surprised."

Speaking on an earnings call, Weingarten said enterprises are looking to diversify their cybersecurity vendors including CrowdStrike. He said:

"We've already seen customers choosing to move away. Some of them have moved away already to SentinelOne, some of them are in the process, some of them will take time to assess, but I think everybody is considering their next steps. And obviously, as you can imagine, that bodes well for SentinelOne. With that, I would also be mindful that sales cycles take nine to 12 months. Nobody wants to rip out something immediately. Some folks do, but that's not the majority.

I think for the rest of the customer base, just decisions -- they're going to play out over time. I think people are looking at us, obviously, the number one alternative. People are looking to diversify risk and not really concentrate more and more capabilities with one vendor."

SentinelOne's comments come a week after Palo Alto Networks reported earnings. Palo Alto Networks CEO Nikesh Arora commented on the CrowdStrike outage. He said:

“That was a tough event that simultaneously impacted tens of millions of users, which is unfortunate. I appreciate the way CrowdStrike handled it, but at the same time, it caused two things to happen. One, customers are asking us, ‘if you have the same product, how do you deploy?’ We have a fundamentally different way with updates. We were able to articulate that, and even though some customers were busy remediating that issue, we got our deals done with them. It's kind of interesting. The other thing the outage did was cause customers to step back and say, ‘wait a minute. I need to make sure that I'm evaluating all the XDR opportunities in the market.’ It's exciting because customers are willing to give us consideration in the XDR space.”

As for the outlook, SentinelOne projected third quarter revenue of $209.5 million and fiscal 2025 revenue of $815 million.

 


Box Q2 strong, raises outlook for Q3

Box reported better-than-expected second quarter results, with revenue growth of 3% despite currency headwinds, along with improvement in remaining performance obligations (RPO) and billings. Box also raised its outlook for the third quarter.

The company reported second quarter earnings of 10 cents a share (44 cents a share non-GAAP) on revenue of $270 million, up 3% from a year ago. Wall Street was expecting Box to report non-GAAP earnings of 41 cents a share on revenue of $269.2 million. A third of Box's revenue is generated outside the US and 60% of that business is in Japanese Yen. In constant currency, Box's revenue growth would have been 6%.

As for the outlook, Box projected third quarter revenue between $274 million and $276 million with non-GAAP earnings of 41 cents to 42 cents a share. For the third quarter, analysts were modeling Box to report non-GAAP earnings of 39 cents a share on revenue of $270.72 million.

Box also said it repurchased 3.9 million shares for $102 million. The company said it is allocating another $100 million to the stock buyback effort.

For fiscal 2025, Box said revenue will be between $1.086 billion and $1.09 billion, up 5% from a year ago. Non-GAAP earnings will be between $1.64 and $1.66 a share.

In a statement, Box CEO Aaron Levie said the company's Box AI and acquisition of Alphamoon means the company can address more use cases. "The Box Intelligent Content Cloud can now support more use-cases across the enterprise than traditional ECM, dramatically expanding our market opportunity," he said.

Box recently named Tricia Gelman CMO. Gelman had been a marketing executive at Salesforce and Adobe.


Cerebras Systems launches Cerebras Inference, touts performance gains over Nvidia H100 systems

Cerebras Systems, a startup focused on building AI systems and processors, launched Cerebras Inference, which the company claims is 20 times faster than Nvidia GPU-based instances in hyperscale clouds.

The startup has filed confidential plans to go public in the second half of 2024. Cerebras Inference is available via Cerebras Cloud or as an on-premises system. The Cerebras news lands as Nvidia is highlighting how it is optimizing software to boost performance of its GPUs and integrated stack. Meanwhile, AMD bought ZT Systems to build out its AI infrastructure designs.

According to Cerebras, Cerebras Inference delivers 1,800 tokens per second with Llama 3.1 8B and 450 tokens per second for Llama 3.1 70B. Cerebras Inference starts at 10 cents per million tokens.

In a blog post, Cerebras outlined how Cerebras Inference differs from existing architectures. While much of the focus for generative AI revolves around training models, inference is going to constitute a large chunk of workloads. Those inference workloads will be allocated based on business needs and price/performance. For instance, AWS with its Trainium and Inferentia chips, Google Cloud with its TPUs and other players such as AMD may look appealing vs. Nvidia.

Constellation Research analyst Holger Mueller said:

"We are in the custom hardware acceleration phase of AI – and we see what can be done by Cerebras. Holding complete models in SRAM provides better performance for inference and is likely going to change what the go-to inference architecture is going to be. And this development is one more data point illustrating how training and inference platforms are diverging on the spec side. The risk for custom inference platforms is getting smaller by the quarter, as the transformer model remains the model of choice for genAI, while the inference market keeps doubling quarter after quarter: There need to be platforms that run all the inference."

Cerebras in its post compared its Cerebras Wafer Scale Engine 3, its AI processor, to Nvidia H100 systems. Note that Nvidia is rolling out H200 systems now, with Blackwell to follow. Cerebras' CS-3 system is an integrated AI stack that can be scaled out.

Cerebras' inference service is available in three tiers: a free tier with API access and generous usage limits; a developer tier with models priced at 10 cents and 60 cents per million tokens; and an enterprise tier that includes fine-tuned models, custom service-level agreements and dedicated support.

Meanwhile, the company has also been building out its executive team. Cerebras added Paul Auvil and Glenda Dorchak to its board of directors. Auvil was previously CFO of VMware and Proofpoint. Dorchak is a former IBM, Intel and Spansion executive. Both were added for their technology and corporate governance experience. In addition, Cerebras named Bob Komin CFO. He previously served as CFO of Sunrun, Flurry and Tellme Networks.
