
The real reason Windows AI PCs will be interesting

Microsoft and its merry band of PC makers launched AI PCs at scale. The launch of Copilot+ PCs, ahead of Microsoft Build 2024, was notable for a host of reasons beyond executives' obsession with outperforming Apple's MacBooks.

Consider:

  • The first Copilot+ PCs will launch with Qualcomm's Snapdragon X Elite and Snapdragon X Plus processors. The rollout of AI PCs is big for Qualcomm and the Arm PC ecosystem.
  • Microsoft did say that it will have AI PCs powered by AMD and Intel paired with Nvidia and AMD GPUs. AMD CFO Jean Hu said at an investor conference that the chipmaker will outline its AI PC processors in "coming weeks." "It's a very exciting product and very competitive to power the AI applications in the PC market. We do believe AI PC is a very significant inflection point. It will potentially help refresh the PC market," said Hu.
  • Businesses can easily add AI PCs to the mix with the same controls they do today. Copilot+ PCs are billed as a productivity and collaboration booster.
  • Copilot+ PCs have neural processor units (NPUs) capable of more than 40 trillion operations per second. In other words, a lot of inferencing work can be done on the edge.
  • During his Build 2024 keynote, Microsoft CEO Satya Nadella said there are more than 40 models included in Copilot+ PCs, including its new Phi-3 models, OpenAI's GPT-4o and other small language models as well as large ones. "We have 40 plus models available out of the box including Phi-3 Silica, our newest member of our small language model family designed to run locally on Copilot+ PCs to bring that lightning-fast local inference to the device. The copilot library also makes it easy for you to incorporate RAG (retrieval-augmented generation) inside of your applications on the on-device data," said Nadella.
  • Features like Recall turn your PC into a recorder of information, relationships and associations. Recall can help you remember items and create individualized experiences. Cocreator and Live Captions are additional AI-driven features.

Add it up and CIOs are going to start testing these Copilot+ PCs and pondering their refresh cycles. Perhaps there's a future of work or productivity play to be had. The reality is that CIOs probably have more business transformation projects with higher priority rankings.

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly and is brought to you by Hitachi Vantara.

But what's surprising is how the big picture was glossed over with the Copilot+ PC launch. If these AI PCs scale, you're going to have a lot of computing power at the edge that can be used for inferencing and potentially even training.

Think peer-to-peer networked model inference. Smartphones are increasingly building in the capacity for AI models. Now you add in PCs. Generative AI can be decentralized from the AI factory vision that's popular today. Generative AI requires costly computing infrastructure. Why wouldn't you offload some of that work to the nodes on the edge?

For businesses, this emerging distributed AI computing system could mean using the PC to run models on personally identifiable information (think mortgage applications) and build personalized apps on the fly. You could automate any process that touches a customer using that person's compute power.
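
The offload idea in the paragraphs above can be sketched as a toy scheduler: send an inference job to an AI PC with spare NPU capacity, and fall back to a central cloud endpoint when none fits. The class, node names and TOPS figures here are hypothetical illustrations, not any vendor's API.

```python
import random

class EdgeInferenceRouter:
    """Illustrative sketch of peer-to-peer inference offload.
    Node names and TOPS capacities are hypothetical."""

    def __init__(self, cloud_endpoint):
        self.cloud_endpoint = cloud_endpoint
        self.edge_nodes = {}  # node_id -> spare NPU capacity in TOPS

    def register(self, node_id, spare_tops):
        self.edge_nodes[node_id] = spare_tops

    def route(self, required_tops):
        # Prefer any registered edge node with enough spare capacity;
        # otherwise fall back to the cloud endpoint.
        candidates = [n for n, cap in self.edge_nodes.items() if cap >= required_tops]
        return random.choice(candidates) if candidates else self.cloud_endpoint

router = EdgeInferenceRouter(cloud_endpoint="cloud")
router.register("ai-pc-1", spare_tops=40)  # Copilot+-class NPU, idle
router.register("ai-pc-2", spare_tops=10)  # partially busy node
print(router.route(required_tops=25))   # only ai-pc-1 qualifies
print(router.route(required_tops=100))  # no edge node fits -> "cloud"
```

A production version would obviously need discovery, security and data-governance layers, but the routing decision itself is this simple.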

Are we there yet? Not really, since these AI PCs just launched and it's doubtful any of these features will cause a buying frenzy. But many PCs are already past their normal shelf life, so the refresh cycle will drive some demand.

And the economic incentives are there to figure out a more peer-to-peer computing approach. At some point, enterprises are going to grow weary of paying up for AI compute. Finding a way to leverage AI PCs in the field could be a salve. There's a reason Dell Technologies considered AI PCs as part of its AI factory vision.


Everything you need to know about Qiskit 1.0

#QuantumComputing holds significant potential for transforming industries but requires robust and user-friendly #development tools. That's why IBM designed Qiskit 1.0, a comprehensive, open-source #software stack to streamline the journey into #quantum computing.

At IBM #Think2024, Constellation analyst Holger Mueller talks with Blake Johnson, Quantum Engine Lead at IBM Quantum about Qiskit 1.0's key advantages for quantum computing:

📌 Complete #software stack for streamlined development
📌 Enhanced performance and stability for complex computations
📌 Intuitive, user-friendly design for constructing quantum #applications
📌 Open-source accessibility and easy installation

🔎 To learn more about IBM Qiskit 1.0, visit: https://lnkd.in/gcHEKZkf

On ConstellationTV: https://www.youtube.com/embed/ft_7I6KfOso

Quantum Use Cases 2024 & Beyond

With all the hype around #AI advancements, it's easy to overlook the opportunity in #quantum computing. But quantum #technology addresses complex computational problems far beyond the reach of classical computing, and its potential is only beginning to be realized.

At IBM #Think2024, Constellation analyst Holger Mueller talks with Heather Higgins, Partner of Industry and Technical Services at IBM Quantum about...

💡 Why quantum is important for the #enterprise
💡 Relevant #customer use cases
💡 How #quantumcomputing can fit into existing enterprise systems

Watch the full interview below ⬇

🔎 For more resources on quantum, access the IBM Quantum Decade Book: https://lnkd.in/d2Qz4Bgc

On ConstellationTV: https://www.youtube.com/embed/c_yFge6Eeq0

Intuit sees 'green shoots' from its generative AI strategy

Intuit CEO Sasan Goodarzi said generative AI is increasing the company's total addressable market as TurboTax users flocked to its data, AI and virtual expert platform. Now that the company has learned from its genAI efforts, Intuit will double down on what's working.

"TurboTax Live revenue is expected to be $1.4 billion, representing approximately 30% of total consumer revenue and growing at a significant scale. This gives us confidence that we can digitize a very manual, disaggregated and high-priced assisted category," said Goodarzi on Intuit's third quarter earnings call.

Under Goodarzi, Intuit built out its data platform on AWS and then scaled into generative AI. He said genAI financial assistance played a big role in the TurboTax experience. Goodarzi added that 24 million customers used Intuit's AI to explain refunds, answer questions and check for accuracy.

Customer story: Intuit’s Bet on Data, AI, AWS Pays Off Ahead of Generative AI Transformation

"GenAI is working at scale for both our customers and our AI powered experts. I'm excited about what we're working on for next season to accelerate innovation and deliver even more customer benefit," he said.

Now that Intuit has one tax season of generative AI enhancements in the books, it can revamp experiences and add new models as needed via Amazon Bedrock and other services. The current generative AI models are much more advanced than a year ago. 

Goodarzi said Intuit is looking to AI to reinvent customer experiences and drive revenue growth. He reiterated the big bets for genAI on TurboTax and QuickBooks.

"To increase our investments in the outline focus areas given the green shoots we're observing, we are taking a hard look at what we can stop doing and where we can reallocate investments to accelerate top line growth while remaining committed to delivering operating margin expansion in fiscal year 2025 and beyond," said Goodarzi.

Goodarzi's remarks came on Intuit's third quarter earnings call. The company reported third-quarter revenue of $6.7 billion, up 12% from a year ago, with earnings of $8.42 a share. Non-GAAP earnings were $9.88 a share.

As for the outlook for the fiscal year, Intuit expects TurboTax Live revenue to grow 17% to $1.4 billion and average revenue per return to increase 10%.

Overall, Intuit said its fiscal 2024 revenue will be between $16.16 billion and $16.2 billion, up 13%. Non-GAAP earnings will be $16.79 to $16.84 a share. Small business and self-employed group revenue will be up 18% and consumer revenue up 7% to 8%.


Workday reports strong Q1, says 25 generative AI use cases on roadmap

Workday reported a strong first quarter as the company leveraged AI within its platform and continued to become more efficient.

The company reported first quarter net income of 40 cents a share, non-GAAP earnings of $1.74 a share and revenue of $1.99 billion, up 18.1% from a year ago. Wall Street was expecting Workday to report earnings of $1.58 a share on revenue of $1.97 billion.

Workday said its 12-month subscription revenue backlog was $6.6 billion, up 17.9% from a year ago. Total subscription revenue backlog was $20.68 billion, up 24.2% from a year ago.

Carl Eschenbach, Workday's CEO, said customers are looking to manage through the "shifting talent landscape, and pressure to realize operational efficiencies."

According to Workday, the company has more than 50 AI use cases in production and 25 generative AI use cases on the roadmap. Workday also expanded a partnership with AWS and formed a partnership with Google Cloud.

Workday CFO Zane Rowe said, "We were pleased with our progress across key growth initiatives in Q1, which help build a foundation for long-term growth." He added: "Our updated subscription revenue guidance reflects the elevated sales scrutiny and lower customer headcount growth we experienced during the quarter."

As for the outlook, Workday projected fiscal 2025 subscription revenue of $7.7 billion to $7.725 billion, up 17%, with non-GAAP operating margins of 25%. For the second quarter, Workday projected subscription revenue of $1.895 billion, up 17%, with non-GAAP operating margins of 24.5%.

Insights:

On a conference call with analysts, Eschenbach said the first quarter is typically Workday's slowest. He said Workday outperformed in healthcare, public sector and financials. 

"When purchase decisions are being made, our win rates remain strong but within the quarter, we experience increased deal scrutiny as compared to prior quarters. And we are seeing customers committing to lower headcount levels on renewals compared to what we had expected," he said. "We expect these dynamics to persist in the near term, which is reflected in our revised FY 25 subscription revenue guidance. While we can't control the macro, we are focusing on what's in our control. And that is innovation, scaling our go to market engine and partner ecosystem and delivering customer value."

Workday executives also said they are often selling full platform plays and that the sales cycle is longer.

Eschenbach said more than 90% of the company's nearly 2,000 financial customers also have HCM. 

On AI, Eschenbach said:

"We've built AI into the core of our platform, which means AI features and functionality are embedded natively in all of our applications. And with more than 65 million users generating more than 800 billion transactions per year on our platform, the volume of clean, trusted data that Workday and our ecosystem can leverage for AI is truly unmatched. We continue to make significant investments to further enhance our leadership in this area and deliver trusted and responsible AI innovations that drive meaningful business results. We now have more than 50 AI use cases live in production and 25 generative AI use cases on our roadmap."

These use cases are primarily content generation, job descriptions and knowledge base articles as well as insights for things like payroll and talent optimization.

Eschenbach said Workday is building out its partner ecosystem and go-to-market efforts. Workday is also building out its ecosystem via Extend, a marketplace for Workday platform applications. Workday's AI marketplace will go live next month with AI tools from the company and third parties. 

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"Workday had a good first quarter, just missing the $2 billion overall revenue milestone by $10 million. And with that it grew itself back to a profit compared to last year's quarter. But margins are tight, with net income of just over $100 million. The good news is that Workday keeps innovating, with generative AI use cases coming soon, as well as the addition of Australia with native payroll. Now all eyes are on Q2. Can Workday keep growing, break the $2B in revenue and control costs?"

 


Epicor Prism and AI Announcements | Live Interview from Epicor Insights 2024

Constellation Research's R "Ray" Wang sits down with Arturo Buzzalino, VP of Products and Innovation at Epicor, to talk through AI announcements from Epicor Insights 2024 and the company's new product, Epicor Prism.

On ConstellationTV: https://www.youtube.com/embed/LMUkqHg4Tv4

A creative response to Generative AI

With Generative AI being used to imitate celebrities and authors, the question arises: is your likeness a form of intellectual property (IP)? Can you copyright your face or your voice?

These questions are on the bleeding edge of IP law and may take years to resolve. But there may be a simpler way to legally protect appearances. On my reading of technology-neutral data protection law — widespread internationally and now rolling out across the USA — generating likenesses of people without their permission could be a privacy breach.

Let’s start with the generally accepted definition of personal data as any data that may reasonably be related to an identified or identifiable natural person.

Personal data (sometimes called personal information) is treated in much the same way by the California Privacy Rights Act (CPRA), Europe’s General Data Protection Regulation (GDPR), Australia’s Privacy Act, and the new draft American Privacy Rights Act (APRA).

These sorts of privacy laws place limits on how personal data is collected, used and disclosed. If personal data is collected without a good reason, or in excess of what’s reasonable for the purpose, or without the knowledge of the individual concerned, then privacy law may be breached.

Technology neutrality in privacy law means it does not matter how personal data is collected. In plain language, if personal data comes to be in a storage system, then it has been collected. Collection may be done directly via forms, questionnaires and measurements, or indirectly by way of acquisitions, analytics and algorithms.

To help stakeholders deal with the rise of analytics and Big Data, the Australian privacy regulator has developed additional guidance about indirect collection of personal data:

“The concept of ‘collects’ applies broadly, and includes gathering, acquiring or obtaining personal information from any source and by any means. This includes collection by ‘creation’ which may occur when information is created with reference to, or generated from, other information” (emphasis added; ref: Guide to Data Analytics and the Australian Privacy Principles, Office of the Australian Information Commissioner, 2019).

How should privacy law treat facial images and voice recordings?

What are images and voice recordings? Simply, these are data (‘ones and zeros’) in a file which represent optical or acoustic samples that can be converted back to analog to be viewed or heard by people.

Now consider a piece of digital text. That too is a file of ones and zeros, this time representing coded characters, which can be converted by a printer to be human readable. If the words thus formed are identifiable as relating to a natural person, then the file constitutes personal data.

So if any data file can be rendered as an image or sound which is identifiable as relating to a natural person (that is, the output looks like someone) then the file is personal data about that person.

Under technology neutral privacy law, it doesn’t matter how the image or sound is created. If data generated by an algorithm is identifiable as relating to a natural person (for example, by resembling that person) then that data is personal data, which the Australian privacy commissioner would say has been collected by creation. The same sort of interpretation would be available under any similar technology-neutral data protection statute.

If a Generative AI model makes a likeness of a real-life individual Alice, then we can say the model has collected personal information about Alice.

I am not a lawyer but this seems to me to be easy enough to test in a ‘digital line up’. If a face or voice is presented to a sample of people, and an agreed percentage of them say it reminds them of Alice, then the evidence would suggest that personal data of Alice has been collected.
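
The proposed line-up amounts to a simple tally against an agreed threshold. A minimal sketch, where the function name, the 0/1 response encoding and the 50% default are all hypothetical illustrations rather than any legal standard:

```python
def likeness_identified(responses, threshold=0.5):
    """Hypothetical 'digital line-up' tally: each response is 1 if a
    panelist says the generated face or voice reminds them of Alice,
    else 0. Returns True when the agreed threshold is met."""
    if not responses:
        return False
    return sum(responses) / len(responses) >= threshold

# 7 of 10 panelists recognize Alice in the generated likeness
print(likeness_identified([1, 1, 1, 0, 1, 0, 1, 1, 0, 1]))  # True
print(likeness_identified([0, 0, 1, 0]))                    # False
```

The hard questions (panel size, how the threshold is agreed, how the stimulus is presented) are evidentiary and legal, not computational.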

In any jurisdiction with technology-neutral privacy law, that might be a breach of Alice's rights.


Nvidia Q1 shines, splits stock 10-for-1 amid data center boom

Nvidia reported first quarter sales growth of 262% from a year ago, posted record quarterly data center revenue and split its stock 10-for-1 effective June 7.

The company reported first quarter earnings of $5.98 a share on revenue of $26 billion. Non-GAAP earnings were $6.12 a share. Wall Street was expecting Nvidia to report first quarter earnings of $5.54 a share on revenue of $24.6 billion.

CEO Jensen Huang said the AI factory upgrade cycle has begun. He said:

"Our data center growth was fueled by strong and accelerating demand for generative AI training and inference on the Hopper platform. Beyond cloud service providers, generative AI has expanded to consumer internet companies, and enterprise, sovereign AI, automotive and healthcare customers, creating multiple multibillion-dollar vertical markets."

During the quarter, Nvidia lined up a bevy of partnerships, including one with Dell Technologies that advances the AI factory concept. Huang said the Dell Technologies AI factory effort will be the largest go-to-market partnership the GPU maker has. "We have to go modernize a trillion dollars of the world's data centers," said Huang.

As for the outlook, Nvidia projected second quarter revenue of $28 billion with non-GAAP gross margins of 75.5%, an outlook that indicates Nvidia has pricing power.

By business unit, data center revenue surged 23% sequentially and 427% from a year ago. Gaming and AI PC revenue in the first quarter was $2.6 billion, up 18% from a year ago. Professional visualization revenue in the first quarter was up 45% from a year ago, and automotive and robotics sales were up 11%.

In prepared remarks, CFO Colette Kress said:

"Data Center compute revenue was $19.4 billion, up 478% from a year ago and up 29% sequentially. These increases reflect higher shipments of the NVIDIA Hopper GPU computing platform used for training and inferencing with large language models, recommendation engines, and generative AI applications. Networking revenue was $3.2 billion, up 242% from a year ago on strong growth of InfiniBand end-to-end solutions, and down 5% sequentially due to the timing of supply. Strong sequential Data Center growth was driven by all customer types, led by Enterprise and Consumer Internet companies. Large cloud providers continued to drive strong growth as they deploy and ramp NVIDIA AI infrastructure at scale, representing mid-40% of our Data Center revenue."

Constellation Research's take and conference call takeaways

Constellation Research analyst Holger Mueller said:

"Nvidia had another blowout quarter with surreal YoY comparisons. If you want to see the AI boom in a financial statement, look up the Nvidia earnings. But all things come to an end – Nvidia is only guiding to <10% QoQ growth, which is half of this quarter's QoQ growth. The question is what is slowing Nvidia down – demand or supply? It could also be cloud vendors holding their CAPEX spend in anticipation of Blackwell. One thing is clear: for Nvidia to be Nvidia, it needs Blackwell to be a success."

Key items from the conference call include:

  • Inferencing accounts for a mid-40s percentage of data center revenue.
  • Nvidia is speaking to total cost of ownership, which AMD is also actively discussing. Kress said: "Training and inferencing AI on NVIDIA CUDA is driving meaningful acceleration in cloud rental revenue growth, delivering an immediate and strong return on cloud providers' investment. For every $1 spent on NVIDIA AI infrastructure, cloud providers have an opportunity to earn $5 in GPU instance hosting revenue over four years."
  • Those cloud returns also pay off for end customers. Kress said cloud rentals remain the lowest cost way to train models as well as inferencing workloads. 
  • Enterprises are showing strong demand with Meta and Tesla cited as key customers for Nvidia infrastructure. 
  • Sovereign AI is expected to drive revenue in the "high single-digit billions this year."
  • H200 was sampled in the first quarter with shipments in the second. Huang said production shipments will ramp in the third quarter with data centers stood up in the fourth quarter. OpenAI got the first system. "We will see a lot of Blackwell revenue this year," said Huang.
  • Kress again cited cost benefits with Nvidia HGX H200 servers: "For example, using Llama 3 with 700 billion parameters, a single NVIDIA HGX H200 server can deliver 24,000 tokens per second, supporting more than 2,400 users at the same time. That means for every $1 spent on NVIDIA HGX H200 servers at current prices per token, an API provider serving Llama 3 tokens can generate $7 in revenue over four years."
  • Grace Hopper Superchip is shipping in volume. 
  • Blackwell systems are backward compatible, so the transition from H100 to H200 will be seamless. That said, supply will remain an issue. Huang said: "We expect demand to outstrip supply for some time as we now transition to H200, as we transition to Blackwell. Everybody is anxious to get their infrastructure online. And the reason for that is because they're saving money and making money, and they would like to do that as soon as possible."
  • We're 5% into the AI data center buildout. Huang said: "We're in a one-year rhythm. And we want our customers to see our roadmap for as far as they like, but they're early in their build-out anyways and so they had to just keep on building. There's going to be a whole bunch of chips coming at them, and they just got to keep on building and just, if you will, performance average your way into it. So that's the smart thing to do. They need to make money today. They want to save money today. And time is really, really valuable to them."
  • Ethernet networking is a growth market for Nvidia. Huang said: "For companies that want the ultimate performance, we have InfiniBand computing fabric. InfiniBand is a computing fabric, Ethernet is a network. And InfiniBand, over the years, started out as a computing fabric, became a better and better network. Ethernet is a network and with Spectrum-X, we're going to make it a much better computing fabric. And we're committed -- fully committed to all three links, NVLink computing fabric for single computing domain to InfiniBand computing fabric, to Ethernet networking computing fabric. And so we're going to take all three of them forward at a very fast clip."
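
As a sanity check on the token economics Kress cited, the stated throughput works out to roughly 10 tokens per second per concurrent user. Only the 24,000 tokens/sec and 2,400 concurrent users come from the call; the full-utilization assumption and per-token price below are hypothetical, not Nvidia figures.

```python
# Back-of-the-envelope arithmetic on the HGX H200 / Llama 3 figures.
tokens_per_second = 24_000   # stated server throughput
concurrent_users = 2_400     # stated concurrent users

per_user = tokens_per_second / concurrent_users
print(per_user)  # 10.0 tokens/sec per user

# Four years at full utilization (hypothetical assumption)
seconds_in_four_years = 4 * 365 * 24 * 3600
lifetime_tokens = tokens_per_second * seconds_in_four_years
print(f"{lifetime_tokens:.2e}")  # ~3.03e+12 tokens

price_per_million_tokens = 1.00  # hypothetical $ per 1M tokens served
implied_revenue = lifetime_tokens / 1e6 * price_per_million_tokens
print(f"${implied_revenue:,.0f}")
```

The actual revenue multiple obviously depends on real utilization and prevailing API pricing, which is why Kress's $7-per-$1 claim is a marketing figure rather than a guarantee.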

Snowflake Q1 mixed, acquires assets to evaluate, monitor LLMs

Snowflake reported a mixed first quarter and said it will acquire technology assets and employees from TruEra, an AI observability platform that can manage and evaluate large language models (LLMs).

The company reported a net loss of $317 million, or 95 cents a share, on revenue of $828.7 million. Non-GAAP earnings for the first quarter were 14 cents a share.

Wall Street was expecting Snowflake to report first quarter non-GAAP earnings of 17 cents a share on revenue of $785.9 million.

Snowflake’s big move this quarter was the launch of its Arctic large language model (LLM).

CEO Sridhar Ramaswamy said the company saw "strong customer interest" in its AI products. Product revenue was up 34% from a year ago, and Ramaswamy said its core business was strong.

As for the outlook, Snowflake projected second quarter product revenue of $805 million to $810 million, up 26% to 27% from a year ago. Product revenue in the first quarter was $780.6 million, up 34% from a year ago.


Constellation Research's take and conference call takeaways

Constellation Research analyst Holger Mueller said:

"Snowflake had another very strong quarter. But growth came at a price, and Snowflake went backwards when it comes to costs, deepening its loss. It became more expensive to run its offerings, as cost of revenue rose linearly with overall revenue growth. Snowflake keeps managing on the best practice of startups before the interest rate hikes, with the cost structure of last fiscal year leading to a nice profit with the revenue of this fiscal year. We will see if Snowflake can keep it up. Its competitor Databricks is the all-encompassing lakehouse of the cloud vendors, the foundation of analytics and the foundation for all forms of AI. Investors will have to watch how much Snowflake can become the AI data platform of enterprises in the coming genAI years – and the verdict is still out."

Key items from the Snowflake conference call include:

  • Ramaswamy acknowledged that Snowflake needs to become a data platform. He said: "We're still in the early innings of our plan to bring our world class data platform to customers around the globe. And in the first quarter alone, we saw some of our largest customers meaningfully increase their usage of our core offering. The combination of our incredibly strong data cloud, now powerfully boosted by AI, is the strength and story of Snowflake."
  • Snowflake said data sharing and collaboration with customers can drive growth. Ramaswamy said: "Nearly a third of our customers are sharing data products as of Q1 2025, up from 24% one year ago. Collaboration already serves as a vehicle for new customer acquisition. Through a strategic collaboration with Fiserv, Snowflake was chosen by more than 20 Fiserv financial institutions and merchant clients to enable secure direct access to their financial data and insights. We announced support for unstructured data over two years ago. Now about 40% of our customers are processing unstructured data on Snowflake. And we've added more than 1,000 customers in this category over the last six months."
  • Arctic model training was quick and cost effective. Ramaswamy said: "I'm comfortable with the amount of investments that we are making. Part of what we gain as Snowflake is the ability to fast follow on a number of fronts, is the ability to optimize against metrics that we care about, not producing like the latest, greatest, biggest model, let's say, for image generation. And so having that kind of focus lets us operate on a relatively modest budget pretty efficiently."

Cognizant CEO Kumar: GenAI hyperproductivity will need to be self-funded

Cognizant CEO Ravi Kumar said the technology services firm's customers are preparing for generative AI, but need to do work in quantifying productivity gains to justify costs. Nevertheless, Kumar said generative AI will usher in a hyperproductivity phase that will affect every worker, every process and every enterprise.

Speaking at Cognizant's Analyst Day in New York, Kumar covered how his company is planning for the future, scaling AI and solving long-running industry issues such as technical debt.

Kumar's big take is that generative AI will lower the cost of technology deployment and cut technical debt. "AI will drive a new productivity wave and the biggest industry to embrace AI will be technology. We will have to apply AI on ourselves and, as we apply it, the cost of technology deployment will change," said Kumar. "Discretionary spending in tech over the last 25 years has happened in a low interest rate regime where there was no cost of capital. Today you need a business use case for new projects."

In other words, generative AI needs to fund itself if it's really going to transform businesses.

That theme set the agenda for Cognizant's Analyst Day. Constellation Research analyst Doug Henschen said:

"Kumar, CEO of Cognizant since January 2023, took the reins of the company at a challenging time for the systems integration market. As he acknowledged during his opening keynote, deals are mostly focused on cost take out, vendor consolidation, and cost optimization these days rather than transformation or big bets on innovation. With that said, Kumar was optimistic about a next wave of growth triggered by hyper-productivity projects driven by AI. Kumar and Cognizant's CTO, Babak Hodjat, pointed to a next wave focused on carefully curated, multi-agent AI that will be focused, robust and credible. The focus is spot on, as GenAI and LLMs alone will not deliver what Kumar described as "responsible AI" that meets the acid test of addressing safety, trust, and equity simultaneously."

Constellation ShortList™ Public Cloud Transformation Services: Global | Constellation ShortList™ Custom Software Development Services | Constellation ShortList™ Customer Experience (CX) Operations Services: Global

The push and pull of generative AI adoption

Kumar spoke of a push and pull for generative AI demand that may take years to play out. Companies are going to have to step back, take a wider perspective, do the productivity studies and think through the orchestration behind generative AI.

When the cost of deployment changes you'll have a new productivity paradigm, said Kumar. "Take that technology cost out and there's an extraordinary opportunity to create hyperproductivity," he said.

"If productivity is driving generative AI, you need the studies to quantify the big use cases," said Kumar. He noted that Cognizant has funded studies and is using the framework internally. "We have an AI roadmap internally and have 200-plus use cases we have identified. We have a unique opportunity to be the reference stack," said Kumar. "You have to build the last mile. The last mile is the heavy lift."

Cognizant is using its own platforms such as Neuro AI, Skygrade and Flowsource, taking a "tech for tech" approach to development. The idea is to use Cognizant's platforms to drive innovation that's self-funded with productivity savings.

Kumar added that Cognizant has more than 500 prototypes in the pipeline for generative AI as well as another 500 that have graduated to production. "To have the license to talk to a client you have to apply generative AI on yourself. Our biggest use case for AI is deploying it in the technology development cycle," said Kumar. "We will use AI to disrupt the operating model for the company."

So, what's the holdup for this generative AI boom?

Kumar said enterprises are reluctant to go all in without quantifiable returns. He said:

"Why is it that companies are not jumping the gun on this? The business case doesn't stack up in most places. Let's take the example of copilots. Every company has done small cohorts of copilots but companies are not going all out because the productivity studies are not complete and comprehensive. Productivity will be the first lever and it is related to business transformation and process transformation."

To help break this logjam, Kumar said Cognizant is mapping enterprise operations, tasks and roles for enterprises and itself. "We are client zero on this," he said.
