Is AI data center buildout a case of irrational exuberance?

The cost of artificial intelligence inference and training will fall, and enterprises need to question the current groupthink that revolves around a never-ending data center buildout cycle.

Speaking on a panel at Constellation Research's Connected Enterprise conference, Brian Behlendorf, CTO at the Open Wallet Foundation and Chief AI Strategist at The Linux Foundation, said "there's a lot of irrational exuberance about the amount of investment that's going to be required both to train models and do inference on them."

Behlendorf said the capacity buildout is going to lead to indigestion.

"I see a lot of enterprises that are cutting other programs, laying off staff and doing everything to conserve capital to be able to collect all the data in the world and build dumb models that they don't really know what they're going to do with."

"Yet, the cost of training AI is going to come down dramatically. There are a raft of 10x improvements in training and inference costs, purely in software. We're also finding better-structured data leads to higher-quality models at smaller token sizes."

Behlendorf added that he expects commodity GPU hardware systems to emerge in the next few years. He noted that the idea that the industry is going to need nuclear reactors and an ongoing data center buildout cycle to train large language models is foolhardy.

"A more sober analysis is that you need to build capacity inside your organization at a personnel level and skills level on how to use these technologies and hold off on the massive expansion of data centers," he said.

The theme of the panel revolved around open-source models and their role in generative AI, but panelists agreed that costs will come down due to open technologies. The upshot is that the Nvidia-OpenAI hammerlock on generative AI isn't going to last.

Other key points from the panel include:

Data hoarding doesn't work. Much of the genAI buildout revolves around the idea that data demand is insatiable; the common view is that you can't have enough data. Jana Eggers, CEO of Nara Logics, disagreed:

"More data isn't going to solve your problem and the tech industry hasn't quite gotten it yet. Boards think that you should just go out and acquire more data."

Eggers said that enterprises need to profile the data they have and what's being acquired. Quality matters more than quantity. "Enterprises aren't even doing the basic checks on their own data or open data," said Eggers. "At the very start we tell our customers to profile their data."
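Those basic checks don't require much machinery. Below is a minimal sketch in Python of the kind of data profiling Eggers describes: null rates, distinct-value counts and duplicate rows over a toy record set. The field names and records are hypothetical, purely for illustration.

```python
# A minimal sketch of "profile your data first": per-field null rate,
# per-field distinct-value count, and a count of exact duplicate rows.
# The records and field names here are invented examples.

from collections import Counter

def profile(records, fields):
    """Return per-field null rate and distinct count, plus duplicate rows."""
    report = {}
    for f in fields:
        values = [r.get(f) for r in records]
        nulls = sum(1 for v in values if v in (None, ""))
        distinct = len({v for v in values if v not in (None, "")})
        report[f] = {"null_rate": nulls / len(records), "distinct": distinct}
    # Count rows that are exact copies of an earlier row.
    row_counts = Counter(tuple(sorted(r.items())) for r in records)
    report["duplicate_rows"] = sum(c - 1 for c in row_counts.values())
    return report

records = [
    {"customer_id": "c1", "region": "west", "spend": 120},
    {"customer_id": "c2", "region": "", "spend": 80},
    {"customer_id": "c1", "region": "west", "spend": 120},  # exact duplicate
]
report = profile(records, ["customer_id", "region", "spend"])
# e.g. report["region"]["null_rate"] is 1/3; report["duplicate_rows"] is 1
```

Even a crude pass like this surfaces the empty fields and duplicated rows that undermine model quality before any training spend.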

It remains to be seen how long the view that data hoarding pays lasts.

Open models will lower costs, but hygiene will be an issue. Brittany Galli, CEO of BFG Ventures, said open models will improve efficiencies in AI, but hygiene will be a problem. "There's a ton of bad data and it's causing a lot of problems. You think that open models equal more transparency and higher efficiency, but the problem is hygiene," said Galli. "There is no perfect model that's going to be more accurate and unbiased. It's going to take time."

Invest smartly because you have to invest in AI. "I think we're to that point with AI that we know it's needed and you have to build or buy or get run over," said Galli. "There are no other options."

Be aware of what's really open about models and frameworks. Behlendorf said AI builders need to read the fine print. "We need to apply rigor to the use of the word open around AI," he said. Behlendorf said so-called open models often have a series of restrictions and lack transparency.

Models will become more efficient and smaller, too. Jeff Welser, Vice President of IBM Research at the Almaden Lab, said smaller models and a wide selection of them will increase efficiency. "One reason you'll want open models is that you don't want to train them. You can choose to train for a specific portion or use case and then string them together," he said.

How leaders need to think about AI, genAI

Artificial intelligence and generative AI will tax leadership, but ultimately raise the bar for decision making.

Those were some of the takeaways from Cassie Kozyrkov, founder of Kozyr LLC, who delivered the keynote at Constellation Research's Connected Enterprise.

Kozyrkov is best known for founding the field of Decision Intelligence and serving as Google’s first Chief Decision Scientist, where she led the charge in Google’s transformation into an AI-first company. She's now an AI advisor for Gucci, NASA, Meta and others.

Here's a look at what leaders need to know about AI.

Every job is going to have some level of disruption due to AI. "Don't think in terms of jobs. Think in terms of what tasks of any given job are most likely to be disrupted," said Kozyrkov. "The key thing to understand is that every job has some component that's a little bit repetitive."

Repetitive work is going to be automated. "The repetitive and digitized task is the ideal target for AI automation when there aren't a lot of consequences for messing up performance," she said.

But automating repetitive work can hollow out your bench. "Here's the thing that I think too many people forget. When you hire somebody with no experience at all, the work you give them very early on is repetitive and easy to check. The perfect task for AI is also the perfect work for your intern or new graduate," said Kozyrkov. "A lot more of the junior person's work is going to get cannibalized by more senior folks."

Kozyrkov said:

"You should be preparing for what you're going to do with training your future cohort of leaders."

AI washing is trendy. "It is difficult to know what you're buying these days and what to expect--not only of the software systems but the complex collaboration between human and machine," she said.

Here's the test to see whether you're getting AI washing or actual machine learning. "If it's written in Python, it's probably machine learning. If it's written in PowerPoint, it's probably AI," said Kozyrkov. "Ask questions."

Strategy matters. AI models are just recipes, and a human engineer has to think hard about the problem, how to solve it and how to come up with instructions. "You need to understand the task to come up with those instructions," said Kozyrkov. "When humans teach each other, sometimes we use exact instructions. Sometimes we do it another way and teach with examples. Data is just examples."

Leaders prefer explicit instructions when control matters; for more complex work, examples matter more. AI will force leaders to raise the bar on performance and be comfortable with change--there's no choice.

AI decisions belong to leaders and domain experts, not just the mathematicians. "We have a problem with absentee leadership, where a lot of folks think that AI is the business of the PhDs of the world. It is the business of the leader, the decision maker, the domain expert, not the person who's good at mathematics," said Kozyrkov. "AI is the product of decision making, with some very subjective decisions made by whoever was in charge. The worst thing you can do is think AI is some independent entity that's objective."

Generative AI makes things more complicated for leaders. "Generative AI has more than one right answer and more than one wrong answer," she said. "Test everything and test it in context. Trust nothing you haven't tested and use it carefully."

Kozyrkov said:

"It is hard to set criteria for genAI, and you should always think in terms of who takes responsibility. That may be more of a limiting factor than some of the technology. So, at the end, is genAI an act of desperation or the frontier of innovation? It's absolutely both."

AI is a genie that can be a friend or foe. "What AI will absolutely do is raise the bar for your decision leadership. This is a genie that may grant you a wish, but we know the genie is dangerous," said Kozyrkov. "AI will demand more from us in the future. It is absolutely a leadership concern."


AMD's Q3 on target, data center unit revenue up 122% from a year ago

AMD continues to see its revenue surge due to its data center unit, which posted sales growth of 122% from a year ago.

The company reported third quarter net income of $771 million, or 47 cents a share, on revenue of $6.8 billion. Non-GAAP earnings were 92 cents a share.

AMD CEO Dr. Lisa Su said: "Record revenue was led by higher sales of EPYC and Instinct data center products and robust demand for our Ryzen PC processors."

  • Data center revenue was $3.5 billion, driven by AMD Instinct GPU shipments as well as AMD EPYC CPU sales.
  • PC revenue was $1.9 billion, up 29% from a year ago.
  • Embedded unit revenue was $927 million, down 25% from a year ago, and gaming revenue fell 69% from a year ago to $462 million.

As for the outlook, AMD projected fourth quarter revenue of $7.5 billion, give or take $300 million.

On a conference call, Su said:

  • "Data Center GPU revenue ramped as MI300X adoption expanded with cloud, OEM and AI customers. Microsoft and Meta expanded their use of MI300X accelerators to power their internal workloads in the quarter. Microsoft is now using MI300X broadly for multiple Copilot services powered by the family of GPT-4 models."
  • "Development on our MI400 series based on the CDNA Next architecture is also progressing very well towards a 2026 launch. We have built significant momentum across our data center AI business with deployments increasing across an expanding set of Cloud, Enterprise and AI customers. As a result, we now expect Data Center GPU revenue to exceed $5 billion in 2024, up from the $4.5 billion we guided in July and our expectation of $2 billion when we started the year."
  • "In the Data Center alone, we expect the AI accelerator TAM will grow at more than 60% annually to $500 billion in 2028. To put that in context, this is roughly equivalent to annual sales for the entire semiconductor industry in 2023."
  • "We feel very good about the market from everything that we see, talking to customers, there's significant investment in trying to build out the infrastructure required across all of the AI workloads." 
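Su's TAM claim is easy to sanity-check with compound-growth arithmetic. The sketch below works the forecast backward to the base-year market size it implies; treating 2024 as the base year (four compounding years to 2028) and using exactly 60% are our assumptions, since the quote says "more than 60%."

```python
# Back-of-envelope check of the claim that the AI accelerator TAM grows
# >60% annually to $500B in 2028. What base does that imply?
# Assumptions (ours, not AMD's): base year 2024, growth rate exactly 60%.

def implied_base(target, cagr, years):
    """Starting value implied by compounding back `years` periods at `cagr`."""
    return target / (1 + cagr) ** years

base_2024 = implied_base(500.0, 0.60, 4)  # billions of dollars
# base_2024 is roughly 76, i.e. a ~$76B market in 2024 reaching $500B in 2028
```

In other words, the forecast implies the 2024 AI accelerator market is already in the tens of billions, which is broadly consistent with AMD raising its own 2024 Data Center GPU outlook to more than $5 billion.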

Google Cloud Q3 revenue up 35% from a year ago, Alphabet results shine

Alphabet handily topped third-quarter expectations and its Google Cloud saw revenue growth of 35% from a year ago due to generative AI.

The company, which includes Google, Google Cloud and YouTube, reported third quarter net income of $26.3 billion, or $2.12 a share, on revenue of $88.27 billion, up 15% from a year ago.

Wall Street was expecting Alphabet to report third quarter earnings of $1.85 a share on revenue of $86.3 billion.

Google Cloud revenue was $11.4 billion, up 35% from a year ago. Alphabet said Google Cloud saw strength across AI infrastructure, genAI and core services.

CEO Sundar Pichai said the company's "long-term focus and investment in AI" are paying off. Specifically, Google Cloud is driving "deeper product adoption" with existing companies while landing larger deals and new enterprises.

Indeed, Google Cloud's operating income is accelerating. The company's third quarter operating income was $1.947 billion, up from $266 million a year ago.

Google's core services remain the cash cow with operating income of $30.86 billion.

Google Cloud has been picking up traction as it builds out services for the AI layer and also drills down into industries. 

Speaking on Alphabet's earnings conference call, CEO Sundar Pichai said Google Cloud's stack is paying off as enterprises leverage AI. 

Pichai said Alphabet continues to invest in AI infrastructure--including nuclear power for data centers. Google Cloud is also benefiting from workloads powered by new Nvidia GPUs as well as its own processors for AI workloads. CapEx in the fourth quarter will be similar to the third quarter tally of $13 billion. The largest component of that CapEx was servers, data centers and networking equipment. 

Takeaways from the conference call:

  • Google services are all leveraging Gemini models.
  • The company has unified teams in AI and machine learning to move faster. Pichai said the team behind NotebookLM highlights how smaller teams can move faster. "You'll see a rapid pace of innovation," he said.
  • Google is using AI to generate code that is then reviewed by engineers. 
  • Google is seeing search queries surge due to AI overviews and strong engagement as consumers ask longer and more nuanced questions. 
  • Circle to Search is available on more than 150 million Android devices. 
  • Google Cloud's ability to attract generative AI workloads is landing the company larger deals. 
  • Gemini API calls have grown 14x in the last six months. 

On a conference call with analysts, Pichai said the following:

  • "Customers use our AI platform together with our data platform, BigQuery, because we analyze multimodal data no matter where it is stored with ultra-low latency access to Gemini."
  • "Each week, Waymo is driving more than 1 million fully autonomous miles and serves over 150,000 paid rides. The first time any AV company has reached this kind of mainstream use."
  • "On the TPU front, I just spent some time with the teams on the road map ahead. I couldn't be more excited at the forward-looking road map, but all of it allows us to both plan ahead in the future and really drive an optimized architecture for it."
  • "We are in much more of a virtuous cycle with a lot of velocity in the underlying models. We've had two generations of Gemini models. We are working on the third generation, which is progressing well. And teams internally are now set up much better to consume the underlying model innovation and translate that into innovation within their products. There is an aggressive road map ahead for 2025."

Takeaways on successful AI, generative AI projects

Enterprise artificial intelligence projects--generative AI, agentic and everything in between--will often depend on old-school IT management techniques.

Here's a look at some of the takeaways from successful AI projects from a panel at Constellation Research's Connected Enterprise.

It's all about value and use cases. Laurie Wheeler, Chief Operating Officer, Information Services & Technology at MultiCare Health System, said AI projects need to be very clear about use cases and the value expected.

Find a champion. Wheeler, a BT150 inductee, said the projects that have done well featured "great partnerships with operations." "Having a physician champion was critical to success," said Wheeler.

Don't be wedded to a technology or ideology. Chris Claridge, Chair of Trust Alliance NZ, said stakeholders need to be future-oriented but leave technology ideology at the door. "You want people who are open-minded about digital identity and fabric ontologies, and choose vendors that allow interoperability," said Claridge.

Expectations. Patrick Nicolet, Chairman of Linebreak, said expectations need to be set with AI projects and there's a balance between what can be done and the art of the possible. Wheeler added that you can set expectations to hit a particular metric, but be prepared to be surprised at times.

Standards and quality matter. Wheeler added that champions also help with creating standards and then upholding them as a project scales.

Be prepared to manage fear. Claridge said AI projects often have a tangible fear about job loss attached to them. "That fear causes enormous concerns and fear," he said. "Managing the disruption of this technology is going to be a major issue. It is incredibly disruptive once you start to deploy and scale."

Cultural change. "What's different about AI is that we've been trained to bring answers to questions," said Nicolet. "AI is the opposite. We have to be good at asking the right question and have lots of answers. You really have to shift and that's challenging for any organization."

Claridge agreed and noted that organizations' purpose will be challenged by AI. "Organizations will have to challenge why they exist. AI is going to change the way data moves around and the actual activities of the organization," he said.

Change management. All the panelists said that change management is the secret sauce to AI projects. The communication has to be digestible to end users and people need to be reassured about their jobs. Keep in mind, however, that some of those fears are warranted.


2025 in preview: What Constellation Research’s analysts say

In 2025, you'll have to get ready for "knowledge," AI governance will move to the forefront, enterprise software models will be revamped, decision automation will depend on humans in the loop and data strategies will be a pain point.

At Constellation Research's Connected Enterprise conference in Half Moon Bay, the first panel revolved around the first cut of 2025 predictions.

Here's the recap.

Martin Schneider:

  • Growth strategies and customer journeys will evolve into orchestrated engagements designed for an entire lifecycle.
  • Revenue plans will be modeled from the ground up by AI.
  • AI-generated workflows will bring new approaches to customer data that will drive repeatable and scalable predictions.

Chirag Mehta:

  • Cybersecurity implementations will move toward focusing more on response than prevention.
  • Generative AI will change the way software is built to make it more secure.
  • Chief product officers will have all the tools needed to exploit product development for growth. CPOs will no longer be chief backlog officers.

Me:

  • The enterprise software model will change and alter the way CIOs buy applications. The problem is that revenue models are in flux. Value-based models, consumption models and traditional seat models will all be under fire.
  • Agentic AI orchestration and processes will be critical and a main focus for enterprises in the year ahead.

Andy Thurai:

  • AI projects will get real budgets and be under more scrutiny for returns on use cases.
  • AI governance will be critical as enterprises grow concerned about synthetic data.
  • Here's the problem: AI produces AI that is monitored by AI (see the conflict of interest here?).
  • 2025 will bring more data, more use cases and more issues. The lawyers will be busy.

Doug Henschen:

  • Enterprises will realize that they have real data problems to solve before implementing genAI.
  • "Seventy percent of enterprises in our AI survey aren't seeing the ROI. What these companies have in common is a lack of data, not enough scale and not enough cleanliness," said Henschen.
  • There will be a barrage of vendor announcements looking to solve these data issues to make data usable for AI.

Liz Miller:

  • 2025 will be the year where enterprises are actually doing what they should be with AI. That means guardrails and a focus on processes. "What we've learned about genAI is that when you automate a really old process with AI you really get a really old result," said Miller.
  • Enterprise buyers will hear a lot about "knowledge" in 2025, but CxOs shouldn't treat the topic as just another buzzword. Knowledge is about all the accumulated data across the enterprise that drives experiences.

Holger Mueller:

  • Human capital management will see significant changes. On the people front, employees and gig workers can be doing the same thing. Payroll will become a key sector for innovation. And applications will become more intuitive.
  • AI will become turbocharged by transactional data.
  • 2025 will be the year of quantum computing (again).

Ray Wang:

  • Automation is transforming markets and the field will revolve around decisions, not more AI.
  • "We're going to move the conversation out of AI into agents that can make decisions," said Wang.
  • Budgets will focus on exponential efficiency due to cost pressures.
  • Automation success will depend on where you put humans in the process loop. "The number one question for automation is where do you insert the humans," said Wang.
  • Automation will become less of a concern due to demographics. Most countries won't have enough working people to do the work so automation is necessary.

How Google Public Sector and NASA aim to bring generative AI to aircraft ground traffic control

Google Public Sector and NASA are training AI models to understand the speech, context and instructions needed to get airplanes from the runway to their gates as efficiently and safely as possible.

Today's airport surface management process revolves around quirky acronyms, aviation vocabulary and human voice traffic that's far from perfect. If NASA Aeronautics Research Institute’s (NARI) partnership with Google Public Sector pans out, the airport surface management process can be augmented with data, speech-to-text instructions and automation.

NASA Aeronautics Research Institute (NARI) is focused on cutting-edge aeronautics research and operational strategies. NARI connects industry, government, and academia to NASA with a focus on autonomous, high-speed, and electric aircraft. "We're the bridge between NASA researchers in aeronautics and the external community, which can be the FAA, other agencies, universities and industry," said Dr. Krishna Kalyanam, Deputy Director, NARI. "NARI was also set up to seed foundational early-stage research that may pan out and turn into a larger project funded by the government."

NARI's priorities include Advanced Air Mobility (AAM), Wildland Fire Initiatives, Shaping Tomorrow's Aviation Systems and providing a collaborative infrastructure for partners to work with NASA.

We caught up with Dr. Kalyanam at the Google Public Sector Summit in Washington, DC to talk about the research project.

The project. Dr. Kalyanam said the goal of the research project is to leverage speech-to-text models in ground traffic control when planes land and taxi on the tarmac. If the process of getting planes across the airport surface to the gate can be optimized, airlines can improve safety and their cost structure. The research looked into whether voice content over the radio can be turned into taxi instructions with 100% accuracy so they can be absorbed by automation and provide another layer of instructions for pilots.

"You land and you have to get off the concrete. You don't want aircraft on the runway and the sooner you can get an aircraft to its destination, the more planes you can get on the runway," said Dr. Kalyanam. "As soon as you land, you're getting instructions to the gate assigned to you by your dispatcher. All instructions are provided by the ground controller. It's 'take this route. Turn here. And here.'"

Challenges. The biggest challenge with the project, according to Dr. Kalyanam, was that voice traffic between pilots and the control tower has its own vocabulary as well as poor radio quality.

"Say you're running into some bad weather and need instructions to the gate. Today that's full end-to-end speech. The information could be augmented by text, visual and other inputs to go along with voice that can be converted to a route that's communicated digitally," said Dr. Kalyanam. "Once digital it can be displayed on a map or directly ingested into route planning."

Another challenge is that instructions to pilots use a unique vocabulary, including terms like Roger and Wilco. Humans can easily fill in the gaps when interpreting such audio, but models need to be trained on over-the-air voice traffic to pick up this vocabulary.
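To make the vocabulary problem concrete, here is a hypothetical post-processing sketch: it maps clipped or domain-specific tokens (like the "Dealt" for "Delta" example Kalyanam gives later) back to canonical forms, then pulls the ordered taxiway names out of a transcript. The vocabulary map, taxiway list and parsing rules are entirely illustrative, not the actual NASA/Google pipeline.

```python
# Illustrative sketch: normalize ATC-style transcript tokens, then
# extract the ordered taxiway names. The vocabulary and rules here are
# invented for illustration; they are not NASA's or Google's system.

import re

ATC_VOCAB = {
    "dealt": "delta",  # clipped "-a" sound, as in the interview example
    "dela": "delta",
    "wilco": "wilco",  # "will comply" -- a valid acknowledgment, kept as-is
    "roger": "roger",
}

# A toy subset of phonetic-alphabet taxiway names.
TAXIWAY = re.compile(r"\b(alpha|bravo|charlie|delta|echo|lima)\b")

def normalize(transcript):
    """Lowercase the transcript and repair known clipped/garbled tokens."""
    tokens = transcript.lower().split()
    return " ".join(ATC_VOCAB.get(t, t) for t in tokens)

def extract_route(transcript):
    """Return the ordered taxiway names found in a normalized transcript."""
    return TAXIWAY.findall(normalize(transcript))

route = extract_route("taxi via charlie then dealt hold short of lima")
# route == ["charlie", "delta", "lima"]
```

A real system would sit downstream of a speech-to-text model and also carry the semantics ("hold short of" versus "via"), but even this toy version shows why domain vocabulary repair has to happen before any route can be digitized.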

The goal. By digitizing the voice traffic over radio, directions can be given via moving maps, text, and color codes. That data can also be used to optimize routes and improve efficiency. "Once you digitize the information you have all this information in one place that can be optimized," said Dr. Kalyanam. "There are 100 tasks that are needed between the time the plane lands, people get off and the plane is ready to take off again."

Dr. Kalyanam said this research could also apply to autonomous aircraft and refueling. "The traditional processes are mostly human-centric," he said. "Some of these things can be automated, but at the least you can make it easier for humans to perform tasks.”

He added that the motivation of the research is to provide a secondary source of information for the pilots.

Training models. Google Public Sector used multiple models for training, but training was done on a minimum data set of 10 hours of voice instructions. Google's base models were already trained on general English conversations but had to be customized by vocabulary and use case. The models would pick up voice instructions, transcribe them according to ground control's acronyms and vocabulary and create digitized instructions.

"It's almost like learning a new language," said Dr. Kalyanam. "There are words that we will never use in English because they mean completely different things. You need to get the right context. If you hear “Dealt” it most probably means “Delta” with the ‘-ah’ sound clipped. Sometimes you can’t hear parts of what is being said. You're training models to be as perfect as they can be in an imperfect environment."

Google Public Sector and NASA worked with retired controllers to verify the ground truth as well as the voice commands and how the models performed. "The goal was to capture the taxi instruction with 100% accuracy," said Dr. Kalyanam. "There may be a conversation, but you want to be able to know that the pilot can't turn left from Charlie to Lima and then use that information. There's local knowledge about the airport layout that can be used to fix errors.”

Complicating the training effort is the reality that every airport is different, and the models will need to know local context—say the differences between the airport in Dallas vs. Tampa. It’s possible that models will need more fine tuning based on location of the airport. This fine-tuning will be even more important if this research is applied to international airports.

Working with Google Public Sector. Dr. Kalyanam said partnering with Google Public Sector made sense given Google's experience in AI, speech-to-text use cases and mapping. "We had some internal stuff we developed over the years, but the speech-to-text expertise was with Google," said Dr. Kalyanam. "Google has the models and has done the research."

Dr. Kalyanam added that Google Public Sector also had the engineers available to test multiple models and configurations for the audio. "Not one single model works best," said Dr. Kalyanam. "It took a lot of experimentation. This is custom engineering work. It's a good partnership since we don't have access to what's inside the box, but we can provide feedback so Google can build something. We also have retired controllers and pilots for model validation."

Metrics. Although it's early in the research process, Dr. Kalyanam said time saved and reduced mishaps will be core metrics. "If you end up in the wrong place it's a lot of time wasted because aircraft normally do not go in reverse," he said. "There's a lot of opportunity with digitized data. If you didn't make a turn, automation can alert you and give you new instructions. I think this process can be made simpler and hopefully less prone to error."

What's next? NASA and Google Public Sector are looking to publish their research and work with the FAA and the aviation industry. "There's a lot of interest in this research," said Dr. Kalyanam. "This is exploratory research, so we are ready to accept some failures. We are trying to prove this concept and maybe we'll simulate it in one airport and see how it adapts. We do the research, crunch the numbers and work with the FAA and industry to mature the technology for use."


BT150 Spotlight: Sunitha Ray on the difference between enterprise AI and genAI


Sunitha Ray, Field Operations CTO at Shopify, says there's a big difference between enterprise AI and generative AI and business leaders need to know the use cases and potential returns on investment for each category.

Ray, a Constellation Research BT150 member, was VP of IT at SharkNinja and has a unique view of AI since she has been on both the sell side and buy side of enterprise spending.

In our chat, we covered the difference between generative AI and enterprise AI, how to think about returns and the need for reskilling.

Here are the takeaways from my chat with Ray.

Differentiating between enterprise AI and generative AI. Before taking on the role at Shopify, Ray led the artificial intelligence team and genAI projects at Shark Ninja.

"I differentiate between genAI and enterprise AI at this point. Enterprise AI is about optimizations and figuring out solutions to problems," explained Ray. "We did a project where we designed the supply chain network, plotted optimal manufacturing plant and distribution center locations based on customer service levels we wanted to meet. That's enterprise AI."

GenAI is more about getting access to all types of data and then creating something new, said Ray. "While genAI has a lot of use cases, for corporate use cases it has a long way to go on ROI," she said. "Enterprise AI can still be leveraged more effectively."

Where generative AI works well. Ray said genAI has great use cases, but they tend to be in marketing and personalization. Images and text can be generated on the fly to personalize goods and offers for consumers. At Shopify, the company is leveraging genAI to give merchants imagery, product catalog and personalization options inside the platform.

Overspending? "We are big believers in genAI, but I feel like the amount being invested may be disproportionate to the returns that companies will see in the next year or two," said Ray.

Ray said that unless enterprises see clear returns from generative AI, they will pull back on investments. "There will be a huge wave of benefits coming in, but if companies are not seeing returns early, they may pull back on funding," said Ray. "I don't want to be negative, but there has been a lot of investment already and ultimately the C-suite will be looking at the bottom line."

Enterprises should also be honest about their AI readiness. One big reason genAI projects have stumbled is data strategy, said Ray, who noted that investing in data strategy first will ensure better AI results.

The difference in leadership on the vendor side vs. the buy side. Ray said she's excited to be on the sell side with Shopify, which is a leading platform with a lot of AI.

Ray said:

"The big difference between the buy side and sell side is buy side is always about managing constraints and managing resources. Sometimes you may not always make the best decisions. You might compromise because you don't have the budget, people on board and the right resources."

"On the sell side, you don't have those constraints because companies are always trying to make their product superior and provide better total cost of ownership to customers."

DIY vs. buy decisions. Ray said DIY is predominant in AI projects today because consulting companies are still building out practices and enterprises are also honing skills. "When everything is changing so rapidly, companies are scrambling to reskill and start frameworks to generate use cases, have workshops and implement," said Ray.

Bridging genAI skill gaps to improve genAI projects. "What I would do differently if starting off with genAI today is to have a readiness workshop instead of jumping in," said Ray. "Are we ready as an organization to invest and create value from AI? Most companies would probably say no, but that doesn't mean you don't start. I would have parallel tracks for data strategy and AI."

Ray said enterprises should also start with baselines to track progress and then prioritize use cases. "One of my favorite ways of prioritization is the effort vs. impact metrics. How much effort do you put in and how much impact can you get with minimal effort? Take those use cases to senior management," she explained.
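Ray's effort-vs-impact approach can be sketched as a simple scoring exercise. This is a hypothetical illustration, not anything from the interview: the use case names and scores are invented, and the ranking rule (impact per unit of effort, highest first) is one reasonable way to operationalize what she describes.

```python
# Hypothetical sketch of effort-vs-impact prioritization.
# Use case names and scores are illustrative only.
use_cases = [
    {"name": "Marketing copy generation", "impact": 7, "effort": 2},
    {"name": "Supply chain network design", "impact": 9, "effort": 8},
    {"name": "Contact center deflection", "impact": 8, "effort": 4},
]

def prioritize(cases):
    # Rank by impact per unit of effort: high-impact, low-effort cases first.
    return sorted(cases, key=lambda c: c["impact"] / c["effort"], reverse=True)

for c in prioritize(use_cases):
    print(f'{c["name"]}: impact {c["impact"]}, effort {c["effort"]}')
```

The ordered shortlist is what goes to senior management; in practice the scores would come from workshop estimates rather than hard numbers.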

Final word. "I'm very excited about generative AI. I just want to make sure companies have the necessary guardrails to make sure projects don't fail. I see AI being a total game changer for most organizations," she said.

Ray added that enterprises should also lean into employee reskilling over the next two to three years. "The transformation is going to happen in the next two to three years and it's going to be exciting to see how it changes corporate structure and industries overall," she said.


How ResMed’s data prowess sets it up for AI, sleep health market expansion

ResMed is best known for its CPAP devices, medical devices and masks, but it's also a software company with a treasure trove of sleep and breathing data. The plan for ResMed: Leverage artificial intelligence, generative AI and machine learning to grow its market.

The company’s digital health data set includes:

  • 28 million patients in ResMed's AirView software ecosystem.
  • 26 million medical devices with 100% cloud connectivity.
  • 20 billion nights of sleep health and breathing health data in the cloud across 140 countries.
  • 150 million accounts in ResMed's residential care software ecosystem.
  • 8.3 million patients with ResMed's myAir patient app.

And a newly launched generative AI digital concierge called Dawn will provide more interactions. ResMed is an example of how enterprises are increasingly using proprietary data to create unique AI applications and new products and services.

Here is the flywheel that ResMed is trying to leverage.

The bet is that ResMed has a massive untapped market that includes 2.3 billion people with sleep and breathing disorders.

On its investor day, ResMed CEO Michael Farrell said:

"The overlap between a person who suffocates every night with sleep apnea and who also has a psychological reason that they cannot sleep. It is incredibly difficult to treat insomnia. If you suffocate as well, it becomes a double wheeled problem. We don't know how many of our patients are not adherent to CPAP because they have insomnia. But that overlap is significant. And ResMed is investing in digital health on both sides of that.

"The other overlap there is what's called overlap syndrome, which is chronic obstructive pulmonary disease and sleep apnea. You have difficulty breathing because of the geometry of the upper airway. But then in addition to that, you have lung disease. These are some of the most difficult patients to treat, and ResMed has the technologies, the bilevels, the ventilators, but also the digital health technology that can help physicians take care of these patients."

ResMed plans to expand into adjacent markets including insomnia, chronic obstructive pulmonary disease or COPD, neuromuscular disease and other chronic conditions. And ResMed is also working to make its core medical devices smaller and more comfortable.

Data driven

The idea that ResMed could expand its market is quite a turnabout considering many Wall Street analysts thought the company’s total addressable market would be shrinking until recently.

ResMed’s approach to data has helped it navigate a volatile 18 months as investors were concerned about how GLP-1 drugs used to treat obesity would impact ResMed sales of its medical equipment. The thinking behind the stock volatility was that lower obesity would hamper sales.

ResMed's approach to the GLP-1 threat was to analyze the data to continually test whether investor fears were warranted. Farrell said on the company's first quarter earnings call that the data so far shows patients on GLP-1 are more likely to start sleep apnea therapy and wear their CPAP (continuous positive airway pressure) devices.

"We've designed a real-world data analysis that now equals 989,000 subjects, who received both a prescription for a GLP-1 medication and a prescription for positive airway pressure therapy," said Farrell. "The results from this analysis are clear. People prescribed a GLP-1 and PAP therapy have 10.8 percentage points more likelihood or propensity to commence positive airway pressure therapy."

GLP-1 prescription and PAP prescription patients are also more likely to adhere to long-term therapy based on ResMed's analysis of ReSupply data.

Indeed, ResMed's first quarter revenue was up 11% and the company saw strong demand for its medical devices, masks, accessories and residential care software. A program called ReSupply keeps patient supply sales flowing.

The data flywheel

ResMed has a vast amount of sleep and breathing data on patients, but the company is also getting an assist from consumer wearable devices, which are increasingly flagging sleep apnea issues.

Farrell said Samsung's latest Galaxy Watch and Apple's new Apple Watches are detecting sleep issues. Google's Fitbit and Garmin are also tracking sleep health. "We believe that these technologies will help drive more patients to seek out information regarding their sleep health and breathing health," said Farrell. "ResMed's obligation is to help these sleep health and breathing health consumers find their own pathway to appropriate diagnosis and treatment for sleep apnea."

The consumer wearable market is likely to drive the funnel for ResMed in the future. ResMed executives said the company plans to integrate with Apple HealthKit and other platforms and pursue strategic partnerships.

ResMed's data lake is one of the "deepest and most profound" locations of medical data on the planet, according to Farrell. That common data platform will continue to be an asset that unlocks value with de-identified data.

"What have we done with that? We've lowered costs. We lowered the cost of setting a patient up on positive airway pressure by 50% through the digital pathways. We've increased adherence, up to 87% from patients who are using myAir app on top of the doctor using AirView and the full connectivity," said Farrell. "What real-world data is going to come forward over the next five years, what are we going to do with the exponential technology that is generative AI and how are we going to take it to the next level?"

Farrell said patients will also create their own personalized data sets as they combine sleep health data with cardiovascular, diabetes and other data and then work better with health systems. "I think the outcomes will be there," said Farrell. "We see the person in the center. This is patient centric."

AI plans

ResMed is already seeing early returns from its Dawn generative AI assistant. After a few months, Farrell said about 25% of visitors have initiated a session with Dawn.

These sessions have reduced the volume of direct-to-live human contact center queries by 40%.

That example highlights half of ResMed's two-pronged AI strategy: AI will drive productivity in the company. Internally, ResMed is using AI to automate operations and processes across health providers, insurers and the supply chain, said Bobby Ghoshal, Chief Commercial Officer, SaaS at ResMed.

"Our plan is to further infuse automation and AI across this entire process and specifically, to reduce friction on the patient intake side around documentation, authorization and billing," said Ghoshal.

ResMed is also betting that AI will drive revenue through new products and services as well as personalized experiences.

Hemanth Reddy, ResMed's Chief Strategy Officer, said the company's 2040 strategy is to expand its market and use its data assets to harness the latest advancements in AI.

"We're going to connect our solutions much more deeply as one single integrated health technology ecosystem across an individual's patient journey. In doing so, we're going to drive much more personalized and digital-enabled pathways," said Reddy.

ResMed's plan is to benchmark itself against successful technology companies in terms of product management and speed.

However, ResMed knows its core strengths and where AI fits in. "ResMed is not going to be the world's best at AI. That's going to be Amazon and Microsoft and Google. But we are going to be the world's best at applying generative AI to the world's biggest data lake. I actually call it a data well of sleep health and breathing health information on the planet," said Farrell.

The ResMed stack

ResMed primarily uses Amazon Web Services for its data, AI and machine learning backbone. ResMed built its Intelligence Health Signals (IHS) platform on AWS so its data science team could build and deploy models.

In a 2022 case study, ResMed detailed its use of Amazon SageMaker for its artificial intelligence and machine learning platform. The company's data lake is also built on AWS and connects to SageMaker via AWS Glue.

Here's a look at ResMed's architecture circa 2022.

Based on job listings, Snowflake is a key vendor for ResMed. The company also leverages open-source technologies as well as Terraform from HashiCorp, now owned by IBM.



Agentic AI without process optimization, orchestration will flop

Agentic AI is a hot topic in enterprise technology, but without process automation and orchestration the vision is unlikely to be realized. CxOs sifting through the marketing hype of agentic AI should keep process optimization and orchestration in the forefront of planning.

In recent weeks, the flow of agentic AI news hit a fever pitch and it appears that vendors have gone from launching AI agents in their platforms to catapulting to "agent of agents." Many of these plans are short on process automation and orchestration. At this moment, agentic AI is a game of executing tasks autonomously within a vendor's platform. First, you got the data silos. Then you got the dime-a-dozen copilots within your applications. And now you're getting AI agents that aren't going to operate across platforms and processes.

Here's what you'll need to make agentic AI work.

  • A vendor that is more of a neutral party and has connectors into multiple systems. Think UiPath and Celonis, potentially Boomi in the future, and ServiceNow today.
  • A platform that can operate horizontally across these systems. Amazon Q Business would be an example, as is Google Cloud. A hyperscale cloud provider makes the most sense in this horizontal AI agent context.
  • Process mining and optimization capabilities. This process knowhow appears to be the missing ingredient in most of these agentic AI visions. Microsoft has process optimization capabilities as does SAP via Datasphere/Signavio and a partnership with UiPath. ServiceNow also has process optimization that rides along with workflows.
  • Orchestration ability because there will be more agents than humans in short order. Building AI agents won't be a problem. Managing them will be.
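The orchestration requirement above can be illustrated with a minimal sketch. Everything here is hypothetical: the `Orchestrator` class, skill names and agent callables are invented for illustration, and real platforms (UiPath, Celonis, ServiceNow) expose far richer registration, governance and monitoring controls. The point is the "control tower" pattern: every agent is registered centrally, and requests for unknown skills fail loudly instead of spawning another ad hoc agent.

```python
# Hypothetical sketch of a minimal agent orchestrator (illustrative only).
from typing import Callable, Dict

class Orchestrator:
    def __init__(self):
        # Central registry: the orchestrator knows every agent that exists.
        self.agents: Dict[str, Callable[[str], str]] = {}

    def register(self, skill: str, agent: Callable[[str], str]) -> None:
        self.agents[skill] = agent

    def dispatch(self, skill: str, task: str) -> str:
        # Unknown skills fail loudly rather than silently sprawling.
        if skill not in self.agents:
            raise LookupError(f"no registered agent for skill: {skill}")
        return self.agents[skill](task)

orch = Orchestrator()
orch.register("invoice_match", lambda t: f"matched: {t}")
print(orch.dispatch("invoice_match", "PO-1234"))  # prints "matched: PO-1234"
```

Building an agent like the lambda above is trivial; the registry, dispatch rules and failure handling are where the real management work lives, which is the sprawl argument in a nutshell.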

With that backdrop here's a look at some of the agentic AI developments worth watching.

The agentic AI ERP play

Enterprise resource planning (ERP) platforms can play a home game in the agentic AI race since they can connect data and context across multiple functions. SAP CEO Christian Klein was obviously talking up his company's Joule genAI and agent technology during the company’s third quarter earnings call, but he has a point. He said:

"While many in the software industry talk about AI agents these days, I can assure you, Joule will be the champion of them all. So far, we have added over 500 skills to Joule and we are well on track to cover 80% of the most frequent business and analytical transactions by the end of this year. And in Q3 alone, several hundred customers licensed Joule."

Microsoft can also play the ERP to AI agent game. The company launched 10 out-of-the-box AI agents. In Microsoft's view, Copilots are how you'll interact with the agents that will work on behalf of an individual, team or function to execute on processes.

The biggest issue with the ERP-focused AI agent plays--or CRM, HR or any other enterprise acronym of your choice--is that you're still locked in with a vendor.

Nevertheless, the combination of AI agents and well-structured data within ERP systems is likely to lead to quick returns. I am surprised that Microsoft didn't connect the dots more between its AI agents in Dynamics 365 and Power Automate.

The other wrinkle in this agentic AI ERP parade is ServiceNow's just announced partnership with Rimini Street along with its Workflow Data Fabric. The subtext: Maintain your legacy ERP system, abstract it with the Now Platform, save with third party maintenance and reinvest the savings in AI automation.

ServiceNow CEO Bill McDermott said on the company’s third quarter earnings call that enterprises want to avoid previous mistakes with ERP platforms and AI agent sprawl.

"The C-suite is looking to us to prevent a mess with AI," said McDermott. "Leaders see the risk that every vendor's bots and agents will scatter like hornets fleeing the nest. Enterprises trust us to be the governance control tower."

The neutral party, orchestration, automation play

At UiPath Forward, UiPath pivoted from robotic process automation to AI agent building and orchestration. UiPath made its name with RPA, process mining and task mining and then created an automation platform.

The new vision for UiPath rhymes with Microsoft's copilot-to-agents approach except the bridge is from RPA bots to agents. UiPath previewed Agent Builder, forged a partnership with Anthropic and set a vision that combines RPA, automation, robots and people to automate end-to-end processes. UiPath's play is that processes don't run in one system so you need a horizontal platform across the enterprise to be an agent conductor.

UiPath CEO Daniel Dines said:

"We can go end to end process automations. We can reduce a lot of human input into processes. We can make humans only the decision makers into a real process. I think many business applications will offer capabilities to create agents. We are very happy to orchestrate them."

Dines said RPA and generative AI aren't a zero-sum game. "Our robots will provide the tools to the agent to connect to all of these platforms," he said. "Robots are low skilled. Agents are more highly skilled employees."

While UiPath's conference was underway in Las Vegas, Celonis held its Celosphere event in Munich. Celonis has focused on process intelligence with its platform and then feeds into various AI models with its digital twins of enterprises.

Celonis has taken an ingredient brand approach. It's worth noting that one session at Celosphere focused on the combination of Celonis Process Intelligence and Amazon Bedrock, which is also likely to play a big role in building AI agents and orchestrating them.

Celonis launched Celonis AgentC, a suite of AI agent tools, integrations and partnerships. Celonis is looking to embed its Process Intelligence into AI agents to add business context. Celonis' first platform integrations include Microsoft Copilot Studio, IBM watsonx Orchestrate, Amazon Bedrock Agents and open-source environments like CrewAI.

Here’s a graphic on how this Process Intelligence integration works with Microsoft Copilot Studio.

"You can now power AI agents with process intelligence," said Celonis co-CEO Alex Rinke. "This is AI that knows how your business flows."

The sales playbook for UiPath and Celonis has been cribbed by multiple agentic AI vendors that have focused on use cases, enterprise functions and industry applications.

ServiceNow is also playing the role of the broad neutral party that can connect various models, workflows and systems. The company has already layered agentic AI into its Now Platform and for that matter could acquire either UiPath or Celonis.

Lingering questions in an evolving landscape

This riff on agentic AI, the role of automation and process optimization is a work in progress because vendor strategies--and yours for that matter--are being cooked up as we speak.

Among the key questions:

  • Can Salesforce leverage MuleSoft to take Agentforce beyond front-office functions?
  • How many first movers in agentic AI will find themselves buried in agent sprawl?
  • Will the neutral parties today ultimately be acquired? SAP and UiPath are already cozy partners. Aside from the trillion-dollar valuation club no vendor is too big to be acquired.
  • Can integrators be the agentic AI orchestrators? For instance, Infosys CEO Salil Parekh said the company has been focused on small language models, use cases and processes to create multi-agent frameworks to automate work. "We have a multi-agent framework where the agents are doing--a set of agents are doing full solutions to certain business processes or certain functions," he said.
  • What role will hyperscale cloud providers take on? AWS doesn't have any applications in this fight and is truly horizontal. It could have a big role in building and orchestrating agents in the background. Ditto for Google Cloud.
  • Does ServiceNow emerge as the agentic AI point guard for enterprises? McDermott is betting that way. "We intend to be the control point that governs the deployment of agentic AI across the enterprise," he said.

With so many agentic AI moving parts, CxOs may want to lump agentic AI plans with broader process transformation and automation strategies. Agentic AI looks great, but keep process, automation and orchestration top of mind.
