Results

Why Every Organization Needs To Rethink Its Growth Strategy in the Age of AI | Big Idea Report

Constellation analyst Martin Schneider unpacks his latest report, discussing the need for organizations to adapt their growth strategies in the age of AI and emphasizing the importance of the office of the chief growth officer and the role of AI as an enabling technology. He also highlights the need for a modern growth strategy, taking into account the shift from the subscription economy to the retention economy and the post-pandemic economy.

Access the Full Report Here

View the full transcript here: (Please note this has not been edited and may contain errors.)

Hi, this is Martin Schneider, Vice President and Principal Analyst at Constellation Research. I'm excited to talk about a new Big Idea report I've just published called "Why Every Organization Needs to Rethink Its Growth Strategy in the Age of AI." And that "why" is a really big why. We've seen a lot of changes in the market and in the economy over the past 5, 10, 15 years. We really saw this wholesale shift to a subscription/retention/everything-as-a-service economy. But what we've not seen is our go-to-market strategies fully supporting this shift.

We've operationalized around product and service delivery with these models. We've made revenue recognition changes, things like that. But for a lot of organizations across almost every industry, most people are still thinking in the old way: new business drives everything, and lead-to-opportunity conversions are the most important metrics. So people are really just not thinking about the right metrics and the right approach, and that needs to change right now. What's really been interesting is that leading organizations are creating either an office or a role called the chief growth officer. So the C-suite is being augmented to include a more strategic leader when it comes to planning for growth, because you need to plan for growth across the entire customer lifecycle. It's not just about elevating a chief revenue officer, right? We're really talking about a strategic member of the C-suite here, so that chief growth officer role is becoming more and more prevalent, and more and more leading companies have it installed.

What's also really interesting, in support of that, is the ascendancy of what we would call RevOps into more of a growth ops function, where that strategic middle office is no longer just part of the bean counting in the deal desk. It's turning into a strategic office that thinks about approaches to growth, optimizes workflows and key business processes, and takes a leading role in how we utilize technology to optimize our growth strategies. They have that interesting vantage point because they're not as mired in the weeds the way sales leaders are.

They're not just thinking about campaigns and content the way a marketing team might be. And they're not stuck with fire drills and keeping customers happy the way chief customer officers are. They can really attack the issue strategically from that great vantage point. And of course, with "the age of AI" in the title: AI is a catalyst, but it's not driving these changes. These changes and these disruptions have been here, and they're affecting us. The great thing about AI is that it allows even the smallest organization to take on these challenges head on, a little faster, a little better, a little cheaper. So AI really is a catalyst and an accelerator for rethinking your growth strategy, but it's not the reason we're making these changes, and it's important to have that perspective. Also in the report, I talk about four elements of a modern growth strategy, some foundational elements that you really need to be thinking about as you rethink your approach to growth. And finally, there are 10 questions you can ask yourself, your organization, or your C-suite as you rethink your growth strategy, or even just to level set and say: where are we in our growth strategies?

So again, this is a really important report for growth leaders, for chief growth officers, for CROs and for other C-suite leaders who are really thinking about growth and trying to understand the challenges, the pressures and all the changes we've been facing over the last decade plus, and how we can approach them head on, leveraging AI to make it a little easier, a little more cost effective, and to get more outsized results from these new strategies. It's definitely a report worth checking out. If you're a client, you can access it in the library today; if you're not, just contact us to find out how to become one, because this is kind of a can't-miss report for growth leaders. Thanks a lot.

Watch on ConstellationTV: https://www.youtube.com/embed/qvNyB1BEXe4

Big Idea: Why Every Organization Needs to Rethink Its Growth Strategy in the Age of AI



A new Constellation "Big Idea" report has just been published, written by me, covering the topic of "Why Every Organization Needs to Rethink Its Growth Strategy in the Age of AI." And I know it sounds pretty heavy and ominous - but I do believe most organizations are working from outdated and/or incomplete approaches to planning for growth.

Today, nearly all industries are being disrupted by multiple factors. Big changes such as the shifts to subscription/retention economic models, the impact of AI, and the need to deal with rising costs have necessitated a new perspective on growth. Meanwhile, customer expectations are changing - and they are not asking but rather expecting or demanding that you meet them where and when they want. No longer can B2B organizations look solely to sales and marketing as the bastion for growth. Instead, a wider view of growth is needed—one in which all customer-facing departments contribute in a more reliable and scalable manner to overall lift.

And while we have amended our service/product delivery models to meet these changes - even created "customer success" departments and motions to support them - there is still a huge gap in how high-level growth strategies account for these new realities. That has to change.

In this report, I explain the big "Why" in terms of all of these disruptions and drivers, and why a "full journey" approach to growth pays off in more profitable, scalable growth. To meet these changes, the report highlights the need for a new C-suite member: the Chief Growth Officer (CGO) - a role/committee with a more strategic position and perspective. While CROs and CMOs have important roles, their remit and the metrics they use for success are rarely aligned with the full-journey orchestration that a modern growth strategy requires. Supporting the CGO, RevOps teams are ascending into "GrowthOps" departments - elevating the middle office from tactical "deal desk" support to proactive, strategic stakeholders. RevOps has a unique "crow's nest view" of go-to-market operations and can be more strategic and process-optimization oriented, while sales, marketing and support/success leaders are often too deep in the tactical weeds to think about long-term strategy.

And while AI is a major catalyst and accelerator when it comes to modernizing approaches to growth - it is not THE driver. The disruptions and challenges facing growth leaders predate this AI revolution - but the good news is that growth leaders can leverage AI to reimagine growth strategies faster, and with less heavy lifting than ever before. In the report we delve into some go-to-market use cases and where AI can be applied with the least effort to drive the most results.

The report also offers up four elements of a modern growth strategy - and how your organization can adopt them. Finally, the report provides 10 key questions to ask when rethinking your growth strategies - delving into critical issues that can assist any organization wherever they may be on their growth journey.

This report is available now for Constellation clients in our Research Library. If you are not a client and are interested in accessing the report, you can contact us at [email protected] to learn how.


BioNtech, InstaDeep bet on genAI models to advance R&D, drug discovery, cancer treatment

BioNtech's InstaDeep, which was acquired in 2023 for about $682 million, has released a series of foundational generative AI models for proteins and DNA on its DeepChain platform and outlined a supercomputing cluster called Kyber.

The news, outlined at BioNtech's AI innovation day, highlights how foundational models are branching out into industry-specific use cases. In BioNtech's case, its InstaDeep unit is looking to embed AI throughout the life sciences, R&D and drug discovery value chain.

InstaDeep has even created an AI-driven lab agent built on its proprietary data and Meta's Llama family of models.

BioNtech in recent years has been best known for its COVID-19 vaccine partnership with Pfizer. However, BioNtech historically focused on mRNA cancer treatments. BioNtech is betting that AI can drive its drug pipeline for years to come with its acquisition of InstaDeep, which counted Google as an investor. BioNtech and InstaDeep formed a joint AI lab in 2020 and the partnership quickly accelerated.

Ugur Sahin, CEO BioNTech, explained the company's bet on InstaDeep, which also has its own supercomputing cluster called Kyber. Kyber is coming online in Paris and enables InstaDeep to train its own foundational models without the cost and queue involved with cloud computing.

Sahin said:

"Every cancer treatment for every patient is a new battle. Every cancer cell is different. How can we develop treatments that address tumor cells? Cancer is evolving. Cancer is adaptable. This has now become a high-level computational question."

Sahin added that cancer treatment in the future will start with clinical samples from the patient and an analysis of genetic changes in tumor cells that will generate about 4 terabytes of data for each patient. "We need AI, machine learning and algorithms to come to the right conclusions," he said. "AI gives us the opportunity to do that at a much deeper and faster scale."

Is BioNtech a biotech company or an AI company? Both. Life sciences and AI are likely to become symbiotic.

Ryan Richardson, Chief Strategy Officer at BioNTech, said the company is looking to build an "AI personalized immunotherapy platform." The value drivers for the InstaDeep purchase revolved around cost efficiencies from internalizing model training, building foundational models for vaccines and therapeutics and applying AI to drug discovery.

"The primary use case is to embed AI in drug discovery with the ability to combine our therapeutic platforms on one hand, which are very novel, and the AI capabilities that InstaDeep brings to bear," said Richardson. "There is truly profound disruptive potential in terms of developing or discovering new drugs."

Karim Beguir, CEO and Co-Founder of InstaDeep, said the goal is to work with BioNtech closely to become "a leader in digital biology." Beguir added that for InstaDeep and BioNtech to lead in digital biology his company also needs to be a leader in AI. "The same technology can apply to multiple use cases," said Beguir. "We are leaders in industrial optimization within biology and outside of biology these add up together. The objective is to continue to be a leading power in the world of AI."

Here's a look at what InstaDeep is working on as part of BioNtech.

A supercomputing cluster named Kyber. Beguir said the Kyber supercluster is built on 224 Nvidia H100 GPUs, 86,000 CPU cores, 1.7 petabytes of persistent storage and a 400 Gbps RoCE network. The cluster, built on-premises with Dell, totals about 0.5 exaFLOPs and is one of the top 20 H100 GPU clusters globally.

"We are now able to take all the work that we have built upon over the last several years and scale it up over the next five, six, seven, 10 years," said Beguir.

InstaDeep uses an in-house rack design that's easy to expand with modular nodes that offer consistent performance, cost, power and cooling. Standard designs will minimize costs over time. InstaDeep also tailored its AI software stack to its workloads with open standards.

Beguir said InstaDeep built the supercluster to avoid vendor lock-in and benefit from predictable costs while scaling models. Kyber enabled InstaDeep to train genAI models with more than 15 billion parameters with hardware efficiency on par with the latest Meta Llama 3.1 foundational model.

Bayesian Flow Networks (BFNs), a new class of generative model that uses Bayesian inference to update beliefs about data. BFNs generate discrete data in a continuous way and are better suited for proteomics and modeling protein folding, function prediction, antibody design and sequence generation.

InstaDeep wants to use BFNs to build foundational models based on heterogeneous scientific data to give scientists more flexibility. A model called AbBFN-X is designed to be a multimodal model for antibodies, with 26 different attributes jointly modeled.
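To make the BFN idea concrete, here is a toy illustration of the Bayesian belief update at the heart of the approach: a continuous-valued belief over discrete categories is refined by noisy observations. This is a textbook sketch under assumed noise parameters, not InstaDeep's implementation.

```python
import numpy as np

# Toy Bayesian belief update over a discrete variable -- the core idea
# behind Bayesian Flow Networks (illustrative only, not InstaDeep's code).
K = 4                          # e.g., four DNA bases A/C/G/T
belief = np.full(K, 1.0 / K)   # start from a uniform prior

rng = np.random.default_rng(0)
true_class = 2                 # the "data" the sender distribution encodes
noise = 0.5

for step in range(10):
    # Sender emits a noisy observation of the true class.
    probs = np.full(K, noise / K)
    probs[true_class] += 1.0 - noise
    obs = rng.choice(K, p=probs)

    # Bayesian update: posterior ∝ prior × likelihood of the observation.
    likelihood = np.full(K, noise / K)
    likelihood[obs] += 1.0 - noise
    belief = belief * likelihood
    belief /= belief.sum()

print(belief)  # mass concentrates on the true class as evidence accumulates
```

The belief stays continuous throughout even though the data is discrete, which is why BFNs are described as generating discrete data in a continuous way.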

DeepChain, a platform designed to use AI to accelerate the R&D pipeline, gains new features. DeepChain is getting generative protein models, ProtBFN and AbBFN, and foundational models for DNA, Nucleotide Transformer and SegmentNT. These models, which can be customized and fine-tuned, are available on Hugging Face under the genomics tag.
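The Nucleotide Transformer checkpoints, at least, can be pulled straight from the Hugging Face hub. A minimal sketch, assuming the published repo id below; the exact ids for ProtBFN, AbBFN and SegmentNT may differ, so browse the InstaDeepAI organization page to confirm:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Sketch: pulling one of InstaDeep's DNA foundation models from Hugging Face.
model_id = "InstaDeepAI/nucleotide-transformer-500m-human-ref"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Embed a short DNA sequence; the hidden states can feed downstream tasks
# such as variant-effect prediction or sequence segmentation.
inputs = tokenizer("ATTCCGATTCCGATTCCG", return_tensors="pt")
outputs = model(**inputs, output_hidden_states=True)
print(outputs.hidden_states[-1].shape)  # (batch, tokens, hidden_dim)
```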

Laila AI agents built on Meta Llama 3.1. Laila is integrated throughout the DeepChain platform and can recommend models and analyze data with internal and external tools. Laila can also visualize results, plot data and zoom in on certain DNA sequences and positions.

InstaDeep executives said that Laila, which comes in multiple sizes, is more than a chatbot and can use its expert knowledge of biology to reason, make decisions and provide feedback.

The company is also working to leverage its models across scientific and R&D workflows. InstaDeep has designed AI tools to automate labs, annotate tissue, segment pathology images and identify novel therapeutic targets.

Constellation Research analyst Holger Mueller said:

"While most of the CxO attention is on cloud platforms and AI vendors when it comes for the latest on genAI, there is substantial innovation coming from the biotech industry as well. BioNtech (where the founder would go on vacation with its workstation) acquired its own AI startup and it is showing significant progress on what matters at the moment - the 'uber ai' that chooses the right AI / statistical models for positive outcomes in protein folding, cancer research and more. It's good to see more AI model competition, especially coming from a practitioner."



Accenture to use Nvidia stack for agentic AI


Accenture has formed a Nvidia Business Group that will deploy agentic AI using Nvidia's full stack. The move puts some systems integrator heft behind Nvidia's software ecosystem.

As noted during Nvidia's recent second-quarter earnings call, the real competitive moat around the chip giant's business is its software ecosystem. That ecosystem also enables Nvidia to sell its GPUs and AI accelerators. Nvidia made a series of announcements highlighting its ability to leverage its software ecosystem to boost performance. In addition, Nvidia is looking to make it easier for enterprises to bring generative AI projects from pilot to production.

In a statement, Accenture said its Nvidia Business Group will include 30,000 consultants trained on the chipmaker's stack. Accenture's AI Refinery platform will also leverage Nvidia's architecture and foundational models. Accenture said it has booked more than $3 billion in generative AI business.

Specifically, Accenture will combine its AI Refinery and Nvidia's AI Foundry, AI Enterprise and Omniverse to focus on process optimization, simulations and sovereign AI. The latter has been identified by Nvidia and other infrastructure players like Oracle as a hot market.

The companies will also use Nvidia NIM Agent Blueprint for virtual facility robot fleet simulation and combine it with Eclipse Automation, an Accenture unit that focuses on manufacturing automation.


Accenture said AI Refinery will be available on all public and private cloud platforms. Accenture said it has used its AI Refinery and AI agents to cut manual steps in campaigns by 25% to 35%, save 6% and increase speed to market by 25% to 55%.



Nvidia drops NVLM 1.0 LLM family, revving open-source AI

Nvidia released NVLM 1.0, an open-source large language model family that includes a flagship 72B-parameter version, NVLM-D-72B.

The effort, detailed in a research paper, means Nvidia is also championing frontier open source LLMs. Previously, Meta and its Llama family of LLMs were leading the open-source model wave.

According to Nvidia researchers, NVLM 1.0 improves text-only performance after multimodal training. The Nvidia models have an architecture that enhances training efficiency as well as multimodal reasoning.

Nvidia also released the model weights for NVLM 1.0 and will open source the code. The move is notable since proprietary model vendors don't release weights.
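For researchers who want to experiment, the weights should be loadable with standard tooling. A minimal sketch, assuming the checkpoint is published as nvidia/NVLM-D-72B on Hugging Face with custom modeling code (hence trust_remote_code):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Sketch: loading NVLM-D-72B. A 72B model needs multiple GPUs;
# device_map="auto" shards it across whatever hardware is available.
model_id = "nvidia/NVLM-D-72B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to fit in memory
    device_map="auto",
    trust_remote_code=True,       # NVLM ships custom multimodal code
).eval()
```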

It remains to be seen whether LLM giants will follow suit. Regardless, NVLM 1.0 will enable smaller enterprises and researchers to piggyback off Nvidia's research. One thing is certain: LLM innovation is picking up pace. 



MongoDB 8.0 generally available, along with Atlas updates

MongoDB said MongoDB 8.0 is generally available with throughput optimizations and efficiency enhancements.

At its MongoDB.local London event, MongoDB 8.0 went to GA along with other enhancements to the company's Atlas platform. MongoDB 8.0 is available on AWS, Google Cloud and Microsoft Azure through MongoDB Atlas, through MongoDB Enterprise Advanced for on-premises users and through MongoDB Community Edition.

MongoDB has a popular document database that is being used in generative AI applications looking to tap into unstructured data. MongoDB said its latest database has more than 45 architecture enhancements. Here is a rundown of what MongoDB announced:

  • MongoDB 8.0 improves throughput to query and transform data by 32% and reduces memory usage. The database has sped up bulk writes by 56% and concurrent writes during data replication by 20%. With high volumes of time series data and complex aggregations, MongoDB 8.0 can run 200% faster at lower cost.
  • Sharding improvements in MongoDB 8.0 mean it can distribute data up to 50 times faster at 50% lower costs.
  • Better controls to optimize performance for high demand and spikes in usage. MongoDB enables customers to set a default maximum time limit for running queries, reject recurring problem queries and run through events like restarts in peak demand.
  • MongoDB Queryable Encryption to allow customers to encrypt sensitive application data, store it as randomized encrypted data and run queries on encrypted data for processing without cryptography expertise. Data will remain encrypted until it reaches an authorized user with a decryption key.
  • The company enhanced MongoDB Atlas' control plane to scale clusters faster and optimize performance. Atlas customers will see up to 50% quicker scaling times. Auto-scaling will also see 5x improvements.
  • A private preview of a MongoDB plugin for IntelliJ, a popular developer environment for Java.
  • The company announced a public preview for MongoDB Participant for GitHub Copilot, which integrates AI tools in a chat experience in the MongoDB Extension for VS Code.
  • MongoDB added support in MongoDB and Ops Manager for multiple Kubernetes clusters. Customers can deploy ReplicaSets, Sharded Clusters and Ops Manager across local or distributed Kubernetes clusters.
  • MongoDB Atlas Search and Vector Search are generally available via the Atlas CLI and Docker. MongoDB also announced vector quantization for Atlas Vector Search, which reduces memory by up to 96% (the back-of-the-envelope sketch after this list shows where a figure like that comes from).
  • Integration between MongoDB and large language model frameworks such as LangChain, LlamaIndex, Microsoft Semantic Kernel, AutoGen, Haystack, Spring AI and ChatGPT Retrieval Plugin.
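On the vector quantization claim: going from 32-bit floats to binary quantization (1 bit per dimension) cuts vector memory by 31/32, which lands right around the quoted figure. A quick illustrative calculation, assuming a typical 1536-dimension embedding; the quantization itself happens server-side in Atlas, so no client code is required:

```python
# Back-of-the-envelope check on the "up to 96%" memory-reduction claim.
dim, n_vectors = 1536, 1_000_000           # assumed embedding size and corpus

float32_bytes = n_vectors * dim * 4         # full-precision vectors
binary_bytes = n_vectors * dim // 8         # binary quantization: 1 bit/dim

print(f"float32: {float32_bytes / 1e9:.2f} GB")                  # 6.14 GB
print(f"binary:  {binary_bytes / 1e9:.2f} GB")                   # 0.19 GB
print(f"memory saved: {1 - binary_bytes / float32_bytes:.1%}")   # 96.9%
```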

Cerebras Systems preps IPO: What you need to know

Cerebras Systems has filed for a US initial public offering, but the would-be Nvidia competitor has multiple risk factors and depends on one UAE-based customer, Group 42 Holding, for 87% of its revenue.

Despite the risk, Wall Street will closely watch Cerebras Systems given a weak IPO market and the need for more competition in the genAI infrastructure market. Cerebras also argued in its IPO filing that it has a more power efficient approach to AI training and inference workloads.


Here's what you need to know.

  • Cerebras' processors are 57 times larger than the leading commercially available GPU. The company argues that its Wafer-Scale Engine, the chip at the heart of its CS-3 system, is the largest processor ever sold. It has 52 times more compute cores, 88 times more on-chip memory and 7,000 times more memory bandwidth than leading GPUs. By keeping operations on the wafer, Cerebras' processors can solve problems faster with less power. Cerebras said its processors are designed for both AI training and inference.

  • Wafer size matters. Cerebras is arguing that its approach with massive processors means that it can scale more easily for model training. The approach also works for memory bandwidth requirements for inference. The crux of the Cerebras argument is that individual GPUs are too small, scaling GPUs is inefficient and limited by memory bandwidth.
  • The company is a play on on-premises generative AI. Organizations can purchase Cerebras AI supercomputers for on-premises deployments. Cerebras said its AI supercomputer can scale up to 2,048 CS-3 systems, or 256 exaFLOPs (the arithmetic sketch after this list works out the implied per-system figure). The plan for Cerebras is to court sovereign AI initiatives, governments, cloud service providers and research institutions.

  • The largest Cerebras cluster deployed as of Sept. 27 comprised more than 100 CS systems, so there's a lot of headroom to scale further.
  • Cerebras relies mostly on Group 42 Holding Ltd., a UAE company, for 87% of its revenue for the six months ended June 30. Cerebras said that Group 42 (G42) can acquire $335 million worth of Cerebras shares to give it more than a 5% stake.
  • But Cerebras is raising capital to expand its customer base. The company, however, has a long way to go. Two customers accounted for 68% and 16% of Cerebras' accounts receivable balance as of June 30.
  • Cerebras is losing money. For the six months ended June 30, the company posted a net loss of $66.6 million on revenue of $136.4 million. For the year ended Dec. 31, Cerebras had a net loss of $127.15 million on revenue of $78.74 million.
  • The company could be hampered by US export regulations. Specifically, Cerebras must comply with U.S. Export Administration Regulations (EAR), which are administered by the U.S. Department of Commerce’s Bureau of Industry and Security (BIS), as well as economic and trade sanctions, including those administered by the U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC). Given nearly all of Cerebras' revenue is tied to a customer in UAE those import and export control laws loom large. Cerebras did note that it has obtained a BIS export license to sell its CS-2 systems in UAE.
  • TSMC is the lone manufacturer of Cerebras systems. In its IPO filing, Cerebras said: "We worked with TSMC to develop the processes necessary to manufacture the semiconductor wafers needed for our wafer scale engine, which involve many complexities and proprietary technologies. We are currently dependent on TSMC to produce all of the wafers that we use in our products. We have no formalized long-term supply or allocation commitments from TSMC, and TSMC also fabricates wafers for other companies, including certain of our competitors, many of whom are significantly larger than us and purchase considerably more wafers from TSMC than we do."
  • Competition is fierce. Not surprisingly, Cerebras' competitive set is well funded. Nvidia, AMD, Intel, Microsoft and Alphabet were cited as competitors for AI workloads.
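The headline numbers above are easy to sanity-check with quick arithmetic, using only the figures reported in the filing:

```python
# Quick arithmetic on the filing's scale and concentration claims.
EXA, PETA = 1e18, 1e15

max_cluster_flops = 256 * EXA                  # claimed max: 2,048 CS-3 systems
per_system = max_cluster_flops / 2048
print(f"implied per CS-3 system: {per_system / PETA:.0f} petaFLOPs")  # 125

h1_revenue = 136.4e6                           # six months ended June 30
g42_share = 0.87 * h1_revenue                  # one customer, 87% of revenue
print(f"G42 share: ~${g42_share / 1e6:.0f}M of ${h1_revenue / 1e6:.1f}M")  # ~$119M
```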



Liquid AI launches non-transformer genAI models: Can it ease the power crunch?

Liquid AI, an MIT spinoff, launched its Liquid Foundation Models (LFMs) in three sizes, claiming strong performance without the transformer architecture used by today's large language models (LLMs).

According to Liquid, LFMs have "state-of-the-art performance at every scale" along with a smaller memory footprint and more efficient inference than LLMs. That efficiency could mean that LFMs will use less power and be more sustainable. After all, electricity is one of the biggest limiting factors for AI workloads and one big reason for the renaissance of nuclear power.

Simply put, we need more efficient models rather than taxing the grid, using too much water and throwing more GPUs at the problem. "Liquid AI shows that leading models don't have to come from deep pocketed large players and can also come from startups. The intellectual race for AI is far from being over," said Constellation Research analyst Holger Mueller.

LFMs are general-purpose AI models for any sequential data, including video, audio, text, time series and signals. Liquid will hold a launch event October 23 at MIT Kresge in Cambridge to talk about LFMs and applications in consumer electronics and other industries.

LFMs come in three sizes--1.3B, 3.1B and 40.3B mixture of experts (MoE)--and are available on Liquid Playground, Lambda, Perplexity Labs and Cerebras Inference. Liquid AI said its stack is being optimized for Nvidia, AMD, Qualcomm, Cerebras and Apple hardware.

The smallest model from Liquid AI is built for resource-constrained environments with the 3.1B model focused on edge deployments.

Although it is early, Liquid AI plans to "build private, edge, and on-premise AI solutions for enterprises of any size." The company added that it will target industries including financial services, biotech and consumer electronics.

Liquid AI has a bevy of benchmarks comparing its LFMs to LLMs and the company noted that the models are a work in progress. LFMs are good at general and expert knowledge, math and logical reasoning, long-context tasks and English. LFMs aren't good at zero-shot code tasks, precise numerical calculations, time-sensitive information and human preference optimization.


A few key points about LFMs and potential efficiency gains.

  • Transformer models' memory usage surges for long inputs, so they do not suit edge deployments well. LFMs can handle long inputs without affecting generation speed or the amount of memory required (the sketch after this list shows why transformer memory balloons with input length).
  • Training LFMs requires less compute compared with GPT foundation models.
  • The lower memory footprint means lower costs at inference time.
  • LFMs can be optimized for platforms.
  • LFMs could become more of an option as enterprises start to hot-swap models based on use case. If LFMs do become an option, their greater efficiency and lower costs would favor increased adoption.
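The transformer side of the memory comparison comes down to the key/value cache, which grows linearly with input length. A rough sketch using assumed parameters for a ~3B-parameter transformer (not Liquid AI's actual numbers):

```python
# Why transformer memory balloons with input length: the KV cache grows
# linearly in tokens. An LFM-style recurrent model instead keeps a
# fixed-size state no matter how long the input is.
layers, kv_heads, head_dim, dtype_bytes = 32, 8, 128, 2  # assumed; bf16

def kv_cache_bytes(seq_len: int) -> int:
    # 2 cached tensors (keys and values) per layer, per token
    return 2 * layers * kv_heads * head_dim * seq_len * dtype_bytes

for seq_len in (4_096, 32_768, 131_072):
    print(f"{seq_len:>7} tokens -> {kv_cache_bytes(seq_len) / 1e9:.2f} GB")
# 4,096 tokens -> 0.54 GB ... 131,072 tokens -> 17.18 GB, per request.
```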

Bottom line: Liquid AI's LFMs may steer the conversation toward efficiency rather than brute strength when it comes to generative AI. Should genAI become more efficient, it could upend the current economic pecking order in which all the spoils go to infrastructure players--Nvidia, Micron Technology, Broadcom and others.



Intuit embraces LLM choice for multiple use cases

Intuit is operating on one data and AI platform that enables it to select from more than 10 large language models for various consumer and business use cases via its Generative AI Operating System (GenOS).

The ability to select multiple large language models (LLMs) gives Intuit the ability to leverage genAI for use cases with a few clicks and build in redundancy. Intuit is looking to leverage a unified data and AI platform to solve customer problems and bring in human experts when needed.

Speaking at Intuit's Investor Day, CTO Alex Balazs said GenOS is transformative to the company's platform plans. Intuit made a big bet five years ago on one data platform and AI as a way to expand its total addressable market. Now Intuit TurboTax, Credit Karma, QuickBooks and Mailchimp run on a unified platform as well as GenOS, which powers Intuit Assist.

"Developers leveraging our platform and GenOS now have access to more than 10 LLMs," said Balazs. "They're able to easily select the right large language model that solves the specific customer use case. GenOS also allows us to seamlessly switch between LLMs to provide resiliency so the customer has a smooth experience."

Intuit also recently announced AI Workbench, a dedicated development environment for AI-native experiences, said Balazs. Earlier this month, Intuit outlined enhancements to GenOS including AI Workbench as well as updates to GenStudio, GenRuntime and GenUX.

According to Intuit, GenOS AI Workbench includes an LLM Leaderboard for use cases, prompt management, an automated evaluation service for LLMs and traceability for prompt workflows. Other updates include:

  • An LLM sandbox in GenStudio that includes Anthropic Claude via Amazon Bedrock, Gemini from Google Cloud, Llama from Meta AI and Mistral AI to complement custom LLMs and OpenAI GPT models via Microsoft Azure (a minimal model-invocation sketch follows this list).
  • GenRuntime, a layer that includes GenOrchestrator to plan, execute and retrieve knowledge and tools for agentic workflows.
  • GenSRF (security, risk and fraud) that has guardrails for genAI deployments.
  • GenUX, which includes more than 140 new UX components, widgets and patterns for developers.
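Since the sandbox item above leans on Bedrock-hosted models, here is a minimal sketch of what swapping providers behind one call looks like with Bedrock's Converse API. The model ids are examples; check your region's Bedrock catalog and your account's model access before relying on them.

```python
import boto3

# Sketch: same prompt, different Bedrock-hosted models -- only the id changes.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, prompt: str) -> str:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

print(ask("anthropic.claude-3-5-sonnet-20240620-v1:0", "Summarize Q3 cash flow."))
print(ask("meta.llama3-1-70b-instruct-v1:0", "Summarize Q3 cash flow."))
```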

The company's strategy highlights how enterprises leading in genAI are building platforms that are able to hot swap models as they advance. Intuit has built its infrastructure on Amazon Web Services and said last year at re:Invent that GenOS uses Amazon Bedrock as well as multiple services including Sagemaker.

Intuit's selection of models in GenOS is a subset of what's available on Amazon Bedrock. For instance, Meta has more than 10 Llama models available on Amazon Bedrock, with providers such as AI21, Anthropic, Cohere and Mistral offering more than a handful of foundational models.

This model choice is also increasingly being offered by software as a service providers, which are packaging a selection of models that can be used to build AI agents. Demonstrations of Salesforce's Agentforce platform highlighted the ability to select models to build agents.

Model selection is a key cog in what Balazs calls Intuit's durable advantage--its data and AI platform and ability to enable machine learning, natural language processing and LLM development to embed fintech throughout its environment.

Balazs said Intuit's model agnostic approach allows the company to future proof its platform as it chases its five big bets: Revolutionize speed to benefit, connect people to experts, unlock smart money decisions, be the center of small business growth and disrupt the mid-market.

"We're going to continue to innovate and look ahead and determine the best way to serve our customers, especially as the AI landscape continues to rapidly evolve. Almost every day, there's some type of announcement of some new AI capability."

Indeed, Intuit demonstrated the use of digital avatars as a way to provide guidance and insights to customers. Balazs said that avatars will help people retain information and learn. The goal would be to couple LLMs, genAI and avatars to deliver human-like experiences that seamlessly hand off to human experts.


13 artificial intelligence takeaways from Constellation Research’s AI Forum

Agentic AI is going to hit sprawl quickly, boardrooms are being reconstituted over fears of being left behind, genAI is still mostly an experiment with fuzzy returns and old-school issues like change management still determine whether companies successfully move from pilot to production.

Those are a few of the takeaways from Constellation Research's AI Forum. Here's a look at everything we learned at the AI Forum in New York.

The board of directors is driving the AI conversation. AI is clearly a boardroom issue, said Betsy Atkins, CEO of Bajacorp. "What boards have figured out is that if they don't lean in and adopt AI and technology they're going to be left behind," said Atkins.

Boards are also being reconstituted for AI. "I see boards shifting in terms of cohorts," said Atkins, who said enterprises are creating boardrooms that can look at technology as well as new business models to differentiate.

However, the board also wants ROI. Atkins said that enterprises are looking at use cases with quick ROI because boards now realize how expensive AI can be.

Change management is more important than technical capability in production generative AI deployments, said Michael Park, SVP and Global Head of AI GTM at ServiceNow. Park added: "I think getting the data structure ready and the instance ready is the easy part. That's just the tech, and there's hard work that needs to be done around it. The challenge that we're seeing right now is the organizational change management and getting people to see what's possible. Change management has been the biggest struggle. The tech is real."

Agentic AI. "There's no doubt that agentic AI is the future," said Park. He said there will be two domains of AI agents. One will augment a human being to supercharge capabilities. And another domain will be an aggregated set of agents that work on behalf of a unit. "I think every job is going to be affected in some ways and transform productivity for employees and customer experiences," said Park.

Attendees at the AI Forum generally agreed that the agentic AI wave is real, but doubted the technology has quite caught up with production use cases yet. That skepticism sure hasn’t stopped vendors from talking about agentic AI though.

In recent weeks, Salesforce, Workday, Microsoft, HubSpot, ServiceNow, Google Cloud and Oracle all talked about AI agents and likely overloaded CxOs who have spent the last 18 months trying to move genAI from pilot to production. Other genAI front runners--Rocket, Intuit, JPMorgan Chase--have mostly taken the DIY approach and are now evolving strategies.

Agent orchestration will be needed quickly because overload will be here soon. Boomi CEO Steve Lucas said the number of AI agents will outnumber the number of people in your business in less than three years. "The digital imperative is how do I work with agents? The number of agents will outnumber the number of humans in less than three years," said Lucas. Fun fact: Constellation Research analyst Holger Mueller thinks Lucas' prediction is way conservative.


Healthcare is expected to be the most transformed industry by AI. Multiple attendees and panelists at AI Forum noted that healthcare will see the most transformational impact from AI. Anand Iyer, Chief AI Officer at Welldoc, said data and AI can transform outcomes and make care more preventative. Iyer said: "You can actually figure out what cocktail of exercise, food, stress reduction, and all of these vectors that drive somebody's own health. You can figure out the exact cocktail that works for Person X, in a way that fits into their life flow and their clinician's workflow."

There may be a catch with AI health transformation. AI will bring costs down initially but may end up being more expensive due to the level of personalization.

Trust in AI will take time. Scott Gnau, Vice President of Data Platforms at InterSystems, said every technology wave requires time to earn trust. Gnau noted that building trust in a technology can take years, but AI has a chance to earn user trust quickly. "One or two bad answers can set generative AI back, but the addition of provenance and governance helps," said Gnau. "One of the game changers is that AI can actually be used to explain the provenance of the answer. I think we have a unique opportunity to accelerate trust."

Generative AI is still in the science experiment phase. Gnau added that there's a degree of FOMO with deploying AI. "Is generative AI or a large language model the right tool for every problem out there? Absolutely not. There are things that you've built that run your business today that are good so don't suck all of the budget away and let them crumble," said Gnau. "Make those systems better with AI and use the right tool for the right processes."

The AI playbook isn't fully baked. In a pop-up survey at the AI Forum, 35 CxOs indicated that they are trying a little bit of everything when it comes to AI (good thing they could give multiple answers). Respondents indicated that they were using multiple approaches to build AI capabilities. The majority (79%) said they were developing home-grown AI services on hyperscale cloud services and 48% were also using open-source frameworks and large language models. Many of these efforts included AI embedded in packaged applications that they already used such as Salesforce, Adobe, Oracle, SAP etc.


Data quality remains the biggest hurdle in generative AI deployments. "Data quality is the biggest roadblock to realizing generative AI's full value. You need a data driven strategy combined with a model driven strategy and then you can iterate quickly," said Michelle Bonat, Chief AI Officer of AI Squared. But without a focus on data quality, your models won't be good enough to use.

The role of the Chief AI Officer. Chief AI Officers will need to know a broad range of business functions and technology, much like CIOs and CTOs, but leading AI strategy requires knowing the technologies deeply. "I think it's necessary to have someone with a good knowledge of AI," said Phong Nguyen, Chief AI Officer of FPT Software, which is based in Vietnam. "You need to have the deep technical skills and understand what AI can bring."

Minerva Tantoco, CEO of City Strategies LLC, agreed. "When something is relatively new with a lot of potential it really does require a strong alignment with the goals of the organization," she said. "Once you set a strategy it becomes the fabric of the enterprise. But in the beginning, you want the chief AI officer to have a really strong background in AI. This is a transformational role."

AI leaders need to be trilingual. Tantoco said AI leaders need to be trilingual in technology, business, and governance/compliance. "At this stage, you need to collaborate across multiple disciplines while leaning into the strong technical background," she said.

David Trice, CEO of inZspire AI, said AI leaders have multiple roles to juggle. First, enterprises need to drive AI or they'll fall behind. Trice echoed Tantoco's sentiment that AI leaders need to bring multiple threads together. "Product, data and AI innovation need to be at the table with legal, compliance and security," said Trice.

Human led AI or vice versa? Chris Nicholas, President and CEO of Sam's Club, said artificial intelligence is enabling the company to "take 100 million tasks out of our clubs" even though it has more associates. Yet he has a clear view on who leads the AI charge: humans. AI is about freeing humans from the mundane to solve customer problems.

AI and human rights. But just in case AI does kill jobs, it's worth pondering a human rights update for the age we're entering. Will there be a reskilling safety net and the ability for humans to pursue their passions for a living? A workshop on AI and human rights surfaced a lot of thoughts about the right to work as well as the right to opt out of AI and what's likely to become augmented humanity. One prevailing thought was that today we are working toward using AI to augment human intelligence; in the future, that pecking order may reverse, with human intelligence augmenting AI.

