Enterprises need to focus on data lakehouse strategies in 2024 to take full advantage of generative AI; model architecture will be critical to managing large and small models; fine-tuning is more difficult than you'd think; and CXOs were weary of database vendors glomming on to genAI hype.

Those were some of the takeaways from Constellation Research's April 5 BT150 CXO meetup.

These gatherings, held under the Chatham House Rule, are a venue to share information and emerging trends.

Here's a look at the topics from our April meetup.

Fine-tuning isn't as easy as you'd think. While fine-tuning and customizing a foundational model should be easier than training a large language model from scratch, the process is more involved than it appears. The tooling isn't mature enough yet for fine-tuning at scale, and enterprises are still evaluating where to host their data.

Get that data lakehouse. Enterprises are coming around to the reality that they need to have a data strategy before even thinking about AI. Considerations include:

  • Ability to move data to models in real time.
  • Need to combine enterprise data with third-party data.
  • Costs.
  • Need for real-time data ingestion.
  • Whether to build your own enterprise data lakehouse.
  • A benefit of a data lakehouse strategy is that the business will also see business intelligence and analytics gains, along with the promise of big data.
  • Take 2024 to nail down the data lakehouse strategy to prepare your company for AI.

Foundational model strategy. CXOs and Constellation Research analysts expect industry- and role-specific models to emerge. In addition, enterprises will need model strategies that incorporate the language specific to their industries and companies. Enterprises will need to think through model architectures to manage models for finance, HR, manufacturing, and other functions.

Kill switches. There was a good debate on our call about the need for an AI kill switch. On one side, models will be hacked, and when that happens you'll need to be able to pull the plug and recover. The argument against the kill switch concept is that other parts of the enterprise don't automatically shut down, so AI shouldn't be treated differently.

Database vendor hype. CXOs were exhausted by transactional database vendors glomming on to the generative AI hype. These vendors are mostly concerned about you moving your data away from them. Wait until you see a feasible generative AI solution from database vendors before falling for the hype.

Small models vs. large ones. Enterprises could see LLMs as boiling the ocean, and many CXOs and vendors are talking about smaller models that are specific to a task or process. The reality is that models will require a hybrid approach: some will be large, some small, and some will run locally.

Model suites will always win? CXOs will take a best-of-breed approach to models due to current conditions and hardware limitations, but ultimately the suite approach is likely to win. Specialist models will be better for some tasks, but economies of scale over time favor generalists and suites. The feedback loop of more data and context is likely to favor large models. Beware of small-model chatter from vendors without a comprehensive AI strategy or access to a large language model.

Generative experiences with avatars. One CXO was piloting a series of avatars to personalize generative experiences by language and use case. This avatar-meets-genAI approach appears to be positive for both host and participant. The CXO noted that starting with a framework, governance, and privacy controls is a key enabler for generative AI use cases.

AI will create interesting dynamics in the labor pool. Analytics and data science roles are likely to be affected despite what recent surveys have indicated. A college student with the free time to experiment with prompts can replicate the output of someone who has spent decades doing predictive analytics and data science. Simply put, the entire skill model for enterprises is going to change.

High-performance computing will change due to generative AI. HPC is going to have to evolve because it sits in the middle of the generative AI revolution. Nvidia's Blackwell launch featured a series of GPU clusters that will likely compete with supercomputers. Generative AI workloads will fundamentally change compute.