Matt Wood, Vice President of AI at AWS, outlined how enterprises will mix and match multiple models depending on use case, why model orchestration will become critical, and how regulated industries may have an advantage in adopting generative AI.

Wood spoke with industry analysts including me and Doug Henschen at AWS' New York offices. Here are some of the key themes to note.

AWS is differentiating on ensuring customers retain IP and confidentiality. Wood says that, unlike competitors, AWS "does not mix training data for models with customer data" and "does not allow any human review" of training data. The key competitor cited was Azure OpenAI.

Wood said:

"There is a schism appearing in some customers' minds, where you have to be willing to give up some level of IP protection or confidentiality or privacy of your data in order to be successful with generative AI. And that just is not the case on AWS. Customers are unwilling to do that in any of the industries, particularly those in regulated industries."

Regulated industries such as financial services, insurance and health care are traditionally known as technology laggards, but these enterprises are all over generative AI. Wood said they are "moving slightly faster than the average" on generative AI.

Wood said:

"A lot of the regulatory approaches and compliance that those organizations have been working on for the past 20 years actually set them up very well to work with generative AI. All the governance, privacy and security data standards allow you to be able to get to utility and value with generative AI very quickly."

"Companies are starting to develop muscle around model evaluation" and they are "no longer tightly coupling to single models," said Wood. Doug Henschen's take: "That's consistent with what we hear from GCP and even Microsoft, but Wood insists AWS will be 'Switzerland' and Bedrock will remain differentiated on model diversity."

Wood said model choice is the only way to go in the long run. "Other providers are very married to a very small subset of models. And what that means is that customers end up having to approach a model a bit like a Swiss Army knife, which sounds great, except if a contractor turned up to fix your house and all they had was a Swiss Army knife, you would not be very happy."

Ultimately, enterprises will toggle between speed, use case and cost when managing model portfolios. Wood added that this portfolio management of models is already carrying over to compute. He said if training speed is needed, Nvidia GPUs get the call; if cost is more of a factor, AWS' Trainium chip is an option. Wood said interest among the customer base is split about 50/50 between the need for speed and cost.

Model orchestration will become critical. "What we've seen is that a big part of success in the actual broad production use of generative AI is being able to access the right model for the right use case," said Wood. "Different models operate and have different use cases and have different sweet spots. The real kind of superpower for generative AI is the combination of those models and the compounding effect on the aggregate intelligence of the system."

Orchestration of models in Bedrock will evolve over time so enterprises can leverage multiple models. Wood said orchestration today requires hardcoded rules to route data to models, but as models improve at handling more tasks, generative AI itself will be able to play point guard.
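The "hardcoded rules" stage Wood describes can be pictured as a simple routing table that maps a task type to a Bedrock model ID. The sketch below is illustrative only: the model IDs, task names and routing rules are assumptions, not AWS recommendations, and the commented-out invocation shows how the chosen ID would feed boto3's bedrock-runtime client.

```python
# Illustrative rule-based model router, the "hardcoded rules" phase of
# orchestration. Model IDs and task categories are assumptions.
ROUTES = {
    "summarize": "anthropic.claude-3-haiku-20240307-v1:0",    # cheap and fast
    "code":      "anthropic.claude-3-5-sonnet-20240620-v1:0", # stronger reasoning
    "classify":  "amazon.titan-text-lite-v1",                 # lowest cost
}

DEFAULT_MODEL = "anthropic.claude-3-sonnet-20240229-v1:0"  # assumed fallback

def route(task_type: str) -> str:
    """Return the Bedrock model ID for a task, falling back to a default."""
    return ROUTES.get(task_type, DEFAULT_MODEL)

# Invoking the selected model would then use boto3's bedrock-runtime
# client (requires AWS credentials; shown here for context only):
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.converse(
#       modelId=route("summarize"),
#       messages=[{"role": "user", "content": [{"text": "Summarize ..."}]}],
#   )
```

As models improve, the static ROUTES table is what would give way to a model-driven dispatcher, the "point guard" role Wood anticipates.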

Q leverages multiple models based on specialty and use cases. "Amazon Q is the easy button for generative AI" and it provides all the genAI many companies want to use, said Wood. He specifically touted "a step-function change" in digital transformation and migration projects, such as Windows to Linux moves. Henschen said cloud migration will be an important use of genAI, serving as an accelerator and cost saver for customers looking to migrate off legacy platforms and onto modern cloud platforms.

Wood noted that Q runs on a variety of models via a series of expert agents.