When it comes to generative artificial intelligence (GenAI), innovation has outpaced experimentation and adoption. Enterprises are struggling to keep up with the pace of GenAI innovation; if they can’t experiment, adapt, and implement faster, their competitors will. Worse yet, someone from an adjacent industry will swoop in and sweep their customers off their feet. To avoid this, many enterprises are running multiple experiments with multiple GenAI providers simultaneously. The problem is that GenAI offerings vary widely, and it is hard for enterprises to bring them into the fold in a way that complies with corporate governance guidelines.
Apart from governance, security, privacy, and other logistical concerns, there is also the procurement process. Large enterprises cannot simply pick vendors at random, run experiments, sign contracts, and execute flawlessly on an unfamiliar platform with a technology that is unproven at best. More often, they look to an established provider to offer a comparable solution, equal to or better than what the market provides. Most large enterprises already use many Amazon Web Services (AWS) offerings for cloud, storage, and some AI functions and features. With its new set of announcements, AWS is trying to establish a major presence in AI, and in GenAI in particular.
Amazon is innovating across multiple layers to provide a more efficient, optimal GenAI stack, in an effort dubbed “Unlocking the value of Generative AI,” which was on full display at the re:Invent 2023 conference (see Figure 1). Rather than confusing the audience with many layers, the company simplified its messaging to three macro layers of the GenAI stack. The bottom layer is what Amazon calls the infrastructure layer, where all the LEGO-like infrastructure components for training and inference sit. The middle layer is where Amazon offers the tools for building AI applications with large language models (LLMs) and foundation models (FMs). The top layer is the GenAI application layer, where the vendor offers applications built with those LLMs and FMs.
Although AWS has already demonstrated some applications and has others in the works, the company is keeping its core message clear and staying true to its long-term positioning: regardless of the technology, it will provide the necessary components, and expertise if needed, to build and run what you need, when you need it, and how you need it. While many other providers push one-size-fits-all gigantic LLMs touted as the AI savior for every enterprise, AWS boldly claims that one size will not fit all. In fact, its core re:Invent 2023 theme was “No one model will rule them all.”
Has AWS done enough to convince the varied demographics of GenAI users, from developers to large enterprises, to build their GenAI solutions on AWS? Is AWS outpacing the competition in GenAI? How does its offering compare with the competition on price, performance, and features? Did AWS miss the boat on GenAI? This report analyzes these questions and offers recommendations on how to choose the right provider.