Spotlight
Amazon Q: Is GenAI a Feature or a System?
Identifying where challenges and advantages exist in the quest for immediate value in Generative AI.
This week, AWS made Anthropic’s industry-leading foundational model, Claude 3 Opus, publicly available on Amazon Bedrock. This much-anticipated announcement comes after AWS’s groundbreaking $4 billion investment in Anthropic and its Generative AI (GenAI) capabilities.
The release gives AWS an edge over other model hosting providers, or at least puts it in the same echelon as the very best. According to standard benchmarks, Claude 3 Opus is the industry-leading foundational model in terms of overall intelligence. Below are some of these benchmarks, taken from Anthropic’s official announcement. See the note at the end of this post for one callout regarding GPT-4 Turbo’s omission¹.
Image Source: https://www.anthropic.com/news...
The top-performing model a hosting provider offers often carries greater weight than its other models. When approaching a GenAI workload, it is usually most effective to start with the state-of-the-art foundational model, confirm it can address your problem, and only then optimize toward more efficient models, shorter prompts, and so on. Andrej Karpathy gave a more detailed description of this approach in his State of GPT talk last year.
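To make that starting point concrete, here is a minimal sketch of prototyping against the strongest available model first on Amazon Bedrock, assuming Python with boto3 and that Claude 3 Opus is enabled for your account and region. The model ID, region, and prompt shown are illustrative assumptions; verify the identifiers available to you in the Bedrock console.

```python
import json

import boto3

# Prototype against the strongest model first; optimize to cheaper models later.
# Region and model ID below are assumptions -- confirm them in your Bedrock console.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_ID = "anthropic.claude-3-opus-20240229-v1:0"  # assumed Claude 3 Opus model ID

# Anthropic models on Bedrock use the Messages API request format.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Summarize the key risks in this contract: <your text here>"}
    ],
})

response = bedrock_runtime.invoke_model(modelId=MODEL_ID, body=body)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

Once the strongest model demonstrably solves the task, the same request shape can be pointed at a smaller, cheaper model ID to measure how much quality you give up for the savings.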
Claude 3 Opus will not be the industry standard forever, and probably not even for very long. Sam Altman has publicly stated that he expects an iteration between GPT-4 and GPT-5 (most likely GPT-4.5) this year, and more recent speculation suggests it could arrive as soon as June. See OpenAI’s Community Forum thread on the related potential leak. This is not exclusive to OpenAI; more generally, the graphs shared below demonstrate the growing pace of development around GenAI technology. Even as that pace increases, I think we can be confident that AWS is serious about GenAI, is making the investments to keep up in this LLM arms race we find ourselves in, and will stay in the top tier of model hosting providers. We will certainly continue to monitor the landscape.
Image Source: A Comprehensive Overview of Large Language Models
AWS’s guarantee that customer data remains the customer’s own (your GenAI workloads are not used to train models or for other purposes; data in your AWS account stays in your account), its full suite of native tooling and services, and its position hosting state-of-the-art foundational models make Amazon Bedrock a very compelling choice for hosting your foundational models over other major providers.