From the course: Enterprise AI Solutions with AWS: Amazon Q Business, Bedrock Knowledge Bases, and SageMaker MLOps
Diminishing scale in LLMs
- [Instructor] There's a persistent myth in AI development that says, "Just add more data and more compute, and performance will keep improving." But let's talk about why that's wrong, using a very simple visualization of hardware scaling limits in large language models. First, the idea versus the reality: the white dashed line shows the "more data" fallacy, where people assume that if you keep doubling your compute, you'll double your performance. That isn't true, according to Amdahl's law; it's what investors and the media imagine is possible. If we look at the early scaling here, the blue line, you can see that it tracks pretty well initially, but as you add more and more compute, you get diminishing returns. Then there's the wall, the vertical red line, which is the scaling wall. And this isn't an arbitrary limit, it's a fundamental consequence of hardware architecture, described by Amdahl's law. So no amount of engineering is going to…
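To make the diminishing-returns curve concrete, here is a minimal Python sketch of Amdahl's law. The 95% parallel fraction and the compute-unit counts are illustrative assumptions, not numbers from the course; the point is that the serial remainder of the workload caps the achievable speedup no matter how much hardware you add.

```python
# Minimal sketch of Amdahl's law: why doubling compute stops doubling performance.
# The parallel fraction p = 0.95 is a hypothetical value chosen for illustration.

def amdahl_speedup(parallel_fraction: float, n_units: int) -> float:
    """Speedup = 1 / ((1 - p) + p / n); the serial part (1 - p) caps the gain."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

if __name__ == "__main__":
    p = 0.95  # fraction of the workload that actually benefits from added compute
    for n in [1, 2, 4, 8, 16, 64, 256, 1024]:
        print(f"{n:>5} compute units -> {amdahl_speedup(p, n):6.2f}x speedup")
    # Speedup climbs quickly at first, then flattens toward 1 / (1 - p) = 20x,
    # the "scaling wall" that extra hardware alone cannot move.
```

Running it shows speedup roughly tracking the unit count early on (about 1.9x at 2 units, 3.5x at 4) and then stalling near 20x even at 1,024 units, which mirrors the blue curve bending away from the white dashed line in the visualization.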