Summary
In this chapter, we explored how to achieve better results from LLMs by applying chain-of-thought (CoT) prompting and prompt chaining to coding tasks with a broader scope.
With CoT prompting, we saw how introducing reasoning steps into our prompts enables the model to handle more nuanced challenges, such as implementing a geometric mean function that supports negative net returns. We used function names as intermediate reasoning steps, while relying on Copilot, ChatGPT, and the OpenAI API to fill in the implementation details.
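As a reminder of the kind of output this approach can produce, here is a minimal sketch of a geometric mean function that handles negative net returns. The helper name, validation details, and example values are illustrative, not the exact code generated in the chapter; they simply echo the idea of letting descriptive function names stand in for intermediate reasoning steps.

```python
import math


def shift_returns_to_growth_factors(returns):
    """Convert net returns (e.g. -0.05 for -5%) to growth factors (0.95)."""
    return [1.0 + r for r in returns]


def geometric_mean_return(returns):
    """Compute the geometric mean of net returns, supporting negative values.

    Valid as long as every return is greater than -100%, so that each
    growth factor stays positive.
    """
    factors = shift_returns_to_growth_factors(returns)
    if any(f <= 0 for f in factors):
        raise ValueError("Each net return must be greater than -100%.")
    # Average in log space instead of multiplying factors directly,
    # which avoids overflow/underflow for long return series.
    log_mean = sum(math.log(f) for f in factors) / len(factors)
    return math.exp(log_mean) - 1.0


print(geometric_mean_return([0.10, -0.05, 0.20]))  # roughly 0.078
```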
Through chaining, we began with an initial implementation that was functionally correct and iteratively improved it by adding type hints and refining docstrings. When using the OpenAI API, we introduced a selective history approach that keeps chaining efficient even as the chain of tasks grows longer.
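The sketch below illustrates the selective history idea under stated assumptions: it uses the current OpenAI Python client, a placeholder model name, and a hypothetical `run_chain_step` helper, so the prompts and structure may differ from the chapter's actual code. The key point is that each step resends only the latest code version plus the new instruction, rather than the full conversation history.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = "You are a careful Python developer. Return only code."


def run_chain_step(instruction: str, latest_code: str) -> str:
    """Run one chaining step while keeping only the most recent code version.

    The selective history approach passes just the latest implementation and
    the new instruction, so token usage stays roughly flat as the chain grows.
    """
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"{instruction}\n\nCurrent code:\n{latest_code}"},
    ]
    # Model name is a placeholder; substitute whichever model you are using.
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content


code = "def geometric_mean_return(returns): ..."
for step in ["Add type hints.", "Refine the docstring."]:
    code = run_chain_step(step, code)
```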
In the next chapter, we will delve deeper into refactoring code with GenAI applications. Later in the book, we will introduce advanced prompt engineering...