Using prompt chaining for LLMs
Prompt chaining is another highly effective prompt engineering technique for getting better results from LLMs. It involves breaking a task down into smaller, sequential steps, each of which the model can complete more reliably on its own.
For instance, when implementing get_average_return, we may want to enhance ChatGPT's or OpenAI's initial output by adding type hints and avoiding inline calculations in the return statement. With GitHub Copilot, we might want to construct a barebones implementation first and add a Google Style docstring later.
Although we could include all these elements in the initial prompt, it is often more natural and effective to start with an implementation that is functionally correct. From there, we can refine the code step by step through a series of follow-up prompts.
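To make the progression concrete, the following sketch contrasts a plausible first draft with the version that a chain of follow-up prompts might produce. The body of get_average_return is not shown in the text, so this assumes it computes the mean periodic return of a price series.

# Step 1: a functionally correct first draft, as the initial prompt might yield
def get_average_return(prices):
    returns = []
    for i in range(1, len(prices)):
        returns.append((prices[i] - prices[i - 1]) / prices[i - 1])
    return sum(returns) / len(returns)

# Step 2: the follow-up prompts add type hints, remove the inline calculation
# from the return statement, and add a Google Style docstring
def get_average_return(prices: list[float]) -> float:
    """Computes the average periodic return of a price series.

    Args:
        prices: Ordered prices, for example daily closing prices.

    Returns:
        The arithmetic mean of the period-over-period returns.
    """
    returns = [
        (current - previous) / previous
        for previous, current in zip(prices, prices[1:])
    ]
    average_return = sum(returns) / len(returns)
    return average_return

Each follow-up prompt changes exactly one aspect of the code, which keeps every refinement easy to verify before moving on to the next one.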
Chaining with ChatGPT
Applying chaining with ChatGPT is intuitive, given that the UI is already designed for a conversational style...
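The same back-and-forth can also be reproduced programmatically. Below is a minimal sketch using the OpenAI Python SDK, in which each follow-up prompt is appended to the running conversation so the model refines its previous answer rather than starting over; the model name and prompt wording are illustrative assumptions, not taken from the text.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = []
follow_ups = [
    "Implement a Python function get_average_return that computes "
    "the average periodic return of a list of prices.",
    "Add type hints.",
    "Avoid inline calculations in the return statement.",
    "Add a Google Style docstring.",
]

reply = ""
for prompt in follow_ups:
    # Append the follow-up to the conversation so the model sees its
    # earlier answers and builds on them, just as in the ChatGPT UI.
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat model works here
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})

print(reply)  # the final, refined implementation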