{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Step-Back Prompting (Question-Answering)\n", "\n", "You can also run this notebook online [on Noteable.io](https://app.noteable.io/published/ff04b7ba-efaf-48d5-ba4d-b6bff0359ceb).\n", "\n", "One prompting technique called \"Step-Back\" prompting can improve performance on complex questions by first asking a \"step back\" question. This can be combined with regular question-answering applications by then doing retrieval on both the original and step-back question.\n", "\n", "Read the paper [here](https://arxiv.org/abs/2310.06117)\n", "\n", "See an excellent blog post on this by Cobus Greyling [here](https://cobusgreyling.medium.com/a-new-prompt-engineering-technique-has-been-introduced-called-step-back-prompting-b00e8954cacb)\n", "\n", "In this cookbook we will replicate this technique. We modify the prompts used slightly to work better with chat models." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, define the step back prompts & examples. This will be used later when querying the model to generate a better query based on the users original question." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import {\n", " ChatPromptTemplate,\n", " FewShotChatMessagePromptTemplate,\n", "} from \"npm:langchain@0.0.177/prompts\";" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "const examples = [\n", " {\n", " input: \"Could the members of The Police perform lawful arrests?\",\n", " output: \"what can the members of The Police do?\",\n", " },\n", " {\n", " input: \"Jan Sindel's was born in what country?\",\n", " output: \"what is Jan Sindel's personal history?\",\n", " },\n", "];\n", "const examplePrompt = ChatPromptTemplate.fromMessages([\n", " [\"human\", \"{input}\"],\n", " [\"ai\", \"{output}\"],\n", "]);\n", "const fewShotPrompt = new FewShotChatMessagePromptTemplate({\n", " examplePrompt,\n", " examples,\n", " inputVariables: [], // no input variables\n", "});" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, format the few shot prompt as a string so we can pass it into the next step for the prompt template." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "const formattedFewShot = await fewShotPrompt.format({});" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "const prompt = ChatPromptTemplate.fromMessages([\n", " [\n", " \"system\",\n", " `You are an expert at world knowledge. Your task is to step back and paraphrase a question to a more generic step-back question, which is easier to answer. Here are a few examples:`,\n", " ],\n", " formattedFewShot,\n", " [\"user\", \"{question}\"],\n", "]);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Question Generator Chain\n", "\n", "Now construct the question generator using a RunnableSequence. This will be the chain which actually generates the updated query." 
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "import { ChatOpenAI } from \"npm:langchain@0.0.177/chat_models/openai\";\n", "import { StringOutputParser } from \"npm:langchain@0.0.177/schema/output_parser\";\n", "import { RunnableSequence } from \"npm:langchain@0.0.177/schema/runnable\";" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "// Deno.env.set(\"OPENAI_API_KEY\", \"\");\n", "\n", "const model = new ChatOpenAI({ temperature: 0 });\n", "const stringOutputParser = new StringOutputParser();" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "const questionGenerator = RunnableSequence.from([\n", " prompt,\n", " model,\n", " stringOutputParser,\n", "]);" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "const question = \"was chatgpt around while trump was president?\";" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\u001b[32m\"What is the timeline of ChatGPT's existence?\"\u001b[39m" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "await questionGenerator.invoke({ question })" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Search Retriever\n", "Next, we'll use the SerpAPI as a search tool, and turn it into a retriever using a simple wrapper." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "import { SerpAPI } from \"npm:langchain@0.0.177/tools\";" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "const search = new SerpAPI(Deno.env.get(\"SERPAPI_API_KEY\"), {\n", " num: \"4\", // Number of results\n", "});" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "const retriever = async (query: string) => search.call(query);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, pull in a predefined step-back prompt from the [LangChain Hub](https://smith.langchain.com/hub)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "import { pull } from \"npm:langchain@0.0.177/hub\";" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "// const responsePromptTemplate = `You are an expert of world knowledge. I am going to ask you a question. Your response should be comprehensive and not contradicted with the following context if they are relevant. Otherwise, ignore them if they are not relevant.\n", "\n", "// {normal_context}\n", "// {step_back_context}\n", "\n", "// Original Question: {question}\n", "// Answer:`;\n", "// const responsePrompt = ChatPromptTemplate.fromTemplate(responsePromptTemplate)\n", "const responsePrompt = await pull(\"langchain-ai/stepback-answer\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Final Chain\n", "Lastly, define our chain which will pull all the above pieces together." 
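] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One detail worth calling out: the object literal in the first step of the chain is coerced into a map, so `normal_context`, `step_back_context`, and `question` are each computed from the same input and then fill the matching variables in the response prompt. Below is a hypothetical, self-contained sketch of that coercion (all names are made up for illustration):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "// Hypothetical illustration (not part of the original cookbook): an object\n", "// literal inside a sequence becomes a map step, so every key is computed\n", "// from the same input object.\n", "import { RunnableSequence } from \"npm:langchain@0.0.177/schema/runnable\";\n", "\n", "const mapDemo = RunnableSequence.from([\n", "  {\n", "    doubled: (i: { n: number }) => i.n * 2,\n", "    original: (i: { n: number }) => i.n,\n", "  },\n", "  (m: { doubled: number; original: number }) => `${m.original} -> ${m.doubled}`,\n", "]);\n", "await mapDemo.invoke({ n: 21 }); // \"21 -> 42\""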
] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "const chain = RunnableSequence.from([\n", "  {\n", "    // Retrieve context using the original question as-is.\n", "    normal_context: (i: { question: string }) => retriever(i.question),\n", "    // Generate the step-back question, then retrieve context for it.\n", "    step_back_context: questionGenerator.pipe(retriever),\n", "    // Pass the original question through to the response prompt.\n", "    question: (i: { question: string }) => i.question,\n", "  },\n", "  responsePrompt,\n", "  model,\n", "  stringOutputParser,\n", "]);" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\u001b[32m\"Yes, ChatGPT was around while Donald Trump was president. The exact timeline of ChatGPT's developmen\"\u001b[39m... 415 more characters" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "await chain.invoke({\n", "  question,\n", "});" ] } ], "metadata": { "kernelspec": { "display_name": "Deno", "language": "typescript", "name": "deno" }, "language_info": { "file_extension": ".ts", "mimetype": "text/x.typescript", "name": "typescript", "nb_converter": "script", "pygments_lexer": "typescript", "version": "5.2.2" } }, "nbformat": 4, "nbformat_minor": 2 }