# Reconsider `llm.Conversation` in favor of allowing prompts to be in reply to responses #938
I'm trying to figure out if this is a blocker for tools or not.
If I were to do this, here's one potential design:
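A minimal sketch of one shape such a design could take, assuming a hypothetical `reply_to=` keyword argument on `model.prompt()` (this argument does not exist in the current API):

```python
import llm

model = llm.get_model("gpt-4o-mini")

# The first prompt works exactly as it does today
response = model.prompt("Tell me a joke about pelicans")

# Hypothetical: a follow-up prompt replies to a previous response,
# rather than both prompts living inside an llm.Conversation
follow_up = model.prompt(
    "Explain why that joke is funny",
    reply_to=response,
)
print(follow_up.text())
```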
I'm going to do a research spike on this in a branch.
I think this is the migration:

```python
@migration
def m018_replies(db):
    db["responses"].add_column("reply_to_id", str)
    db["responses"].add_foreign_key("reply_to_id", "responses", "id")
    db["responses"].transform(
        column_order=(
            "id",
            "reply_to_id",
            "model",
            "prompt",
            "system",
            "prompt_id",
            "system_id",
            "schema_id",
            "prompt_json",
            "options_json",
            "response",
            "response_json",
            "conversation_id",
            "first_token_ms",
            "duration_ms",
            "datetime_utc",
            "input_tokens",
            "output_tokens",
            "token_details",
        ),
    )
```
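Once `reply_to_id` exists, a full reply chain can be reconstructed by following the foreign key back to the root response. A sketch using sqlite-utils (my own illustration, not code from the branch):

```python
import sqlite_utils

def response_chain(db, response_id):
    # Follow reply_to_id links back to the root response,
    # returning the chain oldest-first
    chain = []
    current = response_id
    while current:
        row = next(
            db.query("select * from responses where id = ?", [current]),
            None,
        )
        if row is None:
            break
        chain.append(row)
        current = row["reply_to_id"]
    return list(reversed(chain))

db = sqlite_utils.Database("logs.db")
chain = response_chain(db, "some-response-id")  # any id from the responses table
```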
I guess this means the […]. Would be neater to just […].
Also interesting: currently the […] (`llm/default_plugins/openai_models.py`, lines 573 to 578 in e78e1fc).

A lot of those then have methods like this one (`llm/default_plugins/openai_models.py`, lines 468 to 480 in e78e1fc).

Note how the conversation is used to build up the list of messages. In the new reply-to world that won't be necessary: just being passed the previous response will be enough. This is great news for implementing tools, because it helps solve the thorny problem of keeping the ID from tool call requests so it can be matched up with the IDs in the tool call replies.
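A rough sketch of what a reply-to flavoured message builder could look like instead, assuming `Response` objects expose `.prompt` and `.text()` as they do today (the shape here is my guess, not the plugin's actual code):

```python
def build_messages(prompt, previous_responses):
    # previous_responses: the reply chain, oldest first
    messages = []
    if prompt.system:
        messages.append({"role": "system", "content": prompt.system})
    for response in previous_responses:
        # Each earlier exchange contributes a user/assistant pair
        messages.append({"role": "user", "content": response.prompt.prompt})
        messages.append({"role": "assistant", "content": response.text()})
    messages.append({"role": "user", "content": prompt.prompt})
    return messages
```

Tool calls would slot into the same loop: each stored response carries the tool call IDs it issued, so the matching tool result messages could be emitted right next to them.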
This change could be a breaking change for existing plugins. That's worth thinking about - it may be possible to keep them working by detecting if their `execute()` method signature still includes a `conversation` parameter.

Current docs: `docs/plugins/tutorial-model-plugin.md`, line 224 in e78e1fc.
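One way that detection could work, sketched with `inspect` (the mechanism is my assumption, not something settled in the issue):

```python
import inspect

def expects_conversation(model):
    # Old-style plugins declare execute(self, prompt, stream,
    # response, conversation); a new-style plugin could drop the
    # final parameter and receive the previous response instead
    parameters = inspect.signature(model.execute).parameters
    return "conversation" in parameters
```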
I'm already reconsidering what […]
Now that I've built this, I can try the following:

```bash
llm -f github:simonw/llm \
  -f issue:simonw/llm/938 \
  -m gemini-2.5-pro-exp-03-25 \
  --system 'muse on this issue, then propose a whole bunch of code to help implement it'
```

Gemini 2.5 Pro came up with a whole bunch of suggestions, and charged me 66.36 cents: https://gist.github.com/simonw/a5f0c1e8184f4ddc8b71b30890fe690c
I was originally planning on implementing tools (#898) as part of `llm.Conversation`, but I've been having second thoughts about that. There's a lot to be said for allowing prompts to reply to other responses, and having those form a chain.

Two big advantages:

- […]
- […] `llm.Conversation` if they are going to send prompts that follow up on other prompts - particularly useful for implementing tools. (For contrast, the current conversation-based flow is sketched below.)
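This is roughly what chained prompts look like with the current `llm.Conversation` API (this matches the existing Python API as documented; the model name is an arbitrary choice):

```python
import llm

model = llm.get_model("gpt-4o-mini")

# Today: follow-up prompts have to go through an llm.Conversation
conversation = model.conversation()
first = conversation.prompt("Describe the llm plugin system")
second = conversation.prompt("Now summarize that in one sentence")
print(second.text())
```

Under the reply-to proposal, the second prompt would instead reference `first` directly, and no conversation object would be needed.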