Tools #898
I think the first step here is going to be designing and adding a tool definition abstraction to the … The fiddly bit will be which part of the code handles acting on those tool requests, executing the associated Python functions and triggering a follow-up prompt with the results. I had originally intended this to be a thing that I'm going to want …
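To make the idea concrete, a tool definition abstraction along these lines might look like the sketch below. Everything here is hypothetical for illustration — the `Tool` class, `from_function`, and `execute` are assumed names, not the actual LLM API:

```python
import inspect
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class Tool:
    """Hypothetical wrapper turning a plain Python function into a tool definition."""
    name: str
    description: str
    parameters: Dict[str, str]
    implementation: Callable[..., Any]

    @classmethod
    def from_function(cls, fn: Callable[..., Any]) -> "Tool":
        # Derive the tool's name, description and parameter schema
        # from the function's signature and docstring.
        sig = inspect.signature(fn)
        params = {
            name: (p.annotation.__name__
                   if p.annotation is not inspect.Parameter.empty else "any")
            for name, p in sig.parameters.items()
        }
        return cls(
            name=fn.__name__,
            description=(fn.__doc__ or "").strip(),
            parameters=params,
            implementation=fn,
        )

    def execute(self, **kwargs: Any) -> Any:
        # Acting on a tool request: run the underlying function with
        # the model-supplied arguments; the result would then feed a
        # follow-up prompt.
        return self.implementation(**kwargs)


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


tool = Tool.from_function(multiply)
print(tool.name, tool.parameters)  # multiply {'a': 'int', 'b': 'int'}
print(tool.execute(a=6, b=7))      # 42
```

The provider-specific layer would then translate `tool.parameters` into each vendor's function-calling schema, while `execute` stays provider-agnostic.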
Could I ask, will MCP support be contained in the Tools feature?
I don't plan to implement MCP directly in LLM core, but I anticipate building a plugin that adds MCP support to LLM and builds on top of the new tools facility. That way I can iterate on the plugin as MCP itself evolves independently of LLM core.
I stumbled on this issue just a day before learning about Anthropic's prompt caching feature, and kind of glazed over the prompt caching mention at the time. But as I found myself getting acquainted with both at the same time, it does feel like there is some affinity there.
Hey, I came across this issue through your blog. I think I've built something along similar lines, though it looks like we've taken different approaches. My project is mostly for learning (not trying to promote anything here), feel free to take/repurpose anything that might be useful. Here are a few relevant files:
It would be great if this could support Human in the Loop (HITL). For example, if you give your LLM a Tool to execute commands on your computer, then I think it makes sense that you get a prompt to accept or make suggestions for the action. My use case is that I would like to make custom agents with google-adk and have a CLI interface for them.
That's a really good call. I was going to leave that entirely up to plugins but it would make sense for the core library to include an easy "make this action a human-in-the-loop one" flag.
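One way such a flag could work is a wrapper that asks for confirmation before the tool runs. This is a minimal sketch under assumed names (`human_in_the_loop`, the `approve` callback) — nothing here is the actual LLM API:

```python
from typing import Any, Callable, Optional


def human_in_the_loop(
    fn: Callable[..., Any],
    approve: Optional[Callable[[str], bool]] = None,
) -> Callable[..., Any]:
    """Hypothetical wrapper: require human confirmation before a tool action runs.

    `approve` receives a description of the pending call and returns True to
    allow it; by default we'd prompt on stdin, which suits a CLI interface.
    """
    def prompt_user(description: str) -> bool:
        return input(f"Allow {description}? [y/N] ").strip().lower() == "y"

    approver = approve or prompt_user

    def wrapped(**kwargs: Any) -> Any:
        args = ", ".join(f"{k}={v!r}" for k, v in kwargs.items())
        description = f"{fn.__name__}({args})"
        if not approver(description):
            # Rejection becomes a result the model can see and react to.
            return {"error": "tool call rejected by user"}
        return fn(**kwargs)

    return wrapped


def run_command(command: str) -> str:
    """Pretend to execute a shell command (stubbed for the example)."""
    return f"ran: {command}"


# Auto-approve in this example so it runs non-interactively.
guarded = human_in_the_loop(run_command, approve=lambda desc: True)
print(guarded(command="ls"))  # ran: ls
```

Feeding the rejection back to the model as a tool result (rather than raising) lets the model suggest an alternative action, which matches the "accept or make suggestions" flow described above.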
Starting a new tracking issue for tool support, the next big LLM feature.
Tools will be Python functions, LLM will provide an abstraction to get those to work against different providers. This will be covered by both the Python library and the CLI tool.
Plugins will be able to provide new tools.
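A plugin contributing tools could plausibly work through a registration hook, in the style of LLM's existing plugin system. The registry and `register_tool` decorator below are assumptions for illustration, not the real hook:

```python
from typing import Any, Callable, Dict

# Hypothetical global registry that the core library would consult
# when resolving tool names requested by a model.
TOOL_REGISTRY: Dict[str, Callable[..., Any]] = {}


def register_tool(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Decorator a plugin could use to expose a Python function as a tool."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn


# A plugin module would then simply do:
@register_tool
def fetch_url(url: str) -> str:
    """Fetch a URL and return its contents (stubbed here)."""
    return f"<contents of {url}>"


print(sorted(TOOL_REGISTRY))  # ['fetch_url']
print(TOOL_REGISTRY["fetch_url"]("https://example.com"))
```

The same registry would serve both the Python library and the CLI tool, since tool lookup by name is all either layer needs.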
Previous related issues:
The relatively new Schemas feature is a useful point of reference too.