Build AI agents that actually do things.
Combine local tools and MCP servers in a single, elegant runtime. Write agents in 5 lines of code. Run them anywhere.
Instead of spending days wiring together LLMs, tools, and execution environments, Agntrick gives you a production-ready setup instantly.
- Write Less, Do More: Create a fully functional agent with just 5 lines of Python using the zero-config `@AgentRegistry.register` decorator.
- Context is King (MCP): Native integration with Model Context Protocol (MCP) servers to give your agents live data (web search, APIs, internal databases).
- Hardcore Local Tools: Built-in blazing-fast tools (`ripgrep`, `fd`, AST parsing) so your agents can explore and understand local codebases out of the box.
- Stateful & Resilient: Powered by LangGraph to support memory, cyclic reasoning, and human-in-the-loop workflows.
- Docker-First Isolation: Every agent runs in an isolated container, so there's no more "it works on my machine" when sharing with your team.
```bash
pip install agntrick

# Or with development dependencies
pip install "agntrick[dev]"
```

Or install from source:

```bash
git clone https://github.com/jeancsil/agntrick.git
cd agntrick
make install
```

You need an LLM API key to breathe life into your agents. Agntrick supports 10+ LLM providers via LangChain!
```bash
# Copy the template
cp .env.example .env

# Edit .env and paste your API key
# Choose one of the following providers:
# OPENAI_API_KEY=sk-your-key-here
# ANTHROPIC_API_KEY=sk-ant-your-key-here
# GOOGLE_API_KEY=your-google-key
# GROQ_API_KEY=gsk-your-key-here
# MISTRAL_API_KEY=your-mistral-key-here
# COHERE_API_KEY=your-cohere-key-here

# For Ollama (local), no API key needed:
# OLLAMA_BASE_URL=http://localhost:11434
```

```bash
# List all available agents
agntrick list

# Run an agent with input
agntrick developer -i "Explain this codebase"

# Or try the learning agent with web search
agntrick learning -i "Explain quantum computing in simple terms"
```

Supported Environment Variables
Only one provider's API key is required. The framework auto-detects which provider to use based on available credentials.
```bash
# Anthropic (Recommended)
ANTHROPIC_API_KEY=sk-ant-your-key-here

# OpenAI
OPENAI_API_KEY=sk-your-key-here

# Google GenAI / Vertex
GOOGLE_API_KEY=your-google-key
GOOGLE_VERTEX_PROJECT_ID=your-project-id

# Mistral AI
MISTRAL_API_KEY=your-mistral-key-here

# Cohere
COHERE_API_KEY=your-cohere-key-here

# Azure OpenAI
AZURE_OPENAI_API_KEY=your-azure-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com

# AWS Bedrock
AWS_PROFILE=your-profile

# Ollama (Local, no API key needed)
OLLAMA_BASE_URL=http://localhost:11434

# Hugging Face
HUGGINGFACEHUB_API_TOKEN=your-hf-token
```

See docs/llm-providers.md for detailed environment variable configurations and a provider comparison.
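As an illustration, credential-based auto-detection can be sketched like this. This is a hedged sketch only: the provider names mirror the variables above, but the precedence order and `detect_provider` function are assumptions, not Agntrick's actual logic.

```python
# Sketch of credential-based provider auto-detection (hypothetical;
# Agntrick's real detection order and implementation may differ).
PROVIDER_ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
    "mistral": "MISTRAL_API_KEY",
    "cohere": "COHERE_API_KEY",
}

def detect_provider(env):
    """Return the first provider whose API key is present in env."""
    for provider, var in PROVIDER_ENV_VARS.items():
        if env.get(var):
            return provider
    # Fall back to a local Ollama instance if one is configured
    if env.get("OLLAMA_BASE_URL"):
        return "ollama"
    return None

print(detect_provider({"OPENAI_API_KEY": "sk-test"}))  # openai
```

Because only one key is needed, setting a single variable in `.env` is enough to select a provider.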
Agntrick includes several pre-built agents for common use cases:
| Agent | Purpose | MCP Servers |
|---|---|---|
| `developer` | Code Master: Read, search & edit code | fetch |
| `github-pr-reviewer` | PR Reviewer: Reviews diffs, posts inline comments & summaries | - |
| `learning` | Tutor: Step-by-step tutorials and explanations | fetch, web-forager |
| `news` | News Anchor: Aggregates top stories | fetch |
| `youtube` | Video Analyst: Extract insights from YouTube videos | fetch |

See docs/agents.md for detailed information about each agent.
Fast, zero-dependency tools for working with local codebases:
| Tool | Capability |
|---|---|
| `find_files` | Fast file search via `fd` |
| `discover_structure` | Directory tree mapping |
| `get_file_outline` | AST signature parsing |
| `read_file_fragment` | Precise file reading |
| `code_search` | Fast content search via `ripgrep` |
| `edit_file` | Safe file editing |
| `youtube_transcript` | Extract transcripts from YouTube videos |

See docs/tools.md for detailed documentation of each tool.
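To make the idea concrete, a tool like `get_file_outline` can be approximated with Python's standard `ast` module. The `outline` function below is a hedged sketch of the concept, not Agntrick's actual implementation:

```python
import ast

def outline(source):
    """Return top-level function and class signatures from Python source."""
    tree = ast.parse(source)
    lines = []
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}")
    return lines

src = "class Foo:\n    pass\n\ndef bar(x, y):\n    return x + y\n"
print(outline(src))  # ['class Foo', 'def bar(x, y)']
```

Returning only signatures instead of full file contents keeps the agent's context window small while still letting it navigate a codebase.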
Model Context Protocol servers for extending agent capabilities:
| Server | Purpose |
|---|---|
| `fetch` | Extract clean text from URLs |
| `web-forager` | Web search and content fetching |
| `kiwi-com-flight-search` | Search real-time flights |

See docs/mcp-servers.md for details on each server and how to add custom MCP servers.
Agntrick supports 10 LLM providers out of the box, covering 90%+ of the market:
| Provider | Type | Use Case |
|---|---|---|
| Anthropic | Cloud | State-of-the-art reasoning (Claude) |
| OpenAI | Cloud | GPT-4, GPT-4.1, o1 series |
| Azure OpenAI | Cloud | Enterprise OpenAI deployments |
| Google GenAI | Cloud | Gemini models via API |
| Google Vertex AI | Cloud | Gemini models via GCP |
| Mistral AI | Cloud | European privacy-focused models |
| Cohere | Cloud | Enterprise RAG and Command models |
| AWS Bedrock | Cloud | Anthropic, Titan, Meta via AWS |
| Ollama | Local | Run LLMs locally (zero API cost) |
| Hugging Face | Cloud | Open models from Hugging Face Hub |
See docs/llm-providers.md for detailed setup instructions.

```python
from agntrick import AgentBase, AgentRegistry

@AgentRegistry.register("my-agent", mcp_servers=["fetch"])
class MyAgent(AgentBase):
    @property
    def system_prompt(self) -> str:
        return "You are my custom agent with the power to fetch websites."
```

Boom. Run it instantly:

```bash
agntrick my-agent -i "Summarize https://example.com"
```

Want to add your own Python logic? Easy.
```python
from langchain_core.tools import StructuredTool
from agntrick import AgentBase, AgentRegistry

@AgentRegistry.register("data-processor")
class DataProcessorAgent(AgentBase):
    @property
    def system_prompt(self) -> str:
        return "You process data files like a boss."

    def local_tools(self) -> list:
        return [
            StructuredTool.from_function(
                func=self.process_csv,
                name="process_csv",
                description="Process a CSV file path",
            )
        ]

    def process_csv(self, filepath: str) -> str:
        # Magic happens here ✨
        return f"Successfully processed {filepath}!"
```

Agntrick can be configured via a `.agntrick.yaml` file in your project root or home directory:
```yaml
# .agntrick.yaml
llm:
  provider: anthropic      # or openai, google, ollama, etc.
  model: claude-sonnet-4-6 # optional model override
  temperature: 0.7

mcp:
  servers:
    - fetch
    - web-forager

logging:
  level: INFO
  file: logs/agent.log
```

Command your agents directly from the terminal.
```bash
# List all registered agents
agntrick list

# Get detailed info about what an agent can do
agntrick info developer

# Run an agent with input
agntrick developer -i "Analyze the architecture of this project"

# Run with an execution timeout (seconds)
agntrick developer -i "Refactor this module" -t 120

# Run with debug-level verbosity
agntrick developer -i "Hello" -v

# View logs
tail -f logs/agent.log
```

Under the hood, we seamlessly bridge the gap between user intent and execution:
```mermaid
flowchart TB
    subgraph User [User Space]
        Input[User Input]
    end
    subgraph CLI [CLI - agntrick]
        Typer[Typer Interface]
    end
    subgraph Registry [Registry]
        AR[AgentRegistry]
        AD[Auto-discovery]
    end
    subgraph Agents [Agents]
        Dev[developer agent]
        Learning[learning agent]
        News[news agent]
    end
    subgraph Core [Core Engine]
        AB[AgentBase]
        LG[LangGraph Runtime]
        CP[(Checkpointing)]
    end
    subgraph Tools [Tools & Skills]
        LT[Local Tools]
        MCP[MCP Tools]
    end
    subgraph External [External World]
        LLM[LLM API]
        MCPS[MCP Servers]
    end

    Input --> Typer
    Typer --> AR
    AR --> AD
    AR -->|Routes to| Dev & Learning & News
    Dev & Learning & News -->|Inherits from| AB
    AB --> LG
    LG <--> CP
    AB -->|Uses| LT
    AB -->|Uses| MCP
    LT -->|Reasoning| LLM
    MCP -->|Queries| MCPS
    MCPS -->|Provides Data| LLM
    LLM --> Output[Final Response]
```
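The decorator-driven registry at the heart of this flow can be sketched in a few lines. This is a simplified illustration of the pattern, assuming a dictionary-backed registry; Agntrick's real `AgentRegistry` and `AgentBase` are richer than this:

```python
# Minimal sketch of a decorator-based agent registry (illustrative only;
# not Agntrick's actual implementation).
class AgentBase:
    """Stand-in base class for registered agents."""

class AgentRegistry:
    _agents = {}  # name -> {"cls": ..., "mcp_servers": [...]}

    @classmethod
    def register(cls, name, mcp_servers=None):
        def decorator(agent_cls):
            cls._agents[name] = {
                "cls": agent_cls,
                "mcp_servers": mcp_servers or [],
            }
            return agent_cls  # class is returned unchanged, just recorded
        return decorator

    @classmethod
    def get(cls, name):
        return cls._agents[name]["cls"]

@AgentRegistry.register("echo", mcp_servers=["fetch"])
class EchoAgent(AgentBase):
    pass

print(AgentRegistry.get("echo") is EchoAgent)  # True
```

Because registration happens at import time, a CLI like `agntrick list` only needs to import the agent modules and read the registry.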
System Requirements & Setup

Requirements:

- Python 3.12+
- `uv` package manager
- `ripgrep`, `fd`, `fzf` (for local tools)

```bash
# Install dependencies (blazingly fast with uv)
make install

# Run the test suite
make test

# Run agents directly
agntrick developer -i "Hello"
```

Useful `make` Commands
```bash
make install     # Install dependencies with uv
make test        # Run pytest with coverage
make format      # Auto-format codebase with ruff
make check       # Strict linting (mypy + ruff)
make build       # Build wheel and sdist packages
make build-clean # Remove build artifacts
```

Release Commands
Automated release commands for publishing to PyPI:

```bash
# Release the core agntrick package
make release VERSION=0.3.0

# Release the agntrick-whatsapp package
make release-whatsapp VERSION=0.4.0

# Release both packages with different versions
make release-both CORE=0.3.0 WHATSAPP=0.4.0
```

See RELEASING.md for complete release documentation, troubleshooting, and manual release procedures.
We love contributions! Check out our AGENTS.md for development guidelines.
For maintainers: See RELEASING.md for how to publish new versions to PyPI.
The Golden Rules:

- `make check` should pass without complaints.
- `make test` should stay green.
- Don't drop test coverage (we like our 80% mark!).
This project is licensed under the MIT License. See LICENSE for details.
Stand on the shoulders of giants:
If you find this useful, please consider giving it a β or buying me a coffee!