autogen_ext.memory.canvas#
- class TextCanvas[source]#
Bases: BaseCanvas
An in‑memory canvas that stores text files with full revision history.
Warning
This is an experimental API and may change in the future.
Besides the original CRUD‑like operations, this enhanced implementation adds:
- apply_patch – applies patches using the unidiff library for accurate hunk application and context-line validation.
- get_revision_content – random access to any historical revision.
- get_revision_diffs – obtain the list of diffs applied between every consecutive pair of revisions, so that a caller can replay or audit the full change history.
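To make the revision-history semantics concrete, here is a minimal, hypothetical sketch of such a store built with only the standard library. It is not the actual TextCanvas implementation; the class name `MiniCanvas` and the 1-based revision numbering are assumptions for illustration, but the behaviors shown (appending revisions, returning an empty string for missing content, diffing two revisions) mirror the methods documented below.

```python
import difflib


class MiniCanvas:
    """Hypothetical sketch of an in-memory text store with revision history."""

    def __init__(self) -> None:
        self._revisions: dict[str, list[str]] = {}

    def add_or_update_file(self, filename: str, new_content: str) -> None:
        # Create the file or append a new revision.
        self._revisions.setdefault(filename, []).append(new_content)

    def get_latest_content(self, filename: str) -> str:
        revs = self._revisions.get(filename, [])
        return revs[-1] if revs else ""

    def get_revision_content(self, filename: str, revision: int) -> str:
        # Missing revisions yield an empty string rather than an exception.
        revs = self._revisions.get(filename, [])
        return revs[revision - 1] if 1 <= revision <= len(revs) else ""

    def get_diff(self, filename: str, from_revision: int, to_revision: int) -> str:
        # Unified diff between two stored revisions.
        old = self.get_revision_content(filename, from_revision).splitlines(keepends=True)
        new = self.get_revision_content(filename, to_revision).splitlines(keepends=True)
        return "".join(
            difflib.unified_diff(
                old,
                new,
                fromfile=f"{filename}@r{from_revision}",
                tofile=f"{filename}@r{to_revision}",
            )
        )


canvas = MiniCanvas()
canvas.add_or_update_file("story.md", "Once upon a time.\n")
canvas.add_or_update_file("story.md", "Once upon a time, a bunny appeared.\n")
print(canvas.get_latest_content("story.md"))
print(canvas.get_diff("story.md", 1, 2))
```

Each call to `add_or_update_file` appends a full snapshot, which is what allows `get_revision_content` to offer random access to history without replaying patches.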
- add_or_update_file(filename: str, new_content: str | bytes | Any) None [source]#
Create filename or append a new revision containing new_content.
- apply_patch(filename: str, patch_data: str | bytes | Any) None [source]#
Apply patch_data (a unified diff) to the latest revision and save the result as a new revision.
Uses the unidiff library to accurately apply hunks and validate context lines.
- get_all_contents_for_context() str [source]#
Return a summarised view of every file and its latest revision.
- get_diff(filename: str, from_revision: int, to_revision: int) str [source]#
Return a unified diff between from_revision and to_revision.
- get_latest_content(filename: str) str [source]#
Return the most recent content or an empty string if the file is new.
- get_revision_content(filename: str, revision: int) str [source]#
Return the exact content stored in revision.
If the revision does not exist an empty string is returned so that downstream code can handle the “not found” case without exceptions.
- class TextCanvasMemory(canvas: TextCanvas | None = None)[source]#
Bases: Memory
A memory implementation that uses a Canvas for storing file-like content. Inserts the current state of the canvas into the ChatCompletionContext on each turn.
Warning
This is an experimental API and may change in the future.
The TextCanvasMemory provides a persistent, file-like storage mechanism that can be used by agents to read and write content. It automatically injects the current state of all files in the canvas into the model context before each inference.
This is particularly useful for:
- Allowing agents to create and modify documents over multiple turns
- Enabling collaborative document editing between multiple agents
- Maintaining persistent state across conversation turns
- Working with content too large to fit in a single message
The canvas provides tools for:
- Creating or updating files with new content
- Applying patches (unified diff format) to existing files
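The patches mentioned above are plain unified-diff strings. As a sketch of what such a patch looks like, the snippet below generates one with the standard-library difflib; in practice an agent would typically emit the diff text itself, and the file name "story.md" simply follows the examples below.

```python
import difflib

old = "The bunny slept.\nThe sun rose.\n"
new = "The bunny woke up.\nThe sun rose.\n"

# Build a unified-diff patch string of the kind the apply_patch tool consumes.
patch = "".join(
    difflib.unified_diff(
        old.splitlines(keepends=True),
        new.splitlines(keepends=True),
        fromfile="story.md",
        tofile="story.md",
    )
)
print(patch)
```

The resulting text contains the familiar `---`/`+++` file headers, an `@@` hunk header, and `-`/`+` lines for removed and added content, with unchanged context lines prefixed by a space.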
Examples
Example: Using TextCanvasMemory with an AssistantAgent
The following example demonstrates how to create a TextCanvasMemory and use it with an AssistantAgent to write and update a story file.
```python
import asyncio

from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.memory.canvas import TextCanvasMemory


async def main():
    # Create a model client
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key = "your_openai_api_key"
    )

    # Create the canvas memory
    text_canvas_memory = TextCanvasMemory()

    # Get tools for working with the canvas
    update_file_tool = text_canvas_memory.get_update_file_tool()
    apply_patch_tool = text_canvas_memory.get_apply_patch_tool()

    # Create an agent with the canvas memory and tools
    writer_agent = AssistantAgent(
        name="Writer",
        model_client=model_client,
        description="A writer agent that creates and updates stories.",
        system_message='''
        You are a Writer Agent. Your focus is to generate a story based on the user's request.

        Instructions for using the canvas:
        - The story should be stored on the canvas in a file named "story.md".
        - If "story.md" does not exist, create it by calling the 'update_file' tool.
        - If "story.md" already exists, generate a unified diff (patch) from the current
          content to the new version, and call the 'apply_patch' tool to apply the changes.

        IMPORTANT: Do not include the full story text in your chat messages.
        Only write the story content to the canvas using the tools.
        ''',
        tools=[update_file_tool, apply_patch_tool],
        memory=[text_canvas_memory],
    )

    # Send a message to the agent
    await writer_agent.on_messages(
        [TextMessage(content="Write a short story about a bunny and a sunflower.", source="user")],
        CancellationToken(),
    )

    # Retrieve the content from the canvas
    story_content = text_canvas_memory.canvas.get_latest_content("story.md")
    print("Story content from canvas:")
    print(story_content)


if __name__ == "__main__":
    asyncio.run(main())
```
Example: Using TextCanvasMemory with multiple agents
The following example shows how to use TextCanvasMemory with multiple agents collaborating on the same document.
```python
import asyncio

from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.memory.canvas import TextCanvasMemory


async def main():
    # Create a model client
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key = "your_openai_api_key"
    )

    # Create the shared canvas memory
    text_canvas_memory = TextCanvasMemory()
    update_file_tool = text_canvas_memory.get_update_file_tool()
    apply_patch_tool = text_canvas_memory.get_apply_patch_tool()

    # Create a writer agent
    writer_agent = AssistantAgent(
        name="Writer",
        model_client=model_client,
        description="A writer agent that creates stories.",
        system_message="You write children's stories on the canvas in story.md.",
        tools=[update_file_tool, apply_patch_tool],
        memory=[text_canvas_memory],
    )

    # Create a critique agent
    critique_agent = AssistantAgent(
        name="Critique",
        model_client=model_client,
        description="A critique agent that provides feedback on stories.",
        system_message="You review the story.md file and provide constructive feedback.",
        memory=[text_canvas_memory],
    )

    # Create a team with both agents
    team = RoundRobinGroupChat(
        participants=[writer_agent, critique_agent],
        termination_condition=TextMentionTermination("TERMINATE"),
        max_turns=10,
    )

    # Run the team on a task
    await team.run(task="Create a children's book about a bunny and a sunflower")

    # Get the final story
    story = text_canvas_memory.canvas.get_latest_content("story.md")
    print(story)


if __name__ == "__main__":
    asyncio.run(main())
```
- async add(content: MemoryContent, cancellation_token: CancellationToken | None = None) None [source]#
Interpret content as either a patch or a direct file update and apply it to the canvas. This could also be handled by a specialized “CanvasTool” instead.
- get_apply_patch_tool() ApplyPatchTool [source]#
Returns an ApplyPatchTool instance that works with this memory’s canvas.
- get_update_file_tool() UpdateFileTool [source]#
Returns an UpdateFileTool instance that works with this memory’s canvas.
- async query(query: str | MemoryContent, cancellation_token: CancellationToken | None = None, **kwargs: Any) MemoryQueryResult [source]#
Search for matching filenames or file content. This implementation currently returns an empty result.
- async update_context(model_context: ChatCompletionContext) UpdateContextResult [source]#
Inject the entire canvas summary (or a selected subset) into the model context as reference data. This implementation places the summary in a system message; the behavior can be customized.
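Conceptually, update_context renders the current state of every canvas file into one block of reference text that is attached to the context. The sketch below illustrates that idea with plain Python; the section headers and the `render_canvas_summary` helper are illustrative assumptions, not the library's actual output format.

```python
# Hypothetical canvas state: filename -> latest content.
files = {
    "story.md": "Once upon a time, a bunny met a sunflower.\n",
    "notes.md": "Chapter 2 needs more dialogue.\n",
}


def render_canvas_summary(files: dict[str, str]) -> str:
    """Render every file's latest revision into one reference-text block."""
    sections = []
    for name, content in sorted(files.items()):
        sections.append(f"=== {name} (latest revision) ===\n{content}")
    return "\n".join(sections)


# This string would then be wrapped in a system message and added to the
# ChatCompletionContext before each inference.
system_message_text = render_canvas_summary(files)
print(system_message_text)
```

Because the summary is rebuilt on every turn, agents always see the newest revision of each file without the content having to travel through chat messages.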