A powerful, flexible virtual filesystem library for Python with advanced features, multiple storage providers, and robust security.
🎯 Perfect for MCP Servers: Expose virtual filesystems to Claude Desktop and other MCP clients via FUSE mounting. Generate code, mount it, and let Claude run real tools (TypeScript, linters, compilers) on it with full POSIX semantics.
Make your virtual filesystem the "OS for tools" - Mount per-session workspaces via FUSE and let Claude / MCP clients run real tools on AI-generated content.
- ✅ Real tools, virtual filesystem: TypeScript, ESLint, Prettier, tsc, pytest, etc. work seamlessly
- ✅ Full POSIX semantics: any command-line tool that expects a real filesystem works
- ✅ Pluggable backends: Memory, S3, SQLite, E2B, or custom providers
- ✅ Perfect for MCP servers: Expose workspaces to Claude Desktop and other MCP clients
- ✅ Zero-copy streaming: Handle large files efficiently with progress tracking
Example workflow:
- Your MCP server creates a `VirtualFileSystem` with AI-generated code
- Mount it via FUSE at `/tmp/workspace`
- Claude runs `tsc /tmp/workspace/main.ts` or any other tool
- Read results back and iterate
See MCP Use Cases for detailed examples and Architecture for how it all fits together.
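In code, that loop is only a few lines. A minimal sketch, assuming your server has some way to shell out to tools (`subprocess.run` below is a stand-in for that, and `tsc` must be on your PATH):

```python
import asyncio
import subprocess

from chuk_virtual_fs import AsyncVirtualFileSystem
from chuk_virtual_fs.mount import mount, MountOptions


async def check_generated_code(generated_code: str) -> str:
    # 1. Put the AI-generated code into a virtual filesystem
    vfs = AsyncVirtualFileSystem()
    await vfs.write_file("/main.ts", generated_code)

    # 2. Mount it so real tools see a real directory
    async with mount(vfs, "/tmp/workspace", MountOptions()):
        # 3. Run a real tool against the mount point
        result = subprocess.run(
            ["tsc", "--noEmit", "/tmp/workspace/main.ts"],
            capture_output=True,
            text=True,
        )

    # 4. Feed compiler output back to the model and iterate
    return result.stderr


# asyncio.run(check_generated_code('const x: number = "oops";'))
```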
┌─────────────────────────────────────────────────────────────┐
│ Your MCP Server / AI App │
└────────────────────┬────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ chuk-virtual-fs (This Library) │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ VirtualFileSystem (Core API) │ │
│ │ • mkdir, write_file, read_file, ls, cp, mv, etc. │ │
│ │ • Streaming operations (large files) │ │
│ │ • Virtual mounts (combine providers) │ │
│ └────────────┬─────────────────────────────────────────┘ │
│ │ │
│ ┌────────┴────────┐ │
│ ▼ ▼ │
│ ┌────────┐ ┌──────────┐ │
│ │ WebDAV │ │ FUSE │ ◄── Mounting Adapters │
│ │Adapter │ │ Adapter │ │
│ └────┬───┘ └────┬─────┘ │
└───────┼───────────────┼──────────────────────────────────────┘
│ │
│ │
▼ ▼
┌────────────┐ ┌─────────────┐
│ WebDAV │ │ /tmp/mount │ ◄── Real OS Mounts
│ Server │ │ (FUSE) │
│ :8080 │ │ │
└──────┬─────┘ └──────┬──────┘
│ │
│ │
▼ ▼
┌───────────────────────────────────┐
│ Real Tools & Applications │
│ • Finder/Explorer (WebDAV) │
│ • TypeScript (tsc) │
│ • Linters (ESLint, Ruff) │
│ • Any POSIX tool (ls, cat, etc.) │
└───────────────────────────────────┘
│
│
▼
┌───────────────────────────────────┐
│ Storage Backends (Providers) │
│ • Memory • SQLite • S3 │
│ • E2B • Filesystem │
└───────────────────────────────────┘
Key Points:
- Single API: Use VirtualFileSystem regardless of backend
- Multiple Backends: Memory, SQLite, S3, E2B, or custom providers
- Two Mount Options: WebDAV (quick) or FUSE (full POSIX)
- Real Tools Work: Once mounted, any tool can access your virtual filesystem
- Pluggable storage providers
- Flexible filesystem abstraction
- Supports multiple backend implementations
- Memory Provider: In-memory filesystem for quick testing and lightweight use
- SQLite Provider: Persistent storage with SQLite database backend
- Pyodide Provider: Web browser filesystem integration
- S3 Provider: Cloud storage with AWS S3 or S3-compatible services
- E2B Sandbox Provider: Remote sandbox environment filesystem
- Google Drive Provider: Store files in user's Google Drive (user owns data!)
- Easy to extend with custom providers
- Multiple predefined security profiles
- Customizable access controls
- Path and file type restrictions
- Quota management
- Security violation tracking
- Streaming Operations: Memory-efficient streaming for large files with:
- Real-time progress tracking callbacks
- Atomic write safety (temp file + atomic move)
- Automatic error recovery and cleanup
- Support for both sync and async callbacks
- Virtual Mounts: Unix-like mounting system to combine multiple providers
- WebDAV Mounting: Expose virtual filesystems via WebDAV (no kernel extensions!)
- Mount in macOS Finder, Windows Explorer, or Linux file managers
- Perfect for AI coding assistants and development workflows
- Background server support
- Read-only mode option
- FUSE Mounting: Native filesystem mounting with full POSIX semantics
- Mount virtual filesystems as real directories
- Works with any tool that expects a filesystem
- Docker support for testing without system modifications
- Snapshot and versioning support
- Template-based filesystem setup
- Flexible path resolution
- Comprehensive file and directory operations
- CLI tools for bucket management
pip install chuk-virtual-fs

# Install with S3 support
pip install "chuk-virtual-fs[s3]"
# Install with Google Drive support
pip install "chuk-virtual-fs[google_drive]"
# Install with Git support
pip install "chuk-virtual-fs[git]"
# Install with WebDAV mounting support (recommended!)
pip install "chuk-virtual-fs[webdav]"
# Install with FUSE mounting support
pip install "chuk-virtual-fs[mount]"
# Install everything
pip install "chuk-virtual-fs[all]"
# Using uv
uv pip install "chuk-virtual-fs[s3]"
uv pip install "chuk-virtual-fs[google_drive]"
uv pip install "chuk-virtual-fs[git]"
uv pip install "chuk-virtual-fs[webdav]"
uv pip install "chuk-virtual-fs[mount]"
uv pip install "chuk-virtual-fs[all]"# Clone the repository
git clone https://github.com/chrishayuk/chuk-virtual-fs.git
cd chuk-virtual-fs
# Install in development mode with all dependencies
pip install -e ".[dev,s3,e2b]"
# Using uv
uv pip install -e ".[dev,s3,e2b]"Try the interactive example runner:
cd examples
./run_example.sh # Interactive menu with 11 examples

Or run specific examples:
- WebDAV: `./run_example.sh 1` (basic server)
- FUSE: `./run_example.sh 5` (Docker mount test)
- Providers: `./run_example.sh 7` (memory provider)
See: examples/ for comprehensive documentation
The library uses async/await for all operations:
from chuk_virtual_fs import AsyncVirtualFileSystem
import asyncio
async def main():
# Use async context manager
async with AsyncVirtualFileSystem(provider="memory") as fs:
# Create directories
await fs.mkdir("/home/user/documents")
# Write to a file
await fs.write_file("/home/user/documents/hello.txt", "Hello, Virtual World!")
# Read from a file
content = await fs.read_text("/home/user/documents/hello.txt")
print(content) # Outputs: Hello, Virtual World!
# List directory contents
files = await fs.ls("/home/user/documents")
print(files) # Outputs: ['hello.txt']
# Change directory
await fs.cd("/home/user/documents")
print(fs.pwd()) # Outputs: /home/user/documents
# Copy and move operations
await fs.cp("hello.txt", "hello_copy.txt")
await fs.mv("hello_copy.txt", "/home/user/hello_moved.txt")
# Find files matching pattern
results = await fs.find("*.txt", path="/home", recursive=True)
print(results) # Finds all .txt files under /home
# Run the async function
asyncio.run(main())

Note: The library also provides a synchronous `VirtualFileSystem` alias for backward compatibility, but the async API (`AsyncVirtualFileSystem`) is recommended for new code and is required for streaming and mount operations.
If it looks like storage, we can probably wrap it as a provider.
Our providers are organized into logical families:
- 🧠 In-Memory & Local: Memory, SQLite, Filesystem - Fast, local-first storage
- ☁️ Cloud Object Stores: S3 (AWS, MinIO, Tigris, etc.) - Scalable blob storage
- 👤 Cloud Sync (User-Owned): Google Drive - User owns data, OAuth-based
- 🌐 Browser & Web: Pyodide - WebAssembly / browser environments
- 🔒 Remote Sandboxes: E2B - Isolated execution environments
- 🔌 Network Access: WebDAV, FUSE mounts - Make any provider accessible as a real filesystem
| Provider | Read | Write | Streaming | Mount | OAuth | Multi-Tenant | Best For |
|---|---|---|---|---|---|---|---|
| Memory | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | Testing, caching, temporary workspaces |
| SQLite | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | Persistent local storage, small datasets |
| Filesystem | ✅ | ✅ | ✅ | ✅ | ❌ | ⚠️ | Local dev, direct file access |
| S3 | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | Cloud storage, large files, CDN integration |
| Google Drive | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | User-owned data, cross-device sync, sharing |
| Git | ✅ | ⚠️ | ❌ | ✅ | ❌ | ✅ | Code review, MCP devboxes, version control |
| Pyodide | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | Browser apps, WASM environments |
| E2B | ✅ | ✅ | ✅ | ✅ | 🔑 | ✅ | Sandboxed code execution, AI agents |
Legend:
- ✅ Fully supported
- ⚠️ Possible with caveats (e.g., Git Write only in worktree mode)
- ❌ Not supported
- 🔑 API key required
The virtual filesystem supports multiple storage providers:
- Memory: In-memory storage (default)
- SQLite: SQLite database storage
- Filesystem: Direct filesystem access
- S3: AWS S3 or S3-compatible storage
- Google Drive: User's Google Drive (user owns data!)
- Git: Git repositories (snapshot or worktree modes)
- Pyodide: Native integration with Pyodide environment
- E2B: E2B Sandbox environments
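Switching backends is a constructor argument, not a rewrite. A quick sketch using only keyword arguments that appear in the provider sections of this README:

```python
from chuk_virtual_fs import AsyncVirtualFileSystem

# Same API, different storage backends:
mem_fs = AsyncVirtualFileSystem(provider="memory")
s3_fs = AsyncVirtualFileSystem(provider="s3", bucket_name="my-bucket")
git_fs = AsyncVirtualFileSystem(
    provider="git",
    repo_url="https://github.com/user/repo",
    mode="snapshot",
    ref="main",
)
```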
We're exploring additional providers based on demand. Candidates include:
Cloud Sync Family:
- OneDrive/SharePoint: Enterprise cloud storage, OAuth-based
- Dropbox: Personal/creator cloud storage
- Box: Enterprise content management
Archive Formats:
- ZIP/TAR providers: Mount archives as virtual directories
- OLE/OpenXML: Access Office documents as filesystems
Advanced Patterns:
- Encrypted provider: Transparent encryption wrapper for any backend
- Caching provider: Multi-tier caching (memory → SQLite → S3)
- Multi-provider: Automatic sharding across backends
Want a specific provider? Open an issue with your use case!
The S3 provider allows you to use AWS S3 or S3-compatible storage (like Tigris Storage) as the backend for your virtual filesystem.
# Install with S3 support
pip install "chuk-virtual-fs[s3]"
# Or with uv
uv pip install "chuk-virtual-fs[s3]"Create a .env file with your S3 credentials:
# AWS credentials for S3 provider
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=us-east-1
# For S3-compatible storage (e.g., Tigris Storage)
AWS_ENDPOINT_URL_S3=https://your-endpoint.example.com
S3_BUCKET_NAME=your-bucket-name

from dotenv import load_dotenv
from chuk_virtual_fs import VirtualFileSystem
# Load environment variables
load_dotenv()
# Create filesystem with S3 provider
fs = VirtualFileSystem("s3",
bucket_name="your-bucket-name",
prefix="your-prefix", # Optional namespace in bucket
endpoint_url="https://your-endpoint.example.com") # For S3-compatible storage
# Use the filesystem as normal
fs.mkdir("/projects")
fs.write_file("/projects/notes.txt", "Virtual filesystem backed by S3")
# List directory contents
print(fs.ls("/projects"))import os
from dotenv import load_dotenv
# Load E2B API credentials from .env file
load_dotenv()
# Ensure E2B API key is set
if not os.getenv("E2B_API_KEY"):
raise ValueError("E2B_API_KEY must be set in .env file")
from chuk_virtual_fs import VirtualFileSystem
# Create a filesystem in an E2B sandbox
# API key will be automatically used from environment variables
fs = VirtualFileSystem("e2b", root_dir="/home/user/sandbox")
# Create project structure
fs.mkdir("/projects")
fs.mkdir("/projects/python")
# Write a Python script
fs.write_file("/projects/python/hello.py", 'print("Hello from E2B sandbox!")')
# List directory contents
print(fs.ls("/projects/python"))
# Execute code in the sandbox (if supported)
if hasattr(fs.provider, 'sandbox') and hasattr(fs.provider.sandbox, 'run_code'):
result = fs.provider.sandbox.run_code(
fs.read_file("/projects/python/hello.py")
)
print(result.logs)

To use the E2B Sandbox Provider, you need to:
1. Install the E2B SDK:

   pip install e2b-code-interpreter

2. Create a `.env` file in your project root:

   E2B_API_KEY=your_e2b_api_key_here

3. Make sure to add `.env` to your `.gitignore` to keep credentials private.
Note: You can obtain an E2B API key from the E2B platform.
The Google Drive provider lets you store files in the user's own Google Drive. This approach offers unique advantages:
- ✅ User Owns Data: Files are stored in the user's Google Drive, not your infrastructure
- ✅ Natural Discoverability: Users can view/edit files directly in Google Drive UI
- ✅ Built-in Sharing: Use Drive's native sharing and collaboration features
- ✅ Cross-Device Sync: Files automatically sync across all user devices
- ✅ No Infrastructure Cost: No need to manage storage servers or buckets
# Install with Google Drive support
pip install "chuk-virtual-fs[google_drive]"
# Or with uv
uv pip install "chuk-virtual-fs[google_drive]"Before using the Google Drive provider, you need to set up OAuth2 credentials:
Step 1: Create Google Cloud Project
- Go to Google Cloud Console
- Create a new project (or select existing)
- Enable the Google Drive API
- Go to "Credentials" → Create OAuth 2.0 Client ID
- Choose "Desktop app" as application type
- Download the JSON file and save it as `client_secret.json`
Step 2: Run OAuth Setup
# Run the OAuth setup helper
python examples/providers/google_drive_oauth_setup.py
# Or with custom client secrets file
python examples/providers/google_drive_oauth_setup.py --client-secrets /path/to/client_secret.json

This will:
- Open a browser for Google authorization
- Save credentials to `google_drive_credentials.json`
- Show you the configuration for Claude Desktop / MCP servers
import json
from pathlib import Path
from chuk_virtual_fs import AsyncVirtualFileSystem
# Load credentials from OAuth setup
with open("google_drive_credentials.json") as f:
credentials = json.load(f)
# Create filesystem with Google Drive provider
async with AsyncVirtualFileSystem(
provider="google_drive",
credentials=credentials,
root_folder="CHUK", # Creates /CHUK/ folder in Drive
cache_ttl=60 # Cache file IDs for 60 seconds
) as fs:
# Create project structure
await fs.mkdir("/projects/demo")
# Write files - they appear in Google Drive!
await fs.write_file(
"/projects/demo/README.md",
"# My Project\n\nFiles stored in Google Drive!"
)
# Read files back
content = await fs.read_file("/projects/demo/README.md")
# List directory
files = await fs.ls("/projects/demo")
# Get file metadata
info = await fs.get_node_info("/projects/demo/README.md")
print(f"Size: {info.size} bytes")
print(f"Modified: {info.modified_at}")
# Files are now in Google Drive under /CHUK/projects/demo/

After running OAuth setup, add to your `claude_desktop_config.json`:
{
"mcpServers": {
"vfs": {
"command": "uvx",
"args": ["chuk-virtual-fs"],
"env": {
"VFS_PROVIDER": "google_drive",
"GOOGLE_DRIVE_CREDENTIALS": "{\"token\": \"...\", \"refresh_token\": \"...\", ...}"
}
}
}
}

(The OAuth setup helper generates the complete configuration.)
- Two-Level Caching: Path→file_id and file_id→metadata caches for performance
- Metadata Storage: Session IDs, custom metadata, and tags stored in Drive's `appProperties`
- Async Operations: Full async/await support using `asyncio.to_thread`
- Standard Operations: All VirtualFileSystem methods work (mkdir, write_file, read_file, ls, etc.)
- Statistics: Track API calls and cache hits/misses with `get_storage_stats()`
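For instance, a quick look at cache effectiveness after a few operations. Only the method name `get_storage_stats()` comes from the list above; whether it is awaitable and the exact keys it returns are assumptions, so print the result to see what your version reports:

```python
import asyncio

from chuk_virtual_fs import AsyncVirtualFileSystem


async def show_stats(credentials: dict) -> None:
    async with AsyncVirtualFileSystem(
        provider="google_drive", credentials=credentials
    ) as fs:
        await fs.ls("/")  # trigger some API/cache activity
        # Assumption: get_storage_stats() returns a dict of counters
        # (API calls, cache hits/misses).
        stats = await fs.get_storage_stats()
        print(stats)
```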
from chuk_virtual_fs.providers import GoogleDriveProvider
provider = GoogleDriveProvider(
credentials=credentials_dict, # OAuth2 credentials
root_folder="CHUK", # Root folder name in Drive
cache_ttl=60, # Cache TTL in seconds (default: 60)
session_id="optional_session_id", # Optional session tracking
sandbox_id="default" # Optional sandbox tracking
)

See the examples/providers/ directory for complete examples:
- `google_drive_oauth_setup.py`: Interactive OAuth2 setup helper
- `google_drive_example.py`: Comprehensive end-to-end example
Run the full example:
# First, set up OAuth credentials
python examples/providers/google_drive_oauth_setup.py
# Then run the example
python examples/providers/google_drive_example.py

- OAuth2 Authentication: Uses Google's OAuth2 flow for secure authorization
- Root Folder: Creates a folder (default: `CHUK`) in the user's Drive as the filesystem root
- Path Mapping: Virtual paths like `/projects/demo/file.txt` map to `CHUK/projects/demo/file.txt` in Drive
- Metadata: Custom metadata (session_id, tags, etc.) stored in Drive's `appProperties`
- Caching: Two-level cache reduces API calls for better performance
Perfect for:
- User-Owned Workspaces: Give users their own persistent workspace in their Drive
- Collaborative AI Projects: Users can share their Drive folders with collaborators
- Long-Term Storage: User controls retention and can access files outside your app
- Cross-Device Access: Users access their files from any device with Drive
- Zero Infrastructure: No need to run storage servers or manage buckets
The Git provider lets you mount Git repositories as virtual filesystems with two modes:
- ✅ snapshot: Read-only view of a repository at a specific commit/branch/tag
- ✅ worktree: Writable working directory with full Git operations (commit, push, pull)
Perfect for:
- MCP Servers: "Mount this repo for Claude to review" - instant read-only access to any commit
- Code Review Tools: Browse repository state at specific commits
- AI Coding Workflows: Clone → modify → commit → push workflows
- Documentation: Browse repos without cloning to disk
- Version Control Integration: Full Git operations from your virtual filesystem
# Install with Git support
pip install "chuk-virtual-fs[git]"
# Or with uv
uv pip install "chuk-virtual-fs[git]"Perfect for code review, documentation browsing, or MCP servers:
from chuk_virtual_fs import AsyncVirtualFileSystem
# Mount a repository snapshot at a specific commit/branch
async with AsyncVirtualFileSystem(
provider="git",
repo_url="https://github.com/user/repo", # Or local path
mode="snapshot",
ref="main", # Branch, tag, or commit SHA
depth=1 # Optional: shallow clone for faster performance
) as fs:
# Read-only access to repository files
readme = await fs.read_text("/README.md")
files = await fs.ls("/src")
code = await fs.read_text("/src/main.py")
# Get repository metadata
metadata = await fs.get_metadata("/")
print(f"Commit: {metadata['commit_sha']}")
print(f"Author: {metadata['commit_author']}")
print(f"Message: {metadata['commit_message']}")MCP Server Use Case:
# MCP tool for code review
@mcp.tool()
async def review_code_at_commit(repo_url: str, commit_sha: str):
"""Claude reviews code at a specific commit."""
async with AsyncVirtualFileSystem(
provider="git",
repo_url=repo_url,
mode="snapshot",
ref=commit_sha
) as fs:
# Claude can now read any file in the repo
files = await fs.find("*.py", recursive=True)
# Analyze, review, suggest improvements...
return {"files_reviewed": len(files)}Full Git operations for AI coding workflows:
from chuk_virtual_fs import AsyncVirtualFileSystem
# Writable working directory
async with AsyncVirtualFileSystem(
provider="git",
repo_url="/path/to/repo", # Local repo or clone URL
mode="worktree",
branch="feature-branch" # Branch to work on
) as fs:
# Create/modify files
await fs.mkdir("/src/new_feature")
await fs.write_file(
"/src/new_feature/module.py",
"def new_feature():\\n pass\\n"
)
# Commit changes
provider = fs.provider
await provider.commit(
"Add new feature module",
author="AI Agent <[email protected]>"
)
# Push to remote
await provider.push("origin", "feature-branch")
# Check Git status
status = await provider.get_status()
print(f"Clean: {not status['is_dirty']}")- Two Modes: snapshot (read-only) or worktree (full Git operations)
- Remote & Local: Clone from GitHub/GitLab or use local repositories
- Shallow Clones: Use `depth=1` for faster clones
- Full Git Operations (worktree mode):
  - `commit()`: Commit changes with custom author
  - `push()`: Push to remote
  - `pull()`: Pull from remote
  - `get_status()`: Check working directory status
- Metadata Access: Get commit SHA, author, message, date
- Temporary Clones: Auto-cleanup of temporary clone directories
from chuk_virtual_fs.providers import GitProvider
provider = GitProvider(
repo_url="https://github.com/user/repo", # Remote URL or local path
mode="snapshot", # "snapshot" or "worktree"
ref="main", # For snapshot: branch/tag/SHA
branch="main", # For worktree: branch to check out
clone_dir="/path/to/clone", # Optional: where to clone (default: temp)
depth=1, # Optional: shallow clone depth
)

See examples/providers/git_provider_example.py for comprehensive examples including:
- Snapshot mode for read-only access
- Worktree mode with commit/push
- MCP server code review use case
# Run the example
uv run python examples/providers/git_provider_example.py

For MCP Servers:
- Mount any GitHub repo for Claude to review
- Instant read-only access to specific commits
- No disk space used (temporary clones auto-cleanup)
For AI Coding:
- Clone → modify → commit → push workflows
- Full version control integration
- Author attribution for AI-generated commits
For Code Analysis:
- Browse repository history
- Compare files across commits
- Extract code examples from any version
The virtual filesystem provides robust security features to protect against common vulnerabilities and limit resource usage.
from chuk_virtual_fs import VirtualFileSystem
# Create a filesystem with strict security
fs = VirtualFileSystem(
security_profile="strict",
security_max_file_size=1024 * 1024, # 1MB max file size
security_allowed_paths=["/home", "/tmp"]
)
# Attempt to write to a restricted path
fs.write_file("/etc/sensitive", "This will fail")
# Get security violations
violations = fs.get_security_violations()

- default: Standard security with moderate restrictions
- strict: High security with tight constraints
- readonly: Completely read-only, no modifications allowed
- untrusted: Highly restrictive environment for untrusted code
- testing: Relaxed security for development and testing
- File size and total storage quotas
- Path traversal protection
- Deny/allow path and pattern rules
- Security violation logging
- Read-only mode
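A short sketch combining a predefined profile with violation tracking. The constructor keyword and methods mirror the example above; the exact fields on each violation record are provider-defined, so this only prints whatever comes back:

```python
from chuk_virtual_fs import VirtualFileSystem

# "readonly" is one of the predefined profiles listed above
fs = VirtualFileSystem(security_profile="readonly")

# Any modification is rejected and recorded as a violation
fs.write_file("/home/notes.txt", "this write is blocked")

# Inspect what was blocked
for violation in fs.get_security_violations():
    print(violation)
```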
The package includes a CLI tool for managing S3 buckets:
# List all buckets
python s3_bucket_cli.py list
# Create a new bucket
python s3_bucket_cli.py create my-bucket
# Show bucket information
python s3_bucket_cli.py info my-bucket --show-top 5
# List objects in a bucket
python s3_bucket_cli.py ls my-bucket --prefix data/
# Clear all objects in a bucket or prefix
python s3_bucket_cli.py clear my-bucket --prefix tmp/
# Delete a bucket (must be empty)
python s3_bucket_cli.py delete my-bucket
# Copy objects between buckets or prefixes
python s3_bucket_cli.py copy source-bucket dest-bucket --source-prefix data/ --dest-prefix backup/

Create and restore filesystem snapshots:
from chuk_virtual_fs import VirtualFileSystem
from chuk_virtual_fs.snapshot_manager import SnapshotManager
fs = VirtualFileSystem()
snapshot_mgr = SnapshotManager(fs)
# Create initial content
fs.mkdir("/home/user")
fs.write_file("/home/user/file.txt", "Original content")
# Create a snapshot
snapshot_id = snapshot_mgr.create_snapshot("initial_state", "Initial filesystem setup")
# Modify content
fs.write_file("/home/user/file.txt", "Modified content")
fs.write_file("/home/user/new_file.txt", "New file")
# List available snapshots
snapshots = snapshot_mgr.list_snapshots()
for snap in snapshots:
print(f"{snap['name']}: {snap['description']}")
# Restore to initial state
snapshot_mgr.restore_snapshot("initial_state")
# Verify restore
print(fs.read_file("/home/user/file.txt")) # Outputs: Original content
print(fs.get_node_info("/home/user/new_file.txt")) # Outputs: None
# Export a snapshot
snapshot_mgr.export_snapshot("initial_state", "/tmp/snapshot.json")

Load filesystem structures from templates:
from chuk_virtual_fs import VirtualFileSystem
from chuk_virtual_fs.template_loader import TemplateLoader
fs = VirtualFileSystem()
template_loader = TemplateLoader(fs)
# Define a template
project_template = {
"directories": [
"/projects/app",
"/projects/app/src",
"/projects/app/docs"
],
"files": [
{
"path": "/projects/app/README.md",
"content": "# ${project_name}\n\n${project_description}"
},
{
"path": "/projects/app/src/main.py",
"content": "def main():\n print('Hello from ${project_name}!')"
}
]
}
# Apply the template with variables
template_loader.apply_template(project_template, variables={
"project_name": "My App",
"project_description": "A sample project created with the virtual filesystem"
})

Handle large files efficiently with streaming support, progress tracking, and atomic write safety:
from chuk_virtual_fs import AsyncVirtualFileSystem
async def main():
async with AsyncVirtualFileSystem(provider="memory") as fs:
# Stream write with progress tracking
async def data_generator():
for i in range(1000):
yield f"Line {i}: {'x' * 1000}\n".encode()
# Track upload progress
def progress_callback(bytes_written, total_bytes):
if bytes_written % (100 * 1024) < 1024: # Every 100KB
print(f"Uploaded {bytes_written / 1024:.1f} KB...")
# Write large file with progress reporting and atomic safety
await fs.stream_write(
"/large_file.txt",
data_generator(),
progress_callback=progress_callback
)
# Stream read - process chunks as they arrive
total_bytes = 0
async for chunk in fs.stream_read("/large_file.txt", chunk_size=8192):
total_bytes += len(chunk)
# Process chunk without loading entire file
print(f"Processed {total_bytes} bytes")
# Run with asyncio
import asyncio
asyncio.run(main())

Track upload/download progress with callbacks:
async def upload_with_progress():
async with AsyncVirtualFileSystem(provider="s3", bucket_name="my-bucket") as fs:
# Progress tracking with sync callback
def track_progress(bytes_written, total_bytes):
percent = (bytes_written / total_bytes * 100) if total_bytes > 0 else 0
print(f"Progress: {percent:.1f}% ({bytes_written:,} bytes)")
# Or use async callback
async def async_track_progress(bytes_written, total_bytes):
# Can perform async operations here
await update_progress_db(bytes_written, total_bytes)
# Stream large file with progress tracking
async def generate_data():
for i in range(10000):
yield f"Record {i}\n".encode()
await fs.stream_write(
"/exports/large_dataset.csv",
generate_data(),
progress_callback=track_progress # or async_track_progress
)

All streaming writes use atomic operations to prevent file corruption:
async def safe_streaming():
async with AsyncVirtualFileSystem(provider="filesystem", root_path="/data") as fs:
# Streaming write is automatically atomic:
# 1. Writes to temporary file (.tmp_*)
# 2. Atomically moves to final location on success
# 3. Auto-cleanup of temp files on failure
try:
await fs.stream_write("/critical_data.json", data_stream())
# File appears atomically - never partially written
except Exception as e:
# On failure, no partial file exists
# Temp files are automatically cleaned up
print(f"Upload failed safely: {e}")Different providers implement atomic writes differently:
| Provider | Atomic Write Method | Progress Support |
|---|---|---|
| Memory | Temp buffer → swap | ✅ Yes |
| Filesystem | Temp file → `os.replace()` (OS-level atomic) | ✅ Yes |
| SQLite | Temp file → atomic move | ✅ Yes |
| S3 | Multipart upload (inherently atomic) | ✅ Yes |
| E2B Sandbox | Temp file → `mv` command (atomic) | ✅ Yes |
Key Features:
- Memory-efficient processing of large files
- Real-time progress tracking with callbacks
- Atomic write safety prevents corruption
- Automatic temp file cleanup on errors
- Customizable chunk sizes
- Works with all storage providers
- Perfect for streaming uploads/downloads
- Both sync and async callback support
Combine multiple storage providers in a single filesystem:
from chuk_virtual_fs import AsyncVirtualFileSystem
async def main():
async with AsyncVirtualFileSystem(
provider="memory",
enable_mounts=True
) as fs:
# Mount S3 bucket at /cloud
await fs.mount(
"/cloud",
provider="s3",
bucket_name="my-bucket",
endpoint_url="https://my-endpoint.com"
)
# Mount local filesystem at /local
await fs.mount(
"/local",
provider="filesystem",
root_path="/tmp/storage"
)
# Now use paths transparently across providers
await fs.write_file("/cloud/data.txt", "Stored in S3")
await fs.write_file("/local/cache.txt", "Stored locally")
await fs.write_file("/memory.txt", "Stored in memory")
# List all active mounts
mounts = fs.list_mounts()
for mount in mounts:
print(f"{mount['mount_point']}: {mount['provider']}")
# Copy between providers seamlessly
await fs.cp("/cloud/data.txt", "/local/backup.txt")
# Unmount when done
await fs.unmount("/cloud")
import asyncio
asyncio.run(main())

Key Features:
- Unix-like mount system
- Transparent path routing to correct provider
- Combine cloud, local, and in-memory storage
- Read-only mount support
- Seamless cross-provider operations (copy, move)
Recommended for most users - Mount virtual filesystems without kernel extensions!
from chuk_virtual_fs import SyncVirtualFileSystem
from chuk_virtual_fs.adapters import WebDAVAdapter
# Create a virtual filesystem
vfs = SyncVirtualFileSystem()
vfs.write_file("/documents/hello.txt", "Hello World!")
vfs.write_file("/documents/notes.md", "# My Notes")
# Start WebDAV server
adapter = WebDAVAdapter(vfs, port=8080)
adapter.start() # Server runs at http://localhost:8080
# Or run in background
adapter.start_background()
# Continue working...
vfs.write_file("/documents/updated.txt", "New content!")
adapter.stop()

Mounting in Your OS:
- macOS: Finder → Cmd+K → `http://localhost:8080`
- Windows: Map Network Drive → `http://localhost:8080`
- Linux: `davfs2` or your file manager
Why WebDAV?
- ✅ No kernel extensions required
- ✅ Works immediately on macOS/Windows/Linux
- ✅ Perfect for AI coding assistants
- ✅ Easy to deploy and test
- ✅ Background operation support
- ✅ Read-only mode available
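For example, a read-only background server for sharing generated reports. Note that the `readonly=True` keyword is an assumption based on the "Read-only mode available" bullet above; check the WebDAV examples for the exact option name:

```python
from chuk_virtual_fs import SyncVirtualFileSystem
from chuk_virtual_fs.adapters import WebDAVAdapter

vfs = SyncVirtualFileSystem()
vfs.write_file("/reports/summary.md", "# Q3 Summary")

# `readonly=True` is hypothetical here - the feature list only says a
# read-only mode exists, not what the parameter is called.
adapter = WebDAVAdapter(vfs, port=8080, readonly=True)
adapter.start_background()
# ... clients can browse but not modify ...
adapter.stop()
```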
Installation:
pip install "chuk-virtual-fs[webdav]"See: WebDAV Examples for detailed usage
Native filesystem mounting with full POSIX semantics.
from chuk_virtual_fs import AsyncVirtualFileSystem
from chuk_virtual_fs.mount import mount, MountOptions
async def main():
# Create virtual filesystem
vfs = AsyncVirtualFileSystem()
await vfs.write_file("/hello.txt", "Mounted!")
# Mount at /tmp/mymount
async with mount(vfs, "/tmp/mymount", MountOptions()) as adapter:
# Filesystem is now accessible at /tmp/mymount
# Any tool can access it: ls, cat, vim, TypeScript, etc.
await asyncio.Event().wait()
import asyncio
asyncio.run(main())

FUSE Options:
from chuk_virtual_fs.mount import MountOptions
options = MountOptions(
readonly=False, # Set True for a read-only mount
allow_other=False, # Set True to allow other users to access the mount
debug=False, # Enable FUSE debug output
cache_timeout=1.0 # Stat cache timeout in seconds
)

Installation & Requirements:
# Install package with FUSE support
pip install "chuk-virtual-fs[mount]"
# macOS: Install macFUSE
brew install macfuse
# Linux: Install FUSE3
sudo apt-get install fuse3 libfuse3-dev
# Docker: No system modifications needed!
# See examples/mounting/README.md for Docker testing

Docker Testing (No System Changes):
cd examples
./run_example.sh 5 # Basic FUSE mount test
./run_example.sh 6 # TypeScript checker demo

Why FUSE?
- ✅ Full POSIX semantics
- ✅ Works with any tool expecting a filesystem
- ✅ Perfect for MCP servers - Expose virtual filesystems to Claude Desktop and other MCP clients
- ✅ Ideal for AI + tools integration (TypeScript, linters, compilers, etc.)
- ✅ True filesystem operations (stat, chmod, etc.)
MCP Server Use Case:
# MCP server exposes a virtual filesystem via FUSE
# Claude Desktop can then access it like a real filesystem
async def mcp_filesystem_tool():
vfs = AsyncVirtualFileSystem()
# Populate with AI-generated code, data, etc.
await vfs.write_file("/project/main.ts", generated_code)
# Mount so tools can access it
async with mount(vfs, "/tmp/mcp-workspace", MountOptions()):
# Claude can now run: tsc /tmp/mcp-workspace/project/main.ts
# Or any other tool that expects a real filesystem
await process_with_real_tools()

See: FUSE Examples for detailed usage including Docker testing
| Feature | WebDAV | FUSE |
|---|---|---|
| Setup | No system changes | Requires kernel extension |
| Installation | `pip install` only | System FUSE + pip |
| Compatibility | All platforms | macOS/Linux (Windows WSL2) |
| POSIX Semantics | Basic | Full |
| Speed | Fast | Faster |
| MCP Servers | ⚠️ | ✅ Perfect - full tool compatibility |
| Use Case | Remote access, quick dev | MCP servers, local tools, full integration |
| Best For | Most users, simple sharing | MCP servers, power users, full POSIX needs |
Which Should You Use?
- Building an MCP server? → Use FUSE - Claude and MCP clients need full POSIX semantics to run real tools
- Quick prototyping or sharing? → Use WebDAV - Works immediately, no system setup
- AI coding assistant with TypeScript/linters? → Use FUSE - Full tool compatibility guaranteed
- Remote file access? → Use WebDAV - Built for network access, mounts in Finder/Explorer
- Running in Docker/CI? → Use FUSE - No kernel extensions needed in containers
- Maximum performance with local tools? → Use FUSE - Native filesystem performance
- `mkdir(path)`: Create a directory
- `touch(path)`: Create an empty file
- `write_file(path, content)`: Write content to a file
- `read_file(path)`: Read content from a file
- `ls(path)`: List directory contents
- `cd(path)`: Change the current directory
- `pwd()`: Get the current directory
- `rm(path)`: Remove a file or directory
- `cp(source, destination)`: Copy a file or directory
- `mv(source, destination)`: Move a file or directory
- `find(path, recursive)`: Find files and directories
- `search(path, pattern, recursive)`: Search for files matching a pattern
- `get_node_info(path)`: Get information about a node
- `get_fs_info()`: Get comprehensive filesystem information
- `stream_write(path, stream, chunk_size=8192, progress_callback=None, **metadata)`: Write from an async iterator
  - `progress_callback`: Optional callback `function(bytes_written, total_bytes) -> None`
  - Supports both sync and async callbacks
  - Atomic write safety with automatic temp file cleanup
- `stream_read(path, chunk_size=8192)`: Read as an async iterator
- `mount(mount_point, provider, **provider_kwargs)`: Mount a provider at a path
- `unmount(mount_point)`: Unmount a provider
- `list_mounts()`: List all active mounts
- FUSE Mounting for MCP: Expose virtual filesystems to Claude Desktop and MCP clients
- MCP server maintains virtual filesystem with AI-generated code
- Mount via FUSE so Claude can run real tools (TypeScript, linters, compilers)
- Full POSIX semantics - works with ANY command-line tool
- Perfect for code generation → validation → iteration workflows
- See: examples/mounting/02_typescript_checker.py
Example MCP Integration:
# Your MCP server can expose a filesystem tool
@mcp.tool()
async def create_project(project_type: str):
vfs = AsyncVirtualFileSystem()
# Generate project structure
await vfs.write_file("/project/main.ts", generated_code)
# Mount so Claude can run tools on it
async with mount(vfs, "/tmp/mcp-workspace", MountOptions()):
# Now Claude can: tsc /tmp/mcp-workspace/project/main.ts
# Or: eslint /tmp/mcp-workspace/project/
# Any tool that expects a real filesystem works!
return "/tmp/mcp-workspace"Complete End-to-End MCP Workflow:
# 1. MCP Server Setup - your_mcp_server.py
from chuk_virtual_fs import AsyncVirtualFileSystem
from chuk_virtual_fs.mount import mount, MountOptions
import mcp
@mcp.tool()
async def generate_and_validate_typescript(code: str):
"""Generate TypeScript code and validate it with tsc."""
# Step 1: Create virtual filesystem with AI-generated code
vfs = AsyncVirtualFileSystem()
await vfs.mkdir("/project/src")
await vfs.write_file("/project/src/main.ts", code)
await vfs.write_file("/project/tsconfig.json", '''{
"compilerOptions": {
"target": "ES2020",
"module": "commonjs",
"strict": true
}
}''')
# Step 2: Mount the virtual filesystem via FUSE
mount_point = "/tmp/mcp-typescript-workspace"
async with mount(vfs, mount_point, MountOptions()):
# Step 3: Claude can now run REAL TypeScript compiler
result = await run_bash_command(f"tsc --noEmit {mount_point}/project/src/main.ts")
if result.exit_code != 0:
# Step 4: Return errors to Claude for fixes
return {
"status": "error",
"errors": result.stderr,
"path": mount_point
}
# Step 5: Success! Run linter for extra validation
lint_result = await run_bash_command(f"eslint {mount_point}/project/src/")
return {
"status": "success",
"typescript_check": "passed",
"lint_result": lint_result.stdout,
"path": mount_point
}
# 2. Claude Desktop sees this and can:
# - Call generate_and_validate_typescript() with AI-generated code
# - Get real TypeScript compiler feedback
# - Iterate on fixes based on actual tool output
# - Run any other tool (prettier, webpack, jest, etc.)

What happens:
- Your MCP server creates a virtual filesystem with AI-generated content
- Mounts it via FUSE at a real path
- Claude Desktop runs actual tools (tsc, eslint, etc.) via MCP bash commands
- Tools see a real filesystem and work perfectly
- Results flow back to Claude for iteration
Why this is powerful:
- ✅ No mocking tool behavior - use real compilers and linters
- ✅ Works with ANY tool expecting a filesystem
- ✅ Full validation and error messages
- ✅ Claude can iterate based on real tool feedback
- ✅ Virtual filesystem = easy cleanup, no state pollution
- WebDAV Mounting: Quick setup, no kernel extensions
  - AI generates code; mount it via WebDAV and tools can access it immediately
  - No system modifications required
  - Perfect for running TypeScript, linters, and formatters on AI-generated code
  - See: examples/webdav/02_background_server.py
- FUSE Mounting: Full POSIX integration for maximum tool compatibility
  - AI generates TypeScript → mount → `tsc` checks it → AI fixes errors
  - See: examples/mounting/02_typescript_checker.py
- Large File Processing: Stream large files (GB+) without memory constraints
  - Real-time progress tracking for user feedback
  - Atomic writes prevent corruption on network failures
  - Perfect for video uploads, data exports, and log processing
- Multi-Provider Storage: Combine local, cloud, and in-memory storage seamlessly
  - Mount S3 at `/cloud`, local disk at `/cache`, memory at `/tmp`
  - Transparent routing to the correct provider
- Cloud Data Pipelines: Stream data between S3, local storage, and processing systems
  - Monitor upload/download progress
  - Automatic retry and recovery with atomic operations
- Development sandboxing and isolated code execution
- Educational environments and web-based IDEs
- Reproducible computing environments
- Testing and simulation with multiple storage backends
- Cloud storage abstraction for provider-agnostic applications
- Share filesystems via WebDAV without complex setup
chuk-virtual-fs is part of the CHUK toolkit for building AI agents and MCP servers:
- chuk-virtual-fs - This library: Virtual filesystem with mounting (WebDAV/FUSE)
- chuk-mcp-server - MCP server framework that uses chuk-virtual-fs for workspace management
- chuk-tools - Command-line tools that work with mounted virtual filesystems
Example integration:
- Use `chuk-virtual-fs` to create a virtual filesystem with AI-generated code
- Mount it via FUSE or WebDAV
- Use `chuk-tools` or any standard tools to validate, lint, and process the code
- Wrap it all in `chuk-mcp-server` to expose it to Claude Desktop and other MCP clients
Perfect for:
- Building MCP servers that need filesystem workspaces
- Creating sandboxed environments for AI agents
- Tool-augmented AI workflows (code generation → validation → iteration)
- Python 3.8+
- Optional dependencies:
  - `sqlite3` for the SQLite provider
  - `boto3` for the S3 provider
  - `e2b-code-interpreter` for the E2B sandbox provider
  - `wsgidav` and `cheroot` for WebDAV mounting
  - `pyfuse3` for FUSE mounting
  - System FUSE (macFUSE on macOS, fuse3 on Linux) for FUSE mounting
Contributions are welcome! Please submit pull requests or open issues on our GitHub repository.
MIT License
This library provides a flexible virtual filesystem abstraction. Always validate and sanitize inputs in production environments.