MCP and the future of AI-driven software delivery
In just a few short years, AI has reshaped how software is developed. While early code assistants focused on predictive code completion, today’s tools go much further. They can reason, plan, and take meaningful action in complex development environments.
What sets these new LLM-powered assistants apart is their ability to access external systems, analyze real-time data, and apply logic based on dynamic context. Function calling was a key early step; it allowed models to call external tools to handle specialized tasks using structured inputs. But a more recent development is pushing things even further: the Model Context Protocol (MCP) makes it far easier to connect language models with the tools, services, and data sources developers rely on every day, helping them work faster and solve problems more efficiently.
In this newsletter, we’ll explore what MCP is, how it works, and why it’s generating a lot of excitement about the future of intelligent, autonomous developer workflows. We’ll also walk through how CircleCI is putting MCP into practice to unlock new levels of automation and productivity in your CI/CD pipelines. Let’s dive in!
What is MCP?
The Model Context Protocol (MCP) is a standardized way for language models to understand and interact with external tools, data sources, and services. It defines how context is presented and how tools can be invoked, giving models the ability to take meaningful action, not just generate text.
To understand what makes MCP so powerful, it helps to break down the name:
Model refers to a large language model (LLM), like GPT-4 or Claude — systems that generate output based on input. These are the same models powering AI coding assistants like Claude Code, Cursor, Windsurf, and Lovable, which help developers write, navigate, and reason about code more effectively.
Context is everything the model has access to when making decisions. That includes the current prompt, previous interactions, and structured external information like documentation, file structures, or available APIs. Richer context leads to smarter, more relevant responses. Before MCP, managing that context was cumbersome and inconsistent.
Finally, a protocol is a set of rules that standardizes how information is exchanged between systems. In the case of MCP, it defines a consistent way for external tools to share information (context) with models: what they do, how to call them, and what inputs they expect.
Without a standard like MCP, every model-to-tool connection had to be built manually: one tool, one model, one integration at a time. MCP replaces that complexity with a single, reusable interface: developers configure an MCP client once, register the servers they want to use, and any compatible model can interact with them. Tool providers (rather than end users or model providers) are responsible for exposing functionality through their MCP servers so models can discover and use those capabilities automatically.
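To make that concrete, here is roughly what that one-time client configuration looks like. This is a sketch in the style of a Claude Desktop `mcpServers` file; the exact file name and schema vary by host, and the GitHub server shown is one of the reference servers published alongside the spec:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

Once registered, any MCP-aware assistant in that host can discover the server's tools automatically, with no per-model glue code.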
MCP was developed by Anthropic, the team behind Claude, and released as an open standard on November 25, 2024. While early adopters in the open-source and research communities showed interest, it wasn’t until March 2025—when OpenAI, Anthropic’s primary competitor, announced it would adopt the standard for its own models—that MCP’s momentum became undeniable across the AI ecosystem.
Since then, interest in MCP has surged as developers increasingly embrace vibe coding, a new way of working in which AI assistants help explore, edit, test, and debug code through fluid, conversational interaction. As developers push for more dynamic, tool-driven workflows, the ecosystem has responded quickly: tool providers are racing to meet the demand by building MCP servers to make their services accessible in today’s dev environments.
To generate this data, we programmatically queried the GitHub API to identify public repositories with “MCP” in their titles. The results were aggregated by repository creation date, allowing us to visualize the growth trend of MCP server projects over time. A clear inflection point is visible in March 2025.
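For the curious, that methodology boils down to a few calls against GitHub's repository search API. Here is a simplified TypeScript sketch; the query and date ranges are illustrative, and a real analysis would need to authenticate and respect search rate limits:

```typescript
// Count public repos with "MCP" in the name, created in a given date range.
async function countMcpRepos(from: string, to: string): Promise<number> {
  const q = encodeURIComponent(`mcp in:name created:${from}..${to}`);
  const res = await fetch(
    `https://api.github.com/search/repositories?q=${q}&per_page=1`,
    { headers: { Accept: "application/vnd.github+json" } }
  );
  const data = (await res.json()) as { total_count: number };
  return data.total_count; // total_count is exact even when results are capped
}

// Compare repo creation counts around the March 2025 inflection point.
console.log("Feb 2025:", await countMcpRepos("2025-02-01", "2025-02-28"));
console.log("Mar 2025:", await countMcpRepos("2025-03-01", "2025-03-31"));
```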
How does MCP work?
MCP organizes model-to-tool interactions around a simple, scalable host-client-server architecture. Each part of the system plays a clear role:
Host: The environment where the AI assistant runs, such as an IDE, a browser-based dev tool, or a desktop application. It manages the user session, embeds the MCP client, and coordinates interactions between the user, the model, and MCP servers.
Client: The runtime component embedded in the host. It handles tool discovery, manages communication with external servers, and translates model requests into structured, executable actions.
Server: A service that exposes external tools, resources, or prompts to models. Servers manage the real integrations, whether that’s interacting with APIs, fetching files, running tests, or pulling production data.
The host connects the model to the client. The client discovers available MCP servers, requests capabilities, and sends invocation requests. Servers respond using standardized JSON-RPC 2.0 messages, communicating either over local STDIO (for on-device integrations) or HTTP + Server-Sent Events (for remote connections).
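On the wire, a tool invocation is an ordinary JSON-RPC 2.0 exchange. The `tools/call` method below comes from the MCP specification; the tool name and arguments are hypothetical, shown here as TypeScript object literals for readability:

```typescript
// Request: the client asks a server to invoke a tool (tool name illustrative).
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call", // MCP-specified JSON-RPC method for tool invocation
  params: {
    name: "fetchBuildLogs",
    arguments: { pipelineId: "abc123" },
  },
};

// Response: the server returns the tool's result as structured content blocks.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "Job 'test' failed: 3 assertion errors" }],
  },
};
```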
By clearly separating these responsibilities, MCP makes it easy to plug new capabilities into AI workflows without needing custom, one-off integrations for every tool or system.
MCP defines three primary types of capabilities, illustrated in the server sketch after this list:
Tools – Executable functions the model can call, with defined inputs and outputs (e.g., runTests, getPRStatus, fetchBuildLogs).
Resources – Structured external data the model can use for context, such as documentation, schemas, file trees, or configurations.
Prompts – Predefined templates that guide the model’s behavior in specific workflows or interaction patterns.
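Here is a minimal server sketch using Anthropic's official TypeScript SDK (`@modelcontextprotocol/sdk`). The `runTests` tool and `config://app` resource are hypothetical examples, not a real integration:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo-ci", version: "0.1.0" });

// A Tool: an executable function the model can call with typed inputs.
server.tool(
  "runTests",
  { suite: z.string().describe("Test suite to run") },
  async ({ suite }) => ({
    content: [{ type: "text", text: `Suite '${suite}': 42 passed, 0 failed` }],
  })
);

// A Resource: structured external data the model can read for context.
server.resource("app-config", "config://app", async (uri) => ({
  contents: [{ uri: uri.href, text: "retries: 2\nparallelism: 4" }],
}));

// Serve over STDIO so a local host (such as an IDE) can connect.
await server.connect(new StdioServerTransport());
```

A host pointed at this server will list `runTests` in its tool discovery step automatically; the model decides when to call it based on the conversation.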
If all this sounds familiar, it's because MCP was heavily inspired by the Language Server Protocol (LSP), the architecture that standardized how editors and IDEs talk to language backends (like the one powering CircleCI’s VS Code extension). Just like LSP unlocked intelligent features across developer tools, MCP brings that same modularity and consistency to AI-powered development environments.
By following this structure, MCP enables teams to build assistants that are truly context-aware, task-capable, and easy to scale across the software delivery lifecycle.
A note from our CTO
Engineers are wired to make things better.
You spot problems. You create solutions. You build systems that millions depend on, even if the world rarely sees the care behind them.
That mindset shaped how we approached MCP.
MCP gives AI assistants a new way to interact with real systems. We built the CircleCI MCP server so your pipelines and your build signals are accessible exactly when and where you need them.
Because honoring the way you work means giving you tools that adapt, assist, and strengthen what you already do best.
We're proud to bring that power into your hands—and proud to be the only CI platform built to fuel deeper thinking, sharper systems, and stronger teams with every commit.
- Rob Zuber, CircleCI CTO
What can MCP do for you?
For developers and engineering teams, MCP gives AI coding tools structured access to the systems they rely on, making assistants more accurate, more helpful, and far easier to integrate into real workflows.
AI coding assistants are powerful, but they can hit a ceiling when they don’t know what tools you’re using or what’s happening in your environment. MCP lifts that ceiling.
With MCP, assistants can interact with:
Code and CI tools like GitHub, GitLab, and CircleCI to understand PRs, builds, and code quality.
Collaboration platforms like Slack to retrieve messages, notify teammates, or summarize threads.
Knowledge and file systems like Google Drive or PostgreSQL to look up internal docs, schemas, or config files.
Observability tools like Sentry or Axiom to retrieve logs and surface relevant alerts during development.
Automation and web tools like Puppeteer or Brave Search to fetch content, fill in context, or simulate user workflows.
With growing support from widely used tools and cloud platforms, MCP enables assistants to help with the full development lifecycle, from writing code to reviewing PRs, debugging issues, coordinating releases, and communicating with your team.
For developers, that means faster answers, smarter suggestions, and fewer tabs. For teams, it means AI you can trust to work across your stack, without rebuilding your tooling from scratch.
To show how this works in practice, let’s take a look at how CircleCI is using MCP to make software delivery faster, smarter, and more resilient.
CircleCI + MCP: A new layer of intelligence for CI/CD
At CircleCI, we believe AI assistants shouldn’t just help you write code; they should help you ship it.
That’s why we’ve built an MCP server to make CircleCI workflows accessible to AI assistants in a structured, meaningful way. It allows models to query pipeline history, retrieve detailed failure logs, identify flaky tests, and validate configuration files, all without developers needing to leave their environment or dig through dashboards.
With CircleCI’s MCP server, your AI assistant can securely access:
Logs from failed jobs and pipelines, either via direct URLs or local project context
Analysis of flaky tests in your project history
Validation results for your .circleci/config.yml
Build and workflow statuses for current and recent pipelines
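Setup follows the same pattern as any other MCP server: register it once in your client configuration. The sketch below assumes the package name (`@circleci/mcp-server-circleci`) and token variable as published at launch; check the CircleCI docs for current details:

```json
{
  "mcpServers": {
    "circleci": {
      "command": "npx",
      "args": ["-y", "@circleci/mcp-server-circleci"],
      "env": { "CIRCLECI_TOKEN": "<your-circleci-api-token>" }
    }
  }
}
```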
Imagine you're starting a new feature branch. Before you even write a line of code, your assistant could check the health of your recent builds, warn you about unstable pipelines, and even suggest optimizations for your CircleCI configuration to speed up your feedback loop.
As you build and push commits, it could monitor pipelines in the background, proactively summarizing build statuses, surfacing linting errors, and flagging flaky tests inside your editor without breaking your flow.
When you open a pull request, the assistant could automatically audit CI signals, check for misconfigurations, and highlight test instability, helping reviewers focus their attention where it matters most.
And when you're ready to merge and deploy, it could confirm that all required jobs passed, validate deployment-critical config, and double-check for known flakiness, quietly and automatically.
Even post-deploy, it could keep monitoring, linking failures or new test instability back to specific builds or configuration changes, helping your team triage faster without endless dashboard switching.
Today, the CircleCI MCP server powers fast feedback with build logs, test insights, and config validation. But that's just the start. We're building tools to manage test data, validate AI-driven output, improve observability, and automatically fix broken pipelines before they slow you down.
Our goal in building MCP functionality is to give developers AI copilots that understand the full path code takes to production, providing early warnings, real-time insights, and automated validation at every step.
Conclusion
The Model Context Protocol represents a major shift in how AI systems interact with the real world. By creating a common language between models and external tools, MCP moves us beyond isolated chat interfaces and toward assistants that can reason, act, and collaborate meaningfully within complex technical environments.
As MCP adoption grows, we’ll see AI assistants helping developers push more changes, faster. But speed alone isn’t enough. Every change still needs to be validated, every failure diagnosed, and every deployment verified against real-world conditions.
That’s where CircleCI will continue to play a critical role: providing the signals, the guardrails, and the feedback loops teams need to turn more code, shipped faster, into better code, safely delivered to users. We’re here to support developers as they build what’s next, with smarter tools, stronger systems, and a faster path from idea to production.
If you’re excited about where AI-powered development is headed, now is a great time to dive in. Sign up for a free CircleCI account, check out our blog and MCP cookbook for inspiration, and start building better today.