
Accelerate Development with the Unleash MCP server

AI coding assistants are everywhere. One minute you’re writing a simple function, the next minute they’re suggesting a full refactor, 200 lines deep, with tests you definitely didn’t ask for. According to recent research, three out of four developers already use AI tools every day, which means the “AI wrote this” moment is pretty much part of normal development now.

And honestly? It’s great. We get fewer boring tasks, faster prototypes, and a teammate who never gets tired of writing boilerplate. But there’s a catch. Sometimes, AI-generated code can be a little too enthusiastic. Studies show that stability dips as AI usage increases, and security researchers have found that AI-written code carries a noticeably higher rate of vulnerabilities.

Combine that with the fact that every team uses a different assistant, and you end up with AI-generated features that don’t always play nicely together. Not just mismatched styles, but conflicting assumptions that can break real workflows.

This is exactly where feature flags come into play. They give you a safe way to test, contain, and roll back AI-generated changes without slowing down development. And if AI is going to help write more of your code, it should also follow the same rules you expect from humans: naming conventions, flag types, rollout patterns, and cleanup.

And that’s why we built the Unleash MCP server. It gives AI assistants a clear set of instructions for creating and managing feature flags properly. Instead of free-styling flag names or sprinkling conditionals everywhere, your assistant now understands how to follow FeatureOps best practices and your team’s conventions. When to create a flag, how to implement it in your language of choice, how to avoid duplicates, and even how to clean it up.

AI is moving fast. Feature flags help you move fast safely. And the Unleash MCP server helps your AI tools play by the same rules.

Why use an MCP server with Unleash?

The Unleash MCP server helps you get more out of both your FeatureOps platform and your AI assistants.

Here are a few scenarios where it shines.

Standardized flag creation

The MCP server enforces Unleash’s best practices for naming, typing, and documenting feature flags. Whether your team uses release flags for gradual rollouts or experiment flags for A/B testing, the server ensures consistent structure and intent across projects.

This means that when an AI assistant creates a flag in your repository, it automatically follows your organization’s conventions.

For example, instead of generating an arbitrary toggle, the assistant will understand the purpose of the change and provide meaningful details on flag creation:


{  
  "name": "new-checkout-flow",  
  "type": "release",  
  "description": "Gradual rollout of the redesigned checkout"  
}  

That’s the difference between ad-hoc automation and guided, policy-aligned automation.

Context-aware recommendations

The MCP server can analyze code changes and decide when a feature flag is necessary, or when an existing one should be reused. It considers the size and risk of a change, its purpose, and existing code patterns before making a recommendation.

For instance, if your AI assistant uses the MCP tools to evaluate a high-risk change in a payment service or authentication layer, it will suggest adding a feature flag. But for a minor CSS fix or documentation update, it will skip flag creation entirely.

This helps teams avoid flag fatigue and focus on the changes that actually need protection or gradual rollout.
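
For illustration, a low-risk change might be evaluated like this (the exact fields and values are a sketch, not the server’s literal output):

evaluate_change --description "Fix button padding on checkout page" --riskLevel low

{
  "needsFlag": false,
  "recommendation": "skip",
  "confidence": 0.9
}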

Reduced duplication

Duplicate feature flags can quickly become a maintenance headache. The Unleash MCP server includes built-in detection logic that checks your Unleash instance and codebase for similar flags before creating a new one.

Naming matters even more when you consider that old feature flag names should not be reused: doing so can accidentally bring an outdated feature back to life. Reusing an existing flag sometimes makes sense, when multiple parts of the system genuinely depend on the same functionality, but most of the time you need a brand-new, unique name. The Unleash MCP server helps by analyzing the context of your change and deciding whether the name you are considering represents the same feature or whether you should generate a new flag to avoid conflicts.
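
As a sketch (the parameter and response fields are illustrative), the duplicate check might surface an overlapping flag like this:

detect_flag --description "Redesign the checkout page"

{
  "matches": [
    { "name": "new-checkout-flow", "similarity": "high" }
  ],
  "recommendation": "reuse_existing"
}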

Consistent implementation

Even with well-defined flags, implementation details often vary across projects and languages. The MCP server solves this by providing framework-specific code snippets and wrapping patterns, so your assistant can implement flags in a way that matches your stack and codebase.

For example, with React Hooks:


// Guarding the new checkout flow in React  
const enabled = useFlag("new-checkout-flow");

return enabled ? (
  <NewCheckout onSuccess={trackConversion} />
) : (
  <LegacyCheckout />
);

Or Django views:


def checkout(request):  
    ctx = {"userId": str(request.user.id)}  
    if not unleash_client.is_enabled("new-checkout-flow", context=ctx):  
        return legacy_checkout(request)

    return new_checkout_experience(request)

Or Go handlers:

  
func (s *Server) Checkout(w http.ResponseWriter, r *http.Request) {  
    ctx := unleash.Context{UserId: userIDFromRequest(r)}  
    if !s.Unleash.IsEnabled("new-checkout-flow", unleash.WithContext(ctx)) {  
        legacyCheckout(w, r)  
        return  
    }  
    newCheckoutFlow(w, r)  
}  

By automatically generating code that fits each framework, the MCP server helps enforce consistency while saving time during rollout.

Streamlined cleanup

Feature flags are temporary by design, but cleanup often gets forgotten. The cleanup_flag tool in the MCP server identifies where a flag is used, suggests what can be safely removed, and even provides hints for validation before deletion.

This makes it easier for AI assistants (or humans) to generate pull requests that remove old code paths once a feature is fully rolled out, keeping your repositories healthy and up to date.
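
A hypothetical cleanup session might look like this (the file paths and response fields are illustrative):

cleanup_flag --flagName new-checkout-flow

{
  "usages": [
    "src/components/Checkout.jsx",
    "services/checkout/views.py"
  ],
  "suggestion": "Remove the legacy checkout branch and keep the new path",
  "validation": "Search for remaining references before archiving the flag"
}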

In short, the Unleash MCP server gives your AI development tools a clear, opinionated contract for working with Unleash, helping you move faster, maintain consistency, and reduce the risk of technical debt.

How it works

The Unleash MCP server connects an MCP-compatible agent (such as Claude Code or Codex) to your Unleash instance, exposing a focused set of tools for managing feature flags through code or AI assistants.

Instead of calling the Admin API directly, the MCP server wraps Unleash logic, validation, and best practices into a consistent protocol. This means your AI agent can reason about when and how to create, update, or clean up feature flags, not just execute raw API calls.
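
Under the hood, every tool call is a standard MCP JSON-RPC request from the agent to the server. A simplified sketch (the arguments mirror the example later in this post) looks like this:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "evaluate_change",
    "arguments": {
      "description": "Add Stripe payment processing",
      "riskLevel": "high"
    }
  }
}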

 

Core workflow tools

The server revolves around four core tools that handle the complete lifecycle of a feature flag.

Together, they form the “evaluate → create → wrap” loop.

evaluate_change

Looks at a code change or description and determines whether it should be behind a feature flag. It considers factors like risk, whether the changed code is already protected by an existing flag, and the appropriate flag type, then returns a recommendation with a confidence score.

It also orchestrates other tools automatically, such as detect_flag, to find potential duplicates.

detect_flag

Scans your codebase and recent commits for existing feature flags that match the described change. This helps prevent creating duplicate flags and promotes reuse.

It uses semantic matching, filename hints, and pattern detection across multiple languages.

create_flag

When a new flag is needed, this tool creates it through the Unleash API. It enforces naming conventions, validates metadata, and ensures that the flag is assigned the correct flag type (for example, release, experiment, operational, kill-switch, or permission).

wrap_change

Once the flag exists, this tool provides language- and framework-specific code templates to guard your new feature. Whether you’re using React, Django, Express, or Go, wrap_change gives you inline examples and search instructions to follow existing patterns in your codebase.

 

Supporting tools

Beyond the core workflow, the server also provides several supporting tools to handle rollout, maintenance, and cleanup tasks. These tools are useful both on their own and when building higher-level orchestration on top of MCP.

set_flag_rollout

Configures rollout strategies for a flag, such as percentage-based releases or user targeting. This tool focuses on strategy setup and doesn’t enable feature flags directly, giving you precise control over rollout conditions.
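
For example, configuring a percentage-based rollout might look like this (the parameter names are illustrative; Unleash calls this built-in strategy flexibleRollout):

set_flag_rollout --flagName new-checkout-flow --environment production --strategy flexibleRollout --percentage 25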

get_flag_state

Retrieves metadata for a feature flag, including its enabled/disabled state per environment, rollout strategies, and basic project context.

This is useful when you want to quickly check the current status of a flag in your IDE before deciding what to do next. For example, enabling an environment, adjusting a rollout, or confirming that a flag exists before wrapping new code.
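
A quick status check might look like this (the response shape is a sketch):

get_flag_state --flagName new-checkout-flow

{
  "name": "new-checkout-flow",
  "type": "release",
  "environments": {
    "development": { "enabled": true },
    "production": { "enabled": false }
  }
}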

toggle_flag_environment

Enables or disables a feature flag in a specific environment. This lets agents or automation scripts promote flags across environments safely, such as moving from staging to production after a successful test.

remove_flag_strategy

Removes a specific strategy from a flag in a given environment. Use this when cleaning up temporary rollouts or transitioning a feature from partial to full release.
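
Together, these two tools cover a typical promotion flow; a sketch (parameter names illustrative) might be:

toggle_flag_environment --flagName new-checkout-flow --environment production --enabled true
remove_flag_strategy --flagName new-checkout-flow --environment staging --strategyId {{strategy-id}}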

cleanup_flag

Generates structured guidance for safely removing flagged code paths once you fully roll out a feature. This includes suggestions on which files or lines to delete and how to verify that cleanup is complete.

How tools work together

Each tool is small and focused, but together they form a clean orchestration model that AI agents can reason about:

  
evaluate_change  
   ├─ detect_flag  → checks for duplicates  
   ├─ create_flag  → creates new flag (if needed)  
   └─ wrap_change  → suggests implementation patterns  

The other tools can be called independently, or in sequence, as part of CI/CD or automated lifecycle management. This modular design keeps the server’s surface area small while allowing teams to plug it into broader development workflows without friction.

Quick setup

The MCP server works out of the box with any MCP-compatible agent, and you can also run it standalone.

You’ll need the following to get started:

  • Node.js 18 or newer
  • Yarn or npm
  • Access to an existing Unleash instance
  • A personal access token with permission to create flags

1. Agent setup

For Claude Code, run the following command from your project root:


claude mcp add unleash \
  --env UNLEASH_BASE_URL={{your-instance-url}} \
  --env UNLEASH_PAT={{your-personal-access-token}} \
  -- npx -y @unleash/mcp@latest --log-level error

For Codex CLI:


codex mcp add unleash \
  --env UNLEASH_BASE_URL={{your-instance-url}} \
  --env UNLEASH_PAT={{your-personal-access-token}} \
  -- npx -y @unleash/mcp@latest --log-level error  

When using Cursor, add this to your mcp.json file:

  
{
  "mcpServers": {
    "unleash": {
      "command": "npx",
      "args": ["-y", "@unleash/mcp@latest"],
      "env": {
        "UNLEASH_BASE_URL": "value",
        "UNLEASH_PAT": "value",
      }
    }
  }
}

For VS Code and Copilot, add this to your .vscode/mcp.json file:

  
{  
  "servers": {
    "unleash-mcp": {
      "name": "Unleash MCP",
      "command": "npx",
      "args": ["-y","@unleash/mcp@latest","--log-level","error"],
      "env": {
        "UNLEASH_BASE_URL": "{{your-instance-url}}",
        "UNLEASH_PAT": "{{your-personal-access-token}}"
      }
    }
  }
}

For Windsurf (Cascade), add this to your ~/.codeium/windsurf/mcp_config.json file:

  
{  
  "mcpServers": {
    "unleash-mcp": {
      "command": "npx",
      "args": ["-y","@unleash/mcp@latest"],
      "env": {
        "UNLEASH_BASE_URL": "{{your-instance-url}}",
        "UNLEASH_PAT": "{{your-personal-access-token}}"
      }
    }
  }
}

2. Run it standalone with npx

You can also run the server manually with npx:


UNLEASH_BASE_URL={{your-instance-url}} \
UNLEASH_PAT={{your-personal-access-token}} \
UNLEASH_DEFAULT_PROJECT={{default_project_id}} \
npx @unleash/mcp@latest --log-level debug

3. Local development

If you’d rather explore or contribute, clone the repository and start locally:

  
git clone https://github.com/Unleash/unleash-mcp.git  
cd unleash-mcp  
yarn install  
cp .env.example .env # edit your variables here  
yarn dev  


For full configuration details, check out the README on GitHub.

Example: Evaluating and wrapping a change

Here’s how an AI assistant (or you) might use the tools:

  
# Step 1: Evaluate the change  
evaluate_change --description "Add Stripe payment processing" --riskLevel high  

The server responds with a recommendation, for example:

  
{  
  "needsFlag": true,  
  "recommendation": "create_new",  
  "suggestedFlag": "stripe-payment-integration"  
}  

Then, create and wrap the flag:

  
create_flag --name stripe-payment-integration --type release  
wrap_change --flagName stripe-payment-integration --frameworkHint Django  

This produces a ready-to-use code snippet like:

  
if client.is_enabled("stripe-payment-integration"):  
    process_payment()  

Design principles

The Unleash MCP server follows a few simple design principles:

  • Thin surface area: Each tool does one thing well.
  • Explicit validation: Every request is validated before hitting the Unleash API.
  • Error normalization: Errors are standardized with helpful hints, as the sketch after this list shows.
  • Best practices by design: Guidance from the Unleash documentation is built into every response.
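
For instance, a normalized error with a hint might look like this (the shape is illustrative, not the server’s literal output):

{
  "error": "flag_already_exists",
  "message": "A flag named new-checkout-flow already exists in this project",
  "hint": "Run detect_flag to check for similar flags before creating a new one"
}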

You can explore the full architecture and source code on GitHub.

Give it a try and share your feedback

The Unleash MCP server is still experimental, and we’re looking for early adopters to test it, suggest ideas, and help shape its direction.

If you’re building internal tools, AI assistants, or automation workflows that use Unleash, we’d love your input.

Join the conversation in our Slack channel #mcp-server or open an issue on GitHub.

You can also reach us directly at beta@getunleash.io.
