Automated FeatureOps just got real: Impact Metrics + MCP server
Teams are shipping more code than ever, and AI is amplifying that trend. Whether the change comes from a human or an AI assistant, the expectations stay the same: ship fast, keep things stable, and avoid surprises in production.
We launched the Unleash MCP server last week, and now we’re rolling out Impact Metrics to go with it. Together, they form a solid foundation for safe, automated FeatureOps with AI in the loop.
Both features point in the same direction. Your codebase keeps growing, your AI tools keep automating more of your workflow, and Unleash now helps you automate the safety side of that workflow too.
Why these two features matter together
Impact Metrics give you real production signals tied to your feature flags. Things like request rates, error counts, memory usage, or p95 latency get collected straight from your app and visualized inside the Unleash UI.
The Unleash MCP server gives your AI tools a contract for managing flags safely. It guides assistants when creating, wrapping, and cleaning up feature flags.
When you put the two together, you can start doing things like:
- Automatically determine which code changes need a feature flag and create those flags directly from your IDE.
- Automatically progress a rollout when metrics look healthy.
- Pause a rollout when latency or error rates spike.
- Let an AI assistant create flags or adjust strategies while still following your guardrails.
- Validate whether a new feature actually moves the needle, without wiring up a separate telemetry system.
This is what automated release progression starts to look like in practice: code is automatically wrapped in flags, signals come from your app, rollout and rollback decisions are safely automated, and your tools (including AI) follow the same rules every time.
Coordinating complex rollouts without the chaos
Modern teams rarely ship in a straight line. You might have a backend service rolling out a new code path, a frontend experiment running in parallel, and an AI assistant generating changes in a completely different repo. Coordinating all of that usually means dashboards, screenshots, and a Slack thread asking “is it safe to go to 50% yet?”
Impact Metrics and the MCP server cut through that noise by giving every service the same shared signals and predictable rules. It’s the difference between guessing your way through a rollout and letting automation handle the boring parts with real data.
Impact Metrics: data that drives your rollout
Impact Metrics are time-series metrics you record directly from the Unleash SDK. No extra infra. No third-party dashboards. Just your app sending counters, gauges, and histograms to Unleash so you can track the behavior of a specific flag or release plan.
Some examples from real workflows:
- Increasing a rollout percentage only if error rates remain below a threshold
- Monitoring whether users are actually adopting a feature during rollout
- Tracking request durations (p50, p95, p99) while gradually turning something on
- Validating that a fix reduces error counts before fully enabling it
Here’s what that looks like in the UI:
You can create charts directly in Impact Metrics → New Chart, pick your metric (for example request_time_ms), choose a time window, and filter by app or environment. A few minutes later you get a clear view of how a feature behaves over time.
There are three types of Impact Metrics:
- counters for values that only ever go up, like error counts
- gauges for values that can go up and down, like memory usage
- histograms for distributions, like request latency
The Node SDK already supports all three metric types. More SDKs are coming, so if you want your language supported, ping us at beta@getunleash.io.
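To make that concrete, here’s a minimal sketch of recording signals around a flagged code path with the Node SDK. The initialize and isEnabled calls are the standard unleash-client API; the metric helpers are hypothetical stand-ins for the Impact Metrics calls, and the URL, app name, flag name, and metric names are placeholders, so check the Impact Metrics docs for the exact method names in your SDK version.

```ts
import { initialize } from 'unleash-client';

const unleash = initialize({
  url: 'https://YOUR-UNLEASH-INSTANCE/api/',                         // placeholder URL
  appName: 'checkout-service',                                       // example app name
  customHeaders: { Authorization: process.env.UNLEASH_API_TOKEN ?? '' },
});

// Hypothetical stand-ins for the SDK's Impact Metrics calls. A counter only
// ever increases, a histogram records a distribution, and a gauge (not shown)
// tracks a value that moves up and down, like memory usage.
const incrementErrorCount = (): void => { /* counter: error_count */ };
const observeRequestTime = (ms: number): void => { /* histogram: request_time_ms */ };

export async function handleCheckout(newFlow: () => Promise<void>, oldFlow: () => Promise<void>) {
  const start = Date.now();
  try {
    if (unleash.isEnabled('new-checkout-flow')) {   // standard unleash-client call
      await newFlow();                              // flagged code path under rollout
    } else {
      await oldFlow();                              // existing behavior
    }
  } catch (err) {
    incrementErrorCount();                          // feeds an error-rate chart in Unleash
    throw err;
  } finally {
    observeRequestTime(Date.now() - start);         // feeds request_time_ms (p50/p95/p99) charts
  }
}
```

The metric name is what you pick when creating a chart in the UI, so it pays to keep names consistent across services.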
How ingestion works behind the scenes
Impact Metrics ingestion is intentionally lightweight, but there are still a few details worth knowing.
Metrics are batched on the same interval as regular SDK metrics, so you might see a small 1-2 minute delay between generation and ingestion. That’s normal.
If you run Unleash Edge, larger spikes can appear when an Edge instance reconnects and flushes its buffered data, which is expected behavior and not a sign of instability.
The idea is to give you reliable, low-maintenance signals without running a dedicated metrics pipeline. For most teams, this ends up being “just enough” to make release automation safe.
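That batching window is the same one you already control for regular SDK metrics. A minimal sketch of tuning it, assuming the Node SDK’s metricsInterval option (in milliseconds); the URL and app name are placeholders:

```ts
import { initialize } from 'unleash-client';

// Impact Metrics ride along with the regular SDK metrics flush, so the
// batching window is the metrics interval you already configure today.
const unleash = initialize({
  url: 'https://YOUR-UNLEASH-INSTANCE/api/',                         // placeholder URL
  appName: 'checkout-service',                                       // example app name
  customHeaders: { Authorization: process.env.UNLEASH_API_TOKEN ?? '' },
  metricsInterval: 60_000, // assumed option name: flush roughly once a minute
});
```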
Turning metrics into actionable rollout logic
One thing we’ve heard from a lot of teams is that dashboards alone don’t actually make releases safer. They help you see what’s happening, but someone still has to interpret the data and decide what to do next.
Impact Metrics change that dynamic because the data becomes part of the rollout logic itself. Instead of a human saying “looks good, let’s bump to 50%,” you encode that decision in a release plan and let Unleash enforce it in real time.
It’s similar to how CI/CD took human judgment out of running tests: the pipeline enforces the rule the same way every time. Now you can do the same for rollouts.
Automating release progression and rollbacks with metrics
Impact Metrics tie directly into release templates. Once configured, Unleash can evaluate live data during rollout and automatically decide whether to continue or pause.
Think along the lines of:
- “Hold at 25% for 48 hours, unless error_count per second goes above 10”
- “Automatically pause if p95 latency increases by more than 20%”
- “Move from canary to 50% rollout after 24 hours”
This means no manual toggling during off-hours, no babysitting dashboards, and fewer “sorry for the late-night rollback” moments. Your rollout rules become part of your release model instead of tribal knowledge.
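Unleash evaluates these rules for you from the release template, so there’s nothing to implement yourself. Purely to spell out the semantics, here’s an illustrative sketch of the decision a safeguard like the ones above encodes; the names and thresholds are made up for the example.

```ts
// Illustrative only: Unleash evaluates safeguards from your release template,
// so you don't write this code yourself. The sketch just spells out what a
// rule like “pause if p95 latency increases by more than 20%” decides.
type MetricSnapshot = {
  errorsPerSecond: number;  // e.g. derived from an error_count counter
  p95LatencyMs: number;     // e.g. from a request_time_ms histogram
};

type Decision = 'continue' | 'hold' | 'pause';

export function evaluateRolloutStep(
  baseline: MetricSnapshot,
  current: MetricSnapshot,
  hoursAtCurrentStep: number
): Decision {
  // Safeguard: pause if p95 latency regresses by more than 20% versus baseline.
  if (current.p95LatencyMs > baseline.p95LatencyMs * 1.2) return 'pause';

  // Safeguard: pause if the error rate crosses an absolute threshold.
  if (current.errorsPerSecond > 10) return 'pause';

  // Progression: hold at the current step for 48 hours, then continue.
  return hoursAtCurrentStep >= 48 ? 'continue' : 'hold';
}
```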
Automated progression and safeguards are an early access feature available in Unleash Cloud, so if you want to try them, reach out via Slack.
MCP server: your AI tools now follow your FeatureOps rules
We released the Unleash MCP server last week, and early adopters are already using it to keep AI-generated code from bypassing their normal flag workflows. The server plugs into tools like Claude Code, Cursor, Windsurf, or Codex and gives them a structured set of operations:
- evaluate_change to decide whether something needs a flag
- create_flag to generate a new flag with correct naming and metadata
- wrap_change to insert framework-specific guard code
- cleanup_flag to help delete flags once a feature is fully rolled out
- plus other supporting tools for rollout, enabling environments, and checking state
This shifts AI assistants from improvisation to policy-aligned automation. Instead of guessing how your team names flags or which flag type to use, they follow rules based on best practices Unleash has honed across thousands of production deployments. They detect duplicate flags and match your framework conventions and internal guidelines when wrapping code.
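To picture what wrap_change produces, here’s the kind of guard an assistant might insert in a Node service. The isEnabled call is the standard unleash-client API; the flag name, function, URL, and app name are illustrative, and the exact shape depends on your framework and conventions.

```ts
import { initialize } from 'unleash-client';

const unleash = initialize({
  url: 'https://YOUR-UNLEASH-INSTANCE/api/',                         // placeholder URL
  appName: 'billing-service',                                        // example app name
  customHeaders: { Authorization: process.env.UNLEASH_API_TOKEN ?? '' },
});

// After wrap_change, both code paths exist behind a flag that follows your
// naming conventions, so rollout and rollback are controlled from Unleash.
export function calculateInvoiceTotal(items: { price: number; qty: number }[]): number {
  const total = items.reduce((sum, item) => sum + item.price * item.qty, 0);
  if (unleash.isEnabled('new-invoice-rounding')) {
    // New code path introduced by the change.
    return Math.round(total * 100) / 100;
  }
  // Existing behavior, kept until the rollout completes.
  return total;
}
```

Both paths stay in place during the rollout; the old branch is exactly what cleanup_flag helps remove once the feature is fully rolled out.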
Keeping multi-repo and multi-language teams consistent
This becomes even more important in large organizations where code lives across multiple repos and languages. AI assistants are great at generating changes quickly, but they can also amplify inconsistency just as fast. A flag name created in Go might not match naming patterns in a TypeScript service. A Django snippet might not align with how your React team wraps UI components.
Over time, this fragments your FeatureOps model and makes automation harder. The MCP server helps prevent that drift by giving every assistant a shared contract, so the behavior is predictable no matter where the change is happening.
With Impact Metrics in the mix, AI agents can reliably automate more of your release workflow while you stay in control of the safety model.
Putting it together: automated FeatureOps with real guardrails
Here’s the workflow we see teams moving toward:
- A developer (or AI) proposes a code change.
- The Unleash MCP server evaluates the change and decides whether a feature flag is needed.
- The AI assistant creates the flag and wraps the code change in it, following your conventions.
- You start rolling out the feature using a release template.
- Impact Metrics track real behavior during rollout.
- Unleash automatically progresses or pauses based on the metrics.
- After the rollout finishes, the assistant uses cleanup tools to remove old code paths.
This is where the platform is heading: automated, data-driven releases that still respect engineering rules and real-world safety constraints.
Want to help shape what comes next?
Both Impact Metrics and the MCP server are evolving quickly, and we want feedback from real teams building real systems.
If you want to experiment with impact-driven rollouts, automated progression, safeguards, or AI-powered FeatureOps, join our community Slack or reach out directly at beta@getunleash.io.
The whole point of these features is to help teams move faster without losing control. If you’re experimenting with automation or building internal tooling, we’d love to hear what you need next.