AI Governance starts at runtime

The Cloud Security Alliance just published The State of AI Security and Governance, a report commissioned by Google Cloud based on a survey of 300 IT and security professionals. The headline finding is straightforward: organizations with mature AI governance adopt AI faster, actively train their teams on AI tools, and report significantly higher confidence in securing their AI systems.

The finding that caught our attention is more specific. Among organizations with comprehensive governance policies, 46% have already adopted agentic AI. Among those with policies still in development, only 12% have. Governance isn’t slowing adoption down. It’s what makes adoption possible.

This matters because AI is no longer an experiment for most organizations. According to the report, 60% are either already using agentic AI or plan to within 12 months. And 54% of organizations now use public frontier LLMs like GPT-4, Claude, or Gemini as part of their operations.

Yet only 27% of respondents said they feel confident they can secure AI used in core business operations.

That gap between adoption speed and governance readiness is where things get interesting. And it’s where we think runtime controls have a role that most organizations haven’t considered yet.

Governance is a multiplier, not a brake

The CSA report breaks down AI readiness by governance maturity, and the pattern is consistent across every metric. Organizations with comprehensive governance policies report 48% confidence in protecting their AI systems. Those with partial guidelines drop to 23%. Those still developing policies sit at 16%.

The same gradient shows up in staff training: 65% of organizations with comprehensive governance actively train their teams on AI tools, compared to much lower rates at less mature organizations. And when it comes to AI security experimentation, 70% of governance-mature organizations have already tested AI in their security workflows, versus 43% for those with partial guidelines and 39% for those still developing policies.

As Dr. Anton Chuvakin, Security Advisor in the Office of the CISO at Google Cloud, put it:

“As organizations shift from experimentation to full operational deployment, strong security practices and mature governance are emerging as the critical differentiators for successful AI adoption.”

This reframes governance entirely. It isn’t a compliance checkbox that gates releases, but rather the foundation that gives teams the confidence to move quickly.

What does governance actually look like in practice? The report shows that only 26% of organizations have comprehensive AI governance policies today (44% among large enterprises). Most are somewhere in the middle, with partial guidelines or policies under development. The question is what kind of governance infrastructure closes that gap without creating bureaucratic drag.

Agentic AI is accelerating faster than controls

Here’s where the urgency comes in. Agentic AI is different from the AI tools most teams have been using over the past few years. A code completion tool suggests snippets that a developer reviews and accepts. An agentic AI system plans, executes, and iterates on its own. It reads files, runs tests, calls APIs, and makes commits.

This is enormously productive. It also changes the governance equation. When a human reviews every line of code, governance is embedded in the review process itself. When an AI agent makes dozens of decisions autonomously before a human sees the result, governance needs to live somewhere else.

The CSA survey found that 73% of respondents are neutral or not confident in their organization’s ability to execute an AI security strategy. Meanwhile, 60% are deploying or planning to deploy agentic AI within the next year. That combination is worth paying attention to.

DORA’s research on AI-assisted software development confirms the pattern from a different angle: delivery stability tends to decrease as AI usage increases. Speed is going up. Confidence is not keeping pace.

The missing piece, in our view, is runtime control.

When a developer or AI assistant writes a code change, that change needs to be wrapped in controls that are independent of who or what wrote it: progressive rollout, automated safeguards, and instant rollback. This is what we call FeatureOps: the discipline and best practices that let you control software behavior at runtime. It doesn’t matter whether the code was written by a senior engineer, a junior developer, or an AI agent. Every change gets the same governance treatment at the point where it actually reaches users.
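
As a minimal sketch of what that wrapping looks like in practice, here is the pattern with the Unleash Node SDK in TypeScript. The service name, flag name, and payment functions are hypothetical; the URL and token are placeholders for your own instance:

    import { initialize } from 'unleash-client';

    type Order = { userId: string; amount: number };

    // Connect to your Unleash instance (URL and token are placeholders).
    const unleash = initialize({
      url: 'https://unleash.example.com/api/',
      appName: 'checkout-service',
      customHeaders: { Authorization: process.env.UNLEASH_API_TOKEN ?? '' },
    });

    // Hypothetical new code path, perhaps written by an AI agent.
    function newPaymentFlow(order: Order): string {
      return `charged ${order.amount} via new flow`;
    }

    // The existing, stable path that rollback falls back to.
    function legacyPaymentFlow(order: Order): string {
      return `charged ${order.amount} via legacy flow`;
    }

    export function processPayment(order: Order): string {
      // The flag decides at runtime which path each user sees,
      // regardless of who or what wrote the new code.
      if (unleash.isEnabled('new-payment-flow', { userId: order.userId })) {
        return newPaymentFlow(order);
      }
      return legacyPaymentFlow(order);
    }

Because the stable path stays in the code, rolling back is a flag flip rather than a redeploy, and the rollout strategy attached to the flag decides how many users ever see the new path.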

Runtime control is necessary. It’s not sufficient.

But runtime control is only half the picture. Governance also means controlling who can release and who can approve changes, and maintaining a complete record of every action taken. When AI agents are writing code, the humans responsible for reviewing and approving releases become even more important. You need change approval workflows that enforce the four-eyes principle before any flag goes live. You need granular access controls so the right people own the right environments. And you need audit logs that capture every state change, not just code commits, so you can answer “who released what, when, and to whom” during any incident or compliance review.

We built Unleash to bring all of this together. Here’s how the platform maps to the AI governance challenges the CSA report highlights:

  • Controlling who can release AI-powered features: Granular role-based access controls by project, environment, and flag.
  • Enforcing review before changes go live: Change request workflows with configurable approval requirements.
  • Tracking every change for compliance and audits: Detailed audit logs of every state change, login, and API request.
  • Limiting blast radius of AI-generated code: Progressive rollout strategies with gradual exposure controls.
  • Responding instantly when something goes wrong: Kill switches and instant rollback at runtime, no redeployment needed.
  • Automating governance for AI coding assistants: Autonomous Feature Management via the Unleash MCP server.
  • Monitoring production impact of new features: Impact Metrics with automated safeguards that pause unhealthy rollouts.
  • Meeting SOC2, ISO27001, and regulatory requirements: Built-in governance with SSO, SCIM, RBAC, and full audit trails.

A developer or AI assistant writes a code change. The Unleash MCP server evaluates the risk and wraps the change in a feature flag with the right targeting and rollout strategy. The platform progresses the release based on real production signals like error rates and latency. If metrics stay healthy, the rollout advances on its own. If something spikes, it pauses instantly. No manual toggling required.
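
To make the advance-or-pause behavior concrete, here is a hypothetical controller loop in TypeScript. This is a sketch of the logic, not the platform’s implementation: in practice the automated safeguards run inside Unleash, and the Admin API path, strategy ID, threshold, and error-rate source below are all assumptions. It drives Unleash’s gradual rollout strategy, which exposes a flag to a percentage of users:

    const UNLEASH_URL = 'https://unleash.example.com'; // placeholder
    const TOKEN = process.env.UNLEASH_ADMIN_TOKEN ?? '';
    // Assumed Admin API path for updating an existing gradual rollout strategy.
    const STRATEGY_URL = `${UNLEASH_URL}/api/admin/projects/default/features` +
      '/new-payment-flow/environments/production/strategies/STRATEGY_ID';

    const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

    // Stand-in for your observability system, e.g. a Prometheus query.
    async function errorRate(): Promise<number> {
      return 0.002;
    }

    // Set the rollout percentage on the flag's gradual rollout strategy.
    async function setRollout(percent: number): Promise<void> {
      await fetch(STRATEGY_URL, {
        method: 'PUT',
        headers: { Authorization: TOKEN, 'Content-Type': 'application/json' },
        body: JSON.stringify({
          name: 'flexibleRollout',
          parameters: {
            rollout: String(percent),
            stickiness: 'default',
            groupId: 'new-payment-flow',
          },
        }),
      });
    }

    async function progressRollout(): Promise<void> {
      for (let percent = 5; percent <= 100; percent += 5) {
        await setRollout(percent);
        await sleep(10 * 60 * 1000); // let traffic hit the new percentage
        if ((await errorRate()) > 0.01) {
          console.log(`error rate above threshold, pausing at ${percent}%`);
          return; // hold here until metrics recover or a human steps in
        }
      }
      console.log('rollout complete: all users on the new path');
    }

The point of the sketch is the shape of the loop: exposure only increases while production signals stay healthy.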

Governance happens automatically, following the policies your platform team defined.

The skills gap is real. Runtime controls help anyway.

The third finding is worth a closer look: 61% of respondents cited “understanding AI risks” as their top hurdle for AI security, 53% pointed to skills gaps, and 52% said lack of knowledge.

At the same time, the report shows a notable imbalance in how organizations prioritize AI-specific threats. Sensitive data exposure is the top concern at 52%, which makes sense. But prompt injection comes in at just 5%, and data poisoning at 10%. These are risks that security researchers have flagged repeatedly, yet most organizations haven’t built practical defenses around them.

This isn’t surprising. AI security is a genuinely new discipline, and most teams don’t have deep expertise in model-level threats yet. Hiring for these skills is competitive, and building internal knowledge takes time.

But here’s the practical insight: you don’t need every engineer on your team to be an AI security specialist if every AI-powered change goes through progressive rollout with production observability and instant rollback.

Consider what happened to Google Cloud in June 2025. A single backend change introduced a null pointer exception that cascaded into a global outage lasting more than three hours. Google’s own postmortem concluded that if the change had been wrapped in a feature flag, the issue would have been caught in staging. Later that year, Cloudflare committed to enabling more global kill switches after a routine configuration update caused over five hours of downtime.

These are two of the most mature engineering organizations in the world. If they can get caught by a missing runtime control, the pattern applies everywhere. And as AI agents produce more of the code running in production, the blast radius of any single uncontrolled change only grows.

The practical response isn’t to slow down AI adoption until every risk is fully understood. It’s to treat every change as reversible. Progressive rollout limits blast radius. Automated safeguards catch regressions before they reach all users. Kill switches let you disable problematic functionality in seconds. Audit logs capture who changed what, when, and why, satisfying the compliance requirements that 50% of CSA respondents flagged as a challenge.
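
The kill-switch piece of that pattern is simple enough to show directly. A sketch with the Unleash Node SDK again; the flag name and the guarded function are hypothetical:

    import { initialize } from 'unleash-client';

    const unleash = initialize({
      url: 'https://unleash.example.com/api/', // placeholder
      appName: 'recommendations-service',
      customHeaders: { Authorization: process.env.UNLEASH_API_TOKEN ?? '' },
    });

    // Hypothetical AI-powered code path being guarded.
    function generateAiRecommendations(userId: string): string[] {
      return [`personalized item for ${userId}`];
    }

    export function getRecommendations(userId: string): string[] {
      // Kill switch: flipping 'ai-recommendations' off in Unleash disables
      // this path for every user within seconds, with no redeployment.
      // The fallback function makes the flag fail closed if the SDK cannot
      // evaluate it, so the risky path never runs by accident.
      if (!unleash.isEnabled('ai-recommendations', { userId }, () => false)) {
        return []; // safe default while the feature is switched off
      }
      return generateAiRecommendations(userId);
    }

Failing closed is the design choice that matters here: when in doubt, the system serves the safe default, not the new code.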

This is governance that works at the speed AI is moving. And you don’t need a team of AI security experts to implement it.

What this means for your team

Hillary Baron, the CSA report’s lead author, noted that “organizations are shifting from experimentation to meaningful operational use” and that “there are encouraging signs in the progress they’re making.” We agree. The awareness is there. What’s missing for most organizations is the operational infrastructure to act on it.

If you’re running AI-powered features in production, or planning to within the next year, here’s what the data suggests you should be thinking about:

  • Start with governance infrastructure that doesn’t create bottlenecks. Role-based access controls, change approval workflows, and audit trails make it possible for large teams to move fast without losing visibility. Your platform team defines the policies once. Every release follows them automatically.
  • Wrap every change in runtime controls. Feature flags give you progressive rollout, instant rollback, and production observability for every change, regardless of whether it was written by a human or an AI assistant. The flag becomes the governance layer.
  • Invest in automated safeguards. Define thresholds for error rates, latency, and other production signals. Let the system advance or pause rollouts based on real data. This is how you scale governance without scaling the review bottleneck.

The CSA report makes a strong case that governance maturity is the strongest predictor of AI readiness. We’d add one thing: the governance that matters most is the governance that operates at runtime, where your software actually meets your users.

Get started with Unleash to bring runtime controls to your release process today.
