Self-hosting feature flags with Unleash and Helm Charts

Self-hosting feature flags can feel like a superpower. You get the flexibility to run your own stack, keep data inside your boundaries, and scale in ways that match your infrastructure and security model.

Some feature flag providers only offer SaaS. That works fine until you hit enterprise requirements or strict data governance rules. With Unleash, you don’t have to compromise. It’s open source, and you can run it however and wherever you like.

If you’re already investing in Kubernetes, running Unleash inside the same platform gives you consistent security controls, predictable performance, and the chance to treat feature flags like any other internal service. The easiest and cleanest way to do that is with the official Helm charts.

Why self-host Unleash

When you own the deployment, you get full control of the environment. That means your own database, your own secrets management, whatever observability stack you already trust, and network rules that never let flag evaluation data leave your cluster.

You can wire it into internal authentication flows if your company requires SSO for everything. You can also size it based on your real traffic pattern, not the constraints of a vendor’s shared cloud.

And just as important, you avoid lock-in. If your infrastructure evolves, your Unleash deployment can evolve with it.

In other words, Unleash lets you control the stack from top to bottom.

You could run the Docker image manually, but…

The typical approach is to deploy the official Docker image and start stitching things together: a deployment here, a secret there, maybe a hand-rolled PostgreSQL manifest, plus all the small things like probes, scaling config, and ingress rules.

You could absolutely get a production setup this way, but it requires a fair bit of ongoing glue work. Upgrades become manual chores. Changes need careful attention to YAML drift across environments. It is still Kubernetes, but it doesn’t feel like platform engineering.

Helm solves that by packaging best practices into one installable bundle. Instead of juggling multiple manifests, you install one chart and focus on the settings that matter.

Installing Unleash with Helm

The basic install flow is simple. First, add the chart repo, then install:


helm repo add unleash https://docs.getunleash.io/helm-charts
helm repo update
helm install unleash unleash/unleash

You’ll get a running Unleash instance right away. Out of the box, you get liveness and readiness checks, sensible defaults for replicas, and a PostgreSQL instance that works well for testing or small internal deployments.

In production, most teams plug in their own database. Turning off the bundled database and pointing to your managed PostgreSQL, Google Cloud SQL, or Amazon Aurora instance takes only a few overrides in your values file.
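As a sketch, the override might look something like this. The key names below follow the chart's conventions but are illustrative; the hostname and secret name are placeholders, so verify the exact schema against the chart's values.yaml before use:

```yaml
# values.yaml -- illustrative: disable the bundled PostgreSQL and point
# Unleash at a managed database instead (key names may differ per chart version)
postgresql:
  enabled: false

dbConfig:
  host: my-db.cluster-abc123.eu-west-1.rds.amazonaws.com  # placeholder hostname
  port: 5432
  database: unleash
  useExistingSecret:
    name: unleash-db-credentials  # assumed Kubernetes Secret with user/password keys
```

Apply it with `helm upgrade --install unleash unleash/unleash -f values.yaml`.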

You stay fully in control of your persistence layer and backup strategy, which is exactly what you want in an enterprise environment.

High availability and scaling

Unleash plays nicely in horizontally scaled environments. The Helm chart defaults to multiple replicas so that traffic keeps flowing even during upgrades or node drains.

Behind the scenes, the Unleash server is stateless and the state lives in PostgreSQL, which makes scaling out simple. As long as all instances share the same database, any replica can handle API requests or serve the admin UI. For most teams, that means you can increase replicaCount and be confident that new pods will slot in cleanly.

From there, you can start layering on reliability. Pod anti-affinity rules help spread replicas across availability zones. A PodDisruptionBudget ensures that node upgrades or restarts don’t take the entire deployment offline. Teams running in production often use HorizontalPodAutoscaler to adjust capacity dynamically, or pair it with a VerticalPodAutoscaler to fine-tune resource requests over time. The Helm chart supports all of that natively through configuration.
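A hedged sketch of those availability settings in values form, using common Helm chart conventions (the exact keys and defaults should be checked against the chart's values.yaml):

```yaml
# Illustrative availability overrides -- verify key names against the chart
replicaCount: 3

podDisruptionBudget:
  enabled: true
  minAvailable: 2  # keep at least two pods up during node drains and upgrades

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

Because the server is stateless, any combination of these settings is safe to apply without coordination between replicas.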

Observability and multi-region setup

Observability plays a key part here too. Unleash exposes detailed metrics through the /internal-backstage/prometheus endpoint, giving you insights into request throughput, event-loop lag, and overall service health. The chart ships with a configurable serviceMonitor resource for those using Prometheus and the Prometheus Operator, so you can turn on scraping without extra manifests. Once metrics are flowing, it’s straightforward to visualize performance in Grafana or hook into autoscaling systems like KEDA. For example, scaling on event-loop lag provides a more precise signal than CPU utilization for Node.js services, helping you respond to real traffic pressure instead of raw compute load.
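Enabling scraping is typically a small values override. The snippet below is a sketch assuming the Prometheus Operator is installed in your cluster; the key layout is illustrative, so check the chart's values.yaml for the exact structure:

```yaml
# Illustrative: have the chart create a ServiceMonitor so the Prometheus
# Operator scrapes /internal-backstage/prometheus automatically
serviceMonitor:
  enabled: true
  interval: 30s  # scrape interval; tune to your retention and alerting needs
```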

Beyond a single region, Unleash also fits neatly into multi-region architectures. If your database layer supports replication, such as Amazon Aurora Global Database or Google Cloud Spanner, you can run Unleash clusters in multiple regions, each reading from local replicas and writing back to a shared global database. When your database is multi-regional, Unleash can be too. This approach reduces latency for local traffic and improves fault tolerance if an entire region goes down.

Whether you’re distributing pods across zones in one region or extending deployments across multiple regions, the Helm chart gives you the flexibility to design for your desired level of resilience.

For example, using topologySpreadConstraints to evenly distribute Unleash pods across availability zones:


topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: unleash

The key is that Unleash doesn’t impose a single pattern; it adapts cleanly to however you’ve structured your Kubernetes and database layers.

Running on managed Kubernetes services

You don’t need a fancy on-prem cluster to do this. The same Helm chart works on Amazon EKS, Google Kubernetes Engine, Azure Kubernetes Service, and pretty much any conformant Kubernetes platform, including lightweight setups like k3s.

If you’re running GitOps with ArgoCD or Flux, treating Unleash as a Helm chart means your entire feature flag system becomes declarative and versioned. That’s a big win for reliability and audits.

Customizing your deployment

Most production users drop in a custom values.yaml with ingress rules, secrets, database details, and availability settings. From there it fits right into your platform pipeline.

Want TLS handled by cert-manager? Easy.

Need to inject custom environment variables for SSO? Supported.

Prefer a separate secret store like AWS Secrets Manager or Google Secret Manager? Also supported.

In other words, Unleash does not assume how your environment works. It gives you the knobs and lets you dial in the setup that fits your stack.

Security and secrets handling

Self-hosting gives you control, and with control comes responsibility. The nice thing is that Unleash doesn’t fight you here. The Helm charts plug cleanly into the following common security patterns.

Database credentials and secrets

Instead of hardcoding credentials in values files, use Kubernetes Secrets or your preferred secret operator (like External Secrets Operator or SOPS). The Helm chart already supports secret references, so you can inject credentials securely at runtime.
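With External Secrets Operator, for example, credentials can be synced from a cloud secret manager into the Kubernetes Secret the chart references. The manifest below is a sketch: the store name, remote key path, and target secret name are assumptions for illustration:

```yaml
# Illustrative ExternalSecret: sync a database password from AWS Secrets
# Manager into a Kubernetes Secret that the Unleash chart can reference
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: unleash-db-credentials
  namespace: unleash
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secrets-manager  # assumed, pre-configured store
  target:
    name: unleash-db-credentials  # Secret the chart's values point at
  data:
    - secretKey: password
      remoteRef:
        key: prod/unleash/db  # assumed path in Secrets Manager
        property: password
```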

RBAC and service accounts

Each deployment can run with a dedicated service account and minimal RBAC permissions. If your cluster policies require namespace-isolated workloads, network policies, or PodSecurity admission rules, Unleash fits right in.
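For instance, a NetworkPolicy can restrict inbound traffic to the Unleash pods. The sketch below assumes an ingress controller running in an `ingress-nginx` namespace and relies on Unleash's default HTTP port, 4242; adjust both to your cluster:

```yaml
# Illustrative NetworkPolicy: only allow the ingress controller to reach Unleash
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: unleash-allow-ingress
  namespace: unleash
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: unleash
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx  # assumed namespace
      ports:
        - port: 4242  # Unleash's default HTTP port
          protocol: TCP
```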

TLS and ingress

For external access, pair the chart with cert-manager to issue certificates automatically. If you’re running internal-only access, keep it behind a private ingress or mesh gateway. Plenty of teams deploy Unleash strictly inside private VPC networking with no public exposure at all.
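A typical values sketch for the cert-manager pairing looks like this. The ClusterIssuer name and hostname are placeholders, and the ingress key layout follows common Helm conventions rather than a verified schema:

```yaml
# Illustrative ingress values with cert-manager issuing the certificate
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # assumed ClusterIssuer
  hosts:
    - host: unleash.internal.example.com  # placeholder hostname
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: unleash-tls  # cert-manager writes the certificate here
      hosts:
        - unleash.internal.example.com
```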

Unleash Edge support

If you need lightning-fast flag delivery at the edge or across distributed environments, the Helm chart repo also includes a chart for Unleash Edge.

Edge acts as a fast, local cache for flag data and reduces round-trip latency between services and the core Unleash API. That matters when you’re running high-throughput APIs, global workloads, or serverless functions that spin up and down often.

By running Unleash Edge in your cluster, or even at the edge of your network, you keep evaluation latency predictable while still centralizing flag configuration in the main control plane.

It also gives you a clean separation of concerns: the central Unleash instance handles management and auditing, while Edge handles fast delivery and scaling out to traffic-heavy workloads (including offline support).

The nice part is that the Edge chart follows the same conventions as the main chart, so you don’t have to reinvent your deployment approach.

Operational maturity

At some point, every internal service reaches the “okay, this is critical now, we need a real plan” moment. Feature flags definitely fall into that category. Once product teams depend on them to ship safely, downtime or data loss is not an option.


Backups and database durability

Managed PostgreSQL makes this straightforward. If you’re self-managing PostgreSQL, make sure to set up automated backups and periodic restore testing.

Losing feature flag history or strategy configurations can disrupt rollouts.

Disaster recovery patterns

Most teams start with multi-AZ redundancy. For global platforms, a second region with a warm standby database and periodic syncs can make sense.

The good news is that Unleash traffic patterns are predictable, so scaling and failover planning isn’t painful.

Upgrades and version pinning

Helm gives you a sane upgrade path. Pin your chart version in Git, test upgrades in staging, and follow semantic versioning signals in the repo. If you’re running GitOps, you get drift detection and automated rollouts for free.
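With Flux, for example, pinning looks like a HelmRelease kept in Git. This is a sketch: the API version may differ with your Flux release, and the HelmRepository name and chart version range are assumptions:

```yaml
# Illustrative Flux HelmRelease pinning the Unleash chart version in Git
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: unleash
  namespace: unleash
spec:
  interval: 10m
  chart:
    spec:
      chart: unleash
      version: "5.4.x"  # assumed pin; bump deliberately after staging tests
      sourceRef:
        kind: HelmRepository
        name: unleash  # assumed source pointing at the official chart repo
  values:
    replicaCount: 3
```

Because the version is declared in Git, every upgrade is a reviewable diff rather than an ad-hoc `helm upgrade`.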

The long-term reality: Unleash is stable, the upgrade path is smooth, and the operational footprint is much lighter than most internal control plane services. Still, treat it like core infrastructure, because it is.

Platform engineering lens

Feature flags are not just a tool for developers. They are a building block for real platform engineering. Teams building internal developer platforms are increasingly designing “golden paths” for shipping safely, and Unleash fits neatly into that vision.

A typical flow looks like the following.

First, a developer pushes code, CI builds and deploys to a preview or staging environment, automated checks run, and rollout policies are applied. Instead of merging and hoping everything goes well, a feature flag gates the release. Product folks can turn it on only for internal users, then for beta testers, and finally roll out globally once metrics look good.

Second, the internal platform provides this capability as a standard service, not a one-off tool. That makes Unleash part of the same paved-road experience as CI/CD, secrets management, and observability. When you treat feature flags like core platform infrastructure, teams move faster and incidents drop. Developers focus on code; the platform handles safety.

Third, for companies building developer portals like Backstage or Port, Unleash often becomes a native plugin: create a flag right from the service catalog, attach rollout strategies, and track exposure without leaving the portal. That is where feature flags start feeling like part of the platform instead of a bolted-on utility.

Final thoughts

Self-hosting is about control, trust, and being able to grow the system around your needs. With the official Helm charts, running Unleash on Kubernetes stops being a manual YAML exercise and becomes a proper platform-ready deployment.

You keep the flexibility that comes with open source, and you get the operational polish you expect from a production service.

It only takes a few commands to get started, and from there, you can scale, secure, and automate like you would with any other core service in your cluster.
