Rolling deployment vs progressive delivery: Choosing a deployment strategy

Rolling deployment

Rolling deployment is a gradual deployment strategy where new application versions are released incrementally across a fleet of servers or containers. Instead of updating all instances simultaneously, the deployment process replaces old versions with new ones in batches, typically one or a few instances at a time. This approach ensures that the application remains available throughout the deployment process, as some instances continue serving traffic while others are being updated.
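
To make the mechanics concrete, the batching logic at the core of a rolling deployment can be sketched in a few lines of Python. This is an illustrative sketch only: the deploy_to, health_check, and drain callables are hypothetical placeholders for whatever your orchestrator or deployment tooling actually provides.

    import time

    BATCH_SIZE = 2            # how many instances to update at once
    HEALTH_CHECK_RETRIES = 5  # how long to wait for a batch before giving up

    def rolling_deploy(instances, new_version, deploy_to, health_check, drain):
        """Update instances in small batches, halting the rollout if a batch fails."""
        for start in range(0, len(instances), BATCH_SIZE):
            batch = instances[start:start + BATCH_SIZE]
            for instance in batch:
                drain(instance)                   # stop routing traffic to this instance
                deploy_to(instance, new_version)  # install the new version
            for instance in batch:
                for _ in range(HEALTH_CHECK_RETRIES):
                    if health_check(instance):    # instance is serving traffic again
                        break
                    time.sleep(5)
                else:
                    # The batch never became healthy, so stop here; the remaining
                    # instances keep serving the stable version.
                    raise RuntimeError(f"Halting rollout: {instance} failed health checks")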

The rolling deployment strategy provides a balance between deployment speed and risk mitigation. If issues arise during the deployment, the process can be halted, and the remaining instances continue running the stable version. However, during the deployment window, different versions of the application run simultaneously, which may require careful consideration of backward compatibility and database schema changes.

Progressive delivery

Progressive delivery extends traditional deployment strategies by incorporating advanced traffic management, monitoring, and automated decision-making throughout the release process. This approach combines deployment techniques like canary releases, blue-green deployments, and feature flags with real-time observability to gradually expose new features to users based on predefined criteria and performance metrics. The strategy emphasizes controlled exposure and the ability to quickly respond to issues through automated rollbacks or traffic steering.

Unlike simple deployment strategies, progressive delivery treats the release process as an ongoing experiment where the impact of changes is continuously measured and evaluated. Teams can start by exposing new features to a small percentage of users, monitor key performance indicators, and gradually increase exposure based on success criteria. This data-driven approach reduces risk by catching issues early and provides the flexibility to adjust the rollout strategy in real-time based on user feedback and system performance.
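
One common way to implement the "small percentage of users" step is deterministic bucketing: hash a stable user identifier into a fixed range and compare it against the current rollout percentage. The sketch below is a generic illustration of that idea rather than any particular SDK's algorithm, and the feature and user names are made up.

    import hashlib

    def in_rollout(user_id: str, feature: str, rollout_percent: int) -> bool:
        """Deterministically place a user in a 0-99 bucket for a given feature."""
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        # The same user always lands in the same bucket, so raising the rollout
        # percentage only adds users to the exposed cohort; it never reshuffles it.
        return bucket < rollout_percent

    # Expose the hypothetical "new-checkout" feature to roughly 5% of users
    print(in_rollout("user-42", "new-checkout", 5))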

Comparison

Deployment scope

  • Rolling Deployment: Focuses on the technical process of updating application instances across infrastructure
  • Progressive Delivery: Encompasses the entire feature release lifecycle including user exposure and business impact

Traffic control

  • Rolling Deployment: Traffic naturally shifts as instances are updated, with limited granular control
  • Progressive Delivery: Provides fine-grained traffic routing and user segmentation capabilities

Risk management

  • Rolling Deployment: Mitigates risk through gradual instance updates and the ability to halt deployments
  • Progressive Delivery: Manages risk through controlled user exposure, automated monitoring, and intelligent rollback mechanisms

Complexity

  • Rolling Deployment: Relatively simple to implement and understand, focusing on infrastructure concerns
  • Progressive Delivery: More complex, requiring sophisticated tooling for traffic management, monitoring, and automation

Decision making

  • Rolling Deployment: Typically follows a predetermined schedule with manual intervention when issues occur
  • Progressive Delivery: Leverages automated decision-making based on real-time metrics and predefined success criteria

Feature flags in rolling deployments

Feature flags complement rolling deployments by providing an additional layer of control over feature activation independent of the deployment process. During a rolling deployment, feature flags allow teams to deploy code to all instances while keeping new features disabled until the deployment is complete and validated. This separation of deployment and release reduces the complexity of rollbacks, as problematic features can be instantly disabled without requiring a code deployment or interrupting the rolling update process.

When issues arise during a rolling deployment, feature flags enable quick mitigation by allowing teams to disable specific features while keeping the new application version running. This approach is particularly valuable when the deployment process reveals that certain features work well while others cause problems, eliminating the need for a complete rollback to the previous version.
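
As a minimal sketch of this pattern, the snippet below gates the new code path behind a flag checked through the Unleash Python SDK; the server URL, API token, flag name, and checkout functions are placeholders, and the same idea applies to any feature flag client.

    from UnleashClient import UnleashClient

    client = UnleashClient(
        url="https://unleash.example.com/api",               # placeholder Unleash instance
        app_name="checkout-service",
        custom_headers={"Authorization": "<client-api-token>"},
    )
    client.initialize_client()

    def legacy_checkout_flow(user):
        return f"legacy checkout for {user}"

    def new_checkout_flow(user):
        return f"new checkout for {user}"

    def render_checkout(user):
        # The new flow ships with the rolling deployment but stays dark until the
        # flag is enabled, and it can be switched off again instantly if the
        # rollout surfaces a problem, with no redeploy required.
        if client.is_enabled("new-checkout-flow"):
            return new_checkout_flow(user)
        return legacy_checkout_flow(user)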

Feature flags in progressive delivery

Feature flags are fundamental to progressive delivery strategies, serving as the primary mechanism for controlling user exposure to new features throughout the release process. They enable sophisticated targeting capabilities, allowing teams to expose features to specific user segments, geographic regions, or percentage-based cohorts while collecting detailed analytics on feature performance. The flags work in conjunction with traffic routing and monitoring systems to create a comprehensive release control system.

In progressive delivery, feature flags integrate with automated decision-making systems that can adjust feature exposure based on real-time metrics and predefined success criteria. For example, if error rates exceed thresholds or user engagement drops, the system can automatically reduce feature exposure or disable features entirely. This tight integration between feature flags and observability systems creates a responsive release process that can adapt to changing conditions without manual intervention.
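
A simplified version of that feedback loop might look like the following, where the metric lookup and flag-control functions are hypothetical stand-ins for real observability and flag-management integrations.

    ERROR_RATE_THRESHOLD = 0.02       # abort if more than 2% of requests fail
    ROLLOUT_STEPS = [5, 20, 50, 100]  # percentage of users exposed at each stage

    def evaluate_rollout(current_step, get_error_rate, set_rollout_percent, disable_feature):
        """Advance, roll back, or kill the rollout based on one success criterion."""
        if get_error_rate() > ERROR_RATE_THRESHOLD:
            if current_step == 0:
                disable_feature()          # failing at the first stage: use the kill switch
                return None
            set_rollout_percent(ROLLOUT_STEPS[current_step - 1])  # back to last good exposure
            return current_step - 1
        if current_step + 1 < len(ROLLOUT_STEPS):
            set_rollout_percent(ROLLOUT_STEPS[current_step + 1])  # criteria met: widen exposure
            return current_step + 1
        return current_step                # already fully rolled out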

Choosing between rolling deployment and progressive delivery

Rolling deployment offers simplicity and resource efficiency by gradually replacing instances of the old version with the new one, keeping the application available throughout the process. This approach requires minimal infrastructure overhead, and the rollout can be halted or reversed if issues arise during deployment. However, rolling deployments can be risky for breaking changes since both versions temporarily coexist, potentially causing compatibility issues. The deployment process can also be slower for large applications, and there’s limited control over user exposure to the new version during the transition period.

Progressive delivery provides superior control and risk mitigation through feature flags, canary releases, and blue-green deployments, allowing teams to precisely manage which users see new features and when. This strategy enables real-time monitoring, instant rollbacks, and gradual feature exposure based on user segments or metrics. The downside is increased complexity: it requires additional tooling and infrastructure, higher operational overhead, and more sophisticated monitoring and feature flag management systems. Choose rolling deployments for simpler applications with backward-compatible changes and limited infrastructure complexity; choose progressive delivery for mission-critical applications, large user bases, or situations where you need granular control over feature releases and risk management.

What is an example of a rolling deployment strategy?

A rolling deployment strategy involves gradually updating application instances in batches across your infrastructure. For example, if you have 10 web servers running your application, instead of updating all servers simultaneously, you would update 2-3 servers at a time. The process begins by taking a small batch of servers offline, deploying the new version to them, bringing them back online, and then repeating this process with the next batch until all servers are updated. This ensures that most of your application remains available to serve traffic throughout the deployment process, as only a portion of servers are being updated at any given time.

How does rolling deployment compare to canary deployment?

Rolling deployment focuses on the technical process of gradually updating infrastructure instances, while canary deployment emphasizes controlled user exposure to new features. In rolling deployment, traffic naturally shifts as instances are updated with limited granular control over which users see the new version. Canary deployment, on the other hand, deliberately routes a small percentage of traffic to the new version while monitoring performance metrics before gradually increasing exposure. Rolling deployment is simpler to implement but provides less control over user experience, while canary deployment offers more sophisticated risk management through targeted user exposure and real-time monitoring capabilities.
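
At its simplest, the canary side of that comparison is just weighted routing between two pools of instances; the sketch below illustrates the idea with made-up pool names.

    import random
    from collections import Counter

    CANARY_WEIGHT = 0.05  # send roughly 5% of requests to the canary version

    def choose_upstream():
        """Pick the stable or canary pool for a single request, by weight."""
        return "canary-pool" if random.random() < CANARY_WEIGHT else "stable-pool"

    # Rough traffic distribution over 10,000 simulated requests
    print(Counter(choose_upstream() for _ in range(10_000)))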

What are the differences between rolling deployment and blue-green deployment?

Rolling deployment gradually replaces old instances with new ones in batches, maintaining service availability throughout the process but temporarily running multiple versions simultaneously. Blue-green deployment maintains two complete, identical production environments where traffic is switched entirely from the old version (blue) to the new version (green) at once. Rolling deployment requires fewer infrastructure resources since it updates existing instances incrementally, but poses risks during the transition period when both versions coexist. Blue-green deployment eliminates version coexistence issues and enables instant rollbacks, but requires double the infrastructure resources to maintain two full environments.
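
Conceptually, the blue-green side of that comparison is a single atomic switch between two complete environments, which is also why rollback is instant; the sketch below models that switch with placeholder version numbers.

    class BlueGreenRouter:
        """Send all traffic to exactly one of two identical environments."""

        def __init__(self):
            self.versions = {"blue": "v1.4.2", "green": "v1.5.0"}  # placeholder versions
            self.live = "blue"

        def cut_over(self):
            """Flip every request to the other environment in one step."""
            self.live = "green" if self.live == "blue" else "blue"

        def rollback(self):
            """Rolling back is the same flip in reverse."""
            self.cut_over()

    router = BlueGreenRouter()
    router.cut_over()   # all traffic now hits green (the new version)
    router.rollback()   # instant rollback: all traffic back on blue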

How do rolling deployment, blue-green deployment, and canary deployment differ?

These three deployment strategies differ primarily in their approach to risk management and infrastructure requirements. Rolling deployment updates instances gradually in batches, providing resource efficiency but with limited user control during transitions. Blue-green deployment maintains two complete environments and switches traffic entirely at once, offering instant rollbacks but requiring double the infrastructure. Canary deployment focuses on controlled user exposure by routing small percentages of traffic to new versions while monitoring metrics, providing superior risk management but requiring more sophisticated tooling. Rolling is simplest to implement, blue-green offers the cleanest rollback mechanism, and canary provides the most granular control over user experience and risk mitigation.

What is a progressive delivery deployment strategy example?

A progressive delivery strategy combines multiple deployment techniques with advanced monitoring and automated decision-making. For example, you might start by deploying new code using a rolling deployment while keeping new features disabled via feature flags. Then, you gradually expose the new features to 5% of users in a specific geographic region while monitoring error rates, response times, and user engagement metrics. Based on predefined success criteria, the system automatically increases exposure to 20%, then 50% of users. If metrics indicate problems—such as increased error rates or decreased user engagement—the system can automatically reduce feature exposure or disable features entirely without requiring a full deployment rollback. This approach provides fine-grained control over both infrastructure updates and user feature exposure throughout the release process.
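
That example can also be written down as a declarative rollout plan that the automation then executes; the structure and field names below are illustrative rather than any specific tool's schema, and the numbers simply mirror the hypothetical figures in the answer above.

    ROLLOUT_PLAN = {
        "feature": "new-checkout-flow",                 # hypothetical flag name
        "initial_constraint": {"region": "eu-west"},    # start in one geographic region
        "stages": [
            {"exposure_percent": 5,  "hold_minutes": 60},
            {"exposure_percent": 20, "hold_minutes": 120},
            {"exposure_percent": 50, "hold_minutes": 240},
            {"exposure_percent": 100},
        ],
        "success_criteria": {
            "max_error_rate": 0.02,        # abort if more than 2% of requests fail
            "max_p95_latency_ms": 800,     # abort if response times regress
            "min_engagement_ratio": 0.95,  # abort if engagement drops more than 5%
        },
        "on_failure": "reduce_exposure_then_disable",   # no full deployment rollback needed
    }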
