
Two ways to test software with feature flags

TL;DR

There’s no set way to approach feature flags in test suites. In this post we use broad strokes to describe two of the more common approaches. 

In the first, we’ll wrap a flag solution, then mock its functionality in a test environment. The second approach involves coupling the test suite directly with the flagging solution.

The two approaches are quite distinct. They address different needs, and your mileage may vary; the right path depends on your situation. Reach out to the Unleash team if you want advice that takes into account the variables you’re working with.

Introduction

At their core, feature flags are intended to provide quicker, more granular control over the behavior of software at runtime. 

This works because flags can switch between implementations within the same binary. Usually that switch is controlled through settings in the flag provider’s platform.
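To make that concrete, here’s a minimal sketch in TypeScript (the function and flag are hypothetical) of the runtime decision a flag represents:

```typescript
// Minimal sketch (hypothetical names): both implementations ship in the
// same binary, and the flag value picks one at runtime.
function renderCheckout(newCheckoutEnabled: boolean): string {
  return newCheckoutEnabled
    ? "new checkout flow" // new implementation
    : "legacy checkout flow"; // existing implementation
}

// The flag provider's platform decides which branch users actually see.
console.log(renderCheckout(true)); // "new checkout flow"
```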

When you move the decision about which code to run from compile time to runtime, you ship fewer artifacts and fewer binaries.

Organizations embracing this new flow find the effects are felt across the entire software development pipeline: from the developer to the end-user and/or target platform. 

This is disruptive. That’s a good thing: it means the team’s established ways of working are being challenged.

Deployments are no longer the absolute final say in how the platform is operating. Developers no longer need to be wary of committing changes to the trunk. 

Lead developers no longer have to worry so much about coordinating feature branch merges across teams. Perhaps most importantly, users no longer need to worry about whether an update will brick an app. 

This also means feature tests no longer need to be duplicated for different releases to gain accurate coverage. 

We’ve seen a ton of ways to test features with flags in the wild. Today we’ll focus on two of them:

  • The mock approach
  • The platform approach

The “mock” approach to testing feature flags

One way of testing involves creating an alternative instance of the flag solution – in other words, a “mock” flag solution. This way the test suite doesn’t depend on the actual feature flag platform while testing. 

With mocks, control over flag states stays inside the test suite itself. The flag solution takes a back seat to whatever flow the testing team is working with.

Generally, a best practice is to wrap third-party (ideally all external) solutions behind an interface. This makes future changes less likely to push other parts of your codebase into doing things you don’t want them to do. Your interface dictates your usage patterns; the implementation underneath only needs to meet those patterns.

A big benefit is that you can adopt major updates with confidence. You’ll also be able to fix bugs and vulnerabilities with a lot more precision, without duplicating effort.

For example, you could hide any library you want behind a solution-specific “[Organization]’s flags” service or class. Your changes would remain isolated (orthogonal) no matter how complex your wrappers become.
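As a sketch (all names here are hypothetical, and the vendor SDK is a stand-in), such a wrapper might look like this:

```typescript
// Stand-in for whatever vendor SDK you actually use.
interface VendorClient {
  isEnabled(name: string): boolean;
  getVariant(name: string): { name: string };
}

// The wrapper interface: the only flag API the rest of the codebase sees.
export interface AcmeFlags {
  isEnabled(name: string): boolean;
  getVariant(name: string): string;
}

// Production implementation backed by the vendor SDK. Swapping vendors,
// or taking a major SDK update, only touches this class.
export class VendorBackedFlags implements AcmeFlags {
  constructor(private readonly client: VendorClient) {}

  isEnabled(name: string): boolean {
    return this.client.isEnabled(name);
  }

  getVariant(name: string): string {
    return this.client.getVariant(name).name;
  }
}
```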

For the testing group, this means they’re responsible for keeping track of the different flags that can be introduced. That includes the actual values, if you’re using multivariate flags with runtime control.

Sounds complex? It sure is. Complexity is something that just happens when you decouple a flagging solution from a test suite. 

That said, the mock approach can be ideal for unit tests, where you assess a small portion of your application.

You can use it, for example, to test how a component behaves when a flag is off. With multivariate flags, you can use it to test different variants.
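Here’s what that can look like: a unit test swaps an in-memory mock in behind the same interface. This sketch assumes the hypothetical AcmeFlags wrapper and renderCheckout function from the earlier sketches:

```typescript
// Test double for the wrapper: flag states live entirely in the test.
class MockFlags implements AcmeFlags {
  constructor(
    private readonly enabled: Record<string, boolean> = {},
    private readonly variants: Record<string, string> = {},
  ) {}

  isEnabled(name: string): boolean {
    return this.enabled[name] ?? false;
  }

  getVariant(name: string): string {
    return this.variants[name] ?? "disabled";
  }
}

// Behavior with the flag off...
const flagsOff = new MockFlags({ "new-checkout": false });
console.assert(
  renderCheckout(flagsOff.isEnabled("new-checkout")) === "legacy checkout flow",
);

// ...and with a specific variant of a multivariate flag.
const variantB = new MockFlags(
  { "new-checkout": true },
  { "checkout-copy": "variant-b" },
);
console.assert(variantB.getVariant("checkout-copy") === "variant-b");
```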

Testing feature flags using the “platform” approach

The “platform” approach does the opposite of the mock approach: it integrates the test suite directly with the flagging solution.

Here, flags already exist in the software you’re testing, whether or not they’re wrapped. The platform is used in conjunction with the test suite to make sure your team tests all branches. 

The core players for this approach are environments and constraints.

Environments allow unique targeting rules for the same set of flags and values. Constraints refine those targeting rules based on fields in the current context. Not all flagging solutions offer environments or constraints.

Here’s what it looks like in action:

You have a flag called “feature x” in a top-priority project. The project has multiple environments: “production,” “developer-a,” and “testing.”

The same flags exist in each environment, but their states are independent of one another. That includes settings such as targeting.

This way developers can adjust flags as needed. The production environment remains focused on your customers and, well, production.

Your testing team is free to set custom rules in their own environment as they see fit. They can, for example, choose to mirror the production environment. They can also try out their own targeting based on their own needs. 

The team then sets the constraints as necessary to gain appropriate coverage. These determine the context where the team tests different flag variations. 

Some contexts can be very literal: a field like “feature-x-on” paired with true or false, with constraints set up to target that field directly. Contexts can also mirror production’s targeting rules, recreating real-life conditions inside the test environment.
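To illustrate, an integration-style check against the testing environment might look like this with Unleash’s Node SDK (the URL, API token, and flag name are placeholders):

```typescript
import { initialize } from "unleash-client";

// Connect to the testing environment; the API token scopes the SDK to it.
const unleash = initialize({
  url: "https://unleash.example.com/api/",
  appName: "integration-tests",
  customHeaders: { Authorization: "<testing-environment-api-token>" },
});

unleash.on("synchronized", () => {
  // Evaluate "feature-x" under a context the environment's constraints
  // are set up to match, e.g. the literal "feature-x-on" field above.
  const enabled = unleash.isEnabled("feature-x", {
    userId: "test-user-1",
    properties: { "feature-x-on": "true" },
  });
  console.log(`feature-x under test context: ${enabled}`);
  unleash.destroy(); // shut the client down after the check
});
```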

You’ll find that platform testing is ideal for integration tests. It isn’t perfect, however, as it doesn’t work well with many cloud solutions. 

Because it can work offline, a near-side proxy like Unleash Edge can bring some of the better parts of platform testing to cloud users. More on that in an upcoming article.

Caveats

The mock approach requires a bit more work when implementing the flagging solution. You’ll need to define wrappers and a strategy/state-machine pattern, and that takes time.

Because the mock approach decouples flags from the platform, you’ll need to be thorough about observing the state of your flags.

The platform approach, on the other hand, works out of the box without additional infrastructure.

Sometimes you’ll need fast initialization and updates, however. A near-side proxy like Unleash Edge is great for meeting those needs. 

The proxy delivers values on behalf of the platform, among a ton of other benefits. As mentioned earlier, this includes the ability to work offline.

More on Edge soon, as well as a technical deep-dive on what each of these testing methods could look like.

 
