Streaming flags is a paper tiger
Streaming flags looks lovely in a demonstration environment, but it doesn’t have much use outside of niche cases.
For most users, polling at discrete intervals is more than enough for server-side implementations.
Meanwhile, client-side software usually benefits most from fetching flag values on initialization and at discrete moments in the user experience.
What are we talking about exactly?
Flag streaming is a method of update delivery from the flag service to clients.
What’s distinct about this method is that it leverages a long-lived connection to the service, typically held open for the duration of the application’s life cycle.
Because the connection stays open, the service can push updates to client software instances as soon as changes are made.
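As a sketch of what such a stream actually carries: server-sent events arrive as blank-line-separated blocks of `field: value` lines. The parser below and the JSON payload shape (a flag name plus a value) are illustrative assumptions, not any particular provider’s format.

```python
import json

def parse_sse_events(raw: str) -> list:
    """Decode the `data:` payloads from raw server-sent-events text.

    Assumes each event carries one JSON object on a single `data:` line;
    the "flag"/"value" fields are hypothetical, not a real provider's schema.
    """
    events = []
    for block in raw.strip().split("\n\n"):  # events are separated by blank lines
        for line in block.splitlines():
            if line.startswith("data:"):
                events.append(json.loads(line[len("data:"):].strip()))
    return events

# Two flag-change events, as a long-lived stream might deliver them:
stream = (
    "event: flag-change\n"
    'data: {"flag": "new-checkout", "value": true}\n'
    "\n"
    "event: flag-change\n"
    'data: {"flag": "dark-mode", "value": false}\n'
)
updates = parse_sse_events(stream)
```

In a real deployment the raw text would come from an open HTTP response rather than a string, but the wire format is exactly this simple.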
This looks pretty great in demonstrations: on one side of the screen a flag changes, and almost immediately you see the change manifest on the other side of the screen, in another piece of software.
All this looks super impressive. But what actually happens in practice?
A bit about streaming
Streaming itself is not exclusive to any particular platform. It is a well-established technique, documented openly in a number of places.
Streaming is at its most useful when writing applications that need to be updated on demand. Some use cases include:
- chat applications
- stock tickers
- gaming applications
In these apps, instant delivery is a key part of the user experience. In most other use cases, streaming isn’t necessary, or even a benefit.
So why even bother with streaming?
The short answer is that it has to do with the limitations facing all SaaS offerings.
SaaS products are meant to run at a distance from your infrastructure. This means they inevitably come with a certain degree of latency and unreliability.
If you are bound to a system that is SaaS driven, then a streaming service is a good way to work around those limitations.
We’ve all had times when a site just isn’t loading from our home network but somehow works fine over our mobile network. That’s the kind of reliability issue ubiquitous in the modern internet experience.
With client software, connection quality control is simply out of a provider’s hands. Reliable connections can take time or fail outright at any given moment. This comes with being hosted by another party.
It’s easy to get distracted by the speed of a streaming connection, but really, this is misdirection. You’ll still run into the same challenges.
Sometimes a provider will attempt to make up the difference with massive investments in CDN and edge services on the backend. Be prepared to see the cost of these investments on your own invoice.
Proxies instead of streaming
Another option is to run the flag platform as an adjacent component within your own infrastructure.
When the platform runs adjacent to your services, the priority placed on showcasing (or implementing) streaming tech becomes completely unnecessary.
With this method, polling for discrete changes becomes trivial. Latency remains in the low millisecond range. Plus you won’t be competing for breathing room across the entirety of the world wide web.
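To make that concrete, here is a minimal polling loop. The `fetch` callable and the flag names are stand-ins I’ve made up; in practice `fetch` would be an HTTP GET against a proxy on your own network, which is exactly what keeps the round trip in the low-millisecond range.

```python
import time

def poll_flags(fetch, interval_s, cycles):
    """Poll a flag source at a discrete interval, recording only changes.

    `fetch` is any callable returning the current {flag: value} mapping;
    here it stands in for an HTTP GET against an adjacent proxy.
    """
    known = {}
    changes = []
    for _ in range(cycles):
        latest = fetch()
        for flag, value in latest.items():
            if known.get(flag) != value:
                changes.append((flag, value))  # act only on actual changes
        known = dict(latest)
        time.sleep(interval_s)
    return known, changes

# Simulated proxy responses: the flag flips on the third poll.
responses = iter([
    {"new-checkout": False},
    {"new-checkout": False},
    {"new-checkout": True},
])
known, changes = poll_flags(lambda: next(responses), interval_s=0.0, cycles=3)
```

With the proxy a network hop away, the interval can be short without anyone noticing the traffic.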
This is why SaaS providers who are concerned with service reliability will offer a proxy-style solution. This is especially true when an on-prem alternative isn’t available.
A proxy, or “proxy-like,” solution allows for a number of benefits like:
- simplified network configuration
- increased security options
- service resiliency
That said, proxies come with a cost of ownership. If you roll your own proxy, it can be difficult to know the ultimate cost of running and maintaining it, in both dollars and development time.
Thankfully, many solutions will have a proxy ready to go.
What about client software?
Now comes the time to discuss the elephant in the room.
This all sounds fine for server-side matters, where adjacency makes many latency concerns seem pedantic. Most client software, however, talks to services across public networks, such as cell networks or cable providers.
These networks can be unpredictable. Performance reliability can’t be taken for granted. There isn’t really an advantage to streaming when a network stutters or goes down.
The better approach is to fetch flag values on initialization, cache them, and then run as normal. In other words, the same pattern that almost all service-dependent software already uses.
With this implementation, even the need to poll, never mind stream, pretty much disappears.
In any case, it’s good practice to avoid time-bound interval updates. Instead, set up deliberate events that trigger a value refresh when necessary.
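Sketched out, the pattern is small. Everything here is illustrative: `fetch` stands in for the network call to the flag service, and the flag name is made up.

```python
class FlagClient:
    """Fetch flags once on initialization, then refresh only on deliberate events."""

    def __init__(self, fetch):
        self._fetch = fetch
        self._cache = fetch()  # one fetch at startup ...

    def get(self, flag, default=False):
        return self._cache.get(flag, default)  # ... then every read hits the cache

    def refresh(self):
        # Call this from deliberate moments (login, screen change, app resume),
        # not from a timer.
        self._cache = self._fetch()

# Simulated service: the flag is off at startup and on after a later change.
calls = {"count": 0}
def fake_fetch():
    calls["count"] += 1
    return {"dark-mode": calls["count"] > 1}

client = FlagClient(fake_fetch)
before = client.get("dark-mode")  # cached value from initialization
client.refresh()                  # e.g. the user navigated to a new screen
after = client.get("dark-mode")
```

Reads stay instant and offline-safe because they never touch the network; the network is only involved at moments you choose.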
Wrapping it up
Runtime control flags such as feature flags are outstanding mechanisms for a number of use cases. Real-time delivery is, for the most part, not necessary for them to work well. In most cases, streaming has no benefit over polling.
If streaming is really your thing, that’s totally fine too. Streaming updates via server-sent events is not some arcane magic; it’s a well-established mechanism.
In fact, if your system requires real-time updates, there’s a good chance the systems to implement those updates are already in your infrastructure. If not, setting up real-time updates in your system shouldn’t be terribly difficult.