Getting Started With the iOS Feature Flag (Objective-C)

Here’s another post I wrote for Rollout, this time about integrating their feature toggle system with an iOS app written in Objective-C. You can find the original post here.

 

Mobile apps are often tied to backend services and are expected to evolve in parallel with them.

Of course, the backend developers have it a lot easier. They can release a new version of their application pretty much anytime they wish. For many microservice frameworks, the ability to quickly push new API versions is a core feature. It’s expected.

iOS developers aren’t this fortunate. Regardless of how quickly they can turn a release around, everything needs to go through the App Store and users need to update the application. Both of these factors are a constant drag on the release cycle.

How can we work around these obstacles? With feature toggles.

Feature toggles (also known as feature flags) give us a mechanism for adding new behavior without deploying a new release. We can even enable or disable the new functionality when we wish.

So we add a new configuration parameter to our application, set it to “off,” and then flip it on when we’re ready, right? Well, there’s more to it than that. Using feature flags to effectively manage change and improve your users’ experience requires understanding feature flag strategies and tactics.

In this tutorial, we’ll add a feature toggle to a simple iOS app, first as a configuration parameter and then as a managed flag that will demonstrate how to avoid creating technical debt while adding the ability for your application to evolve quickly.

This tutorial assumes familiarity with basic iOS concepts and the ability to set up and use Xcode. The source code for the complete project is available here.

A Basic Feature Toggle

We’ll start with the single view iOS application offered in Xcode.

 

select xcode project for feature flags

Then we’ll add a UILabel to the application’s single view.

Add label to Swift feature toggle application

Next, we need to define the label in ViewController.h with the name greetingLabel.

Finally, we’ll set a value for the label in ViewController.m. Add the assignment in viewDidLoad.

Of course, setting the default value in the UILabel definition is another way to do this, but we’ll define it explicitly throughout the tutorial to make the behavior easy to see.

We’re done setting up the sample application. Let’s build it and run.

Test Swift iOS application

 

Now we can use a feature toggle to modify this application’s greeting.

An iOS Feature Toggle

Let’s imagine we want to replace our greeting with a different one depending on the season. The least common denominator is a boolean flag that toggles one string or another.

So let’s add some logic to how we set the greeting.

Now build and run this version of our application.

Feature toggle in Swift iOS application

We have our first feature toggle!

But this means modifying, building, and getting a release through the App Store whenever we want to change our greeting. It’s also a potential form of technical debt. What happens to these boolean flags down the road? Do they stay in the code, all set to true?

Using a configuration file would at least shave a little bit of time and effort off the release cycle.  We’ll create a property list and control our toggle with a configuration property.

Click your project’s name and then click File|New in the main menu.

 

feature flag objectiv-c add property file

Click on property list and then Next.

feature toggle objective-c name property list

Name the file as shown and click Create.

Next, add a property named isNewYear.

feature flag set property

 

 

We now have a property list named featureflag.plist. In it, there is one parameter: a boolean flag named isNewYear.

We’ll load it and use the property to set our flag. Rather than encapsulate the configuration code in another object or the AppDelegate, we’ll load it right where we use the feature to keep things simple.

Now we can toggle the value of isNewYear to YES or NO via the properties file and we’ll see the proper greeting.

The plist gets us out of modifying code when we want to toggle a feature, but we still need to build and release for each change. We’ll also accumulate debt in the form of unused configuration properties unless we remember to remove them in future releases.

Let’s look at a better way to change—and more importantly—manage feature toggles.

Getting Started with Rollout

First, before you can follow along, you’ll need to create a free Rollout account here.

After you’ve finished registering, sign in to your Rollout account and create a new application.

Rollout new Feature Toggle iOS Application
Be sure to select iOS, as shown above.

Next, you’ll see instructions for adding Rollout to your application.

Install feature toggle api iOS

First, download rox.zip, unzip it, and add the rox directory to your project’s directory tree.

Next, add RoxSdk.xcodeproj and ROXCore.framework to your project.

Then select Add Files to… from the project navigator.

add files to project

Now select the files from the RoxSdk directory in the zip file output.

select rollout files

 

You’ll see the ROXCore.framework icon in the project navigator. Add those files.

Next, we need to remove the framework from the project’s linked frameworks. Select your project’s name at the top of the navigator tab.

Select feature flag project

 

 

Scroll down in the application information page to Linked Frameworks and Libraries.

Xcode linked frameworks

 

 

Next, select ROXCore.framework and then click the minus to remove it.

We want to add it back to embedded binaries. Click the plus in Embedded Binaries.

embedded binaries

Then select ROXCore.framework and ROX.frameworkiOS and click Add.

Last, we need to set up a build step to strip out unneeded libraries from our production application.

First, add a script build phase.

Add script

Then locate the strip_frameworks.sh script.

Locate script

And drag it into the script target.

Drag in script

Initialize the API

The Xcode project is ready now and we can initialize the API.

Finally, click Setup SDK back at the Rollout dashboard.

Setup Feature Toggle SDK

The instructions in this dialog are for Swift. We’ll need to make some adjustments for Objective-C.

First, include the import statement for the RoxCore.h at the top of AppDelegate.m.

Then add the call to ROX setupWithKey to didFinishLaunchingWithOptions. Pass it the unique initialization key Rollout assigned to our application.

The top of AppDelegate.m should look similar to this:

Now we can run the application to initialize Rollout. Click Activate App on the Rollout dashboard. After the application builds and runs, you are rewarded with this:

Feature flag success

Add a Managed Feature Flag

Now we can add a managed feature toggle. If we look at our Rollout dashboard and check flags, we see this:
Create Spring Feature Toggle image
There are no flags and there is no way to add one. That’s because we create them from our application, not from the dashboard.

In Objective-C, feature flags are ROXFlags and are managed in RoxBaseContainers.

First, add a header named MyContainer.h to the project.

Then add the corresponding implementation.

This gives us a single flag named isNewYear with a default value of false. It’s managed in MyContainer, which we need to register with Rollout.

We register the container with the API before calling ROX setupWithKey.

Modify AppDelegate with the calls to create and register the container.

We pass two arguments in the message: a name for the container and an instance.

Run the application and then go to the Flags menu item in the Rollout dashboard.

Feature Flag in Rollout Dashboard

The new flag is there. Both the container name and the flag are on the dashboard.

Reading a Managed Feature Flag

Next, we’ll update the application to use the RoxFlag instead of a property from a property list.

ROX retrieves values asynchronously and caches them. On startup, it extracts the latest value from the cache and initiates a fetch from the network in parallel. It will also refresh the cache when the application regains focus.

This means that the first time we run our application, we will immediately get the default value for our flag (false for toggles unless we override) and then the value set at Rollout upstream will be retrieved later.

We want to see the value of the flag before we load our view, so our label is up-to-date when the application opens.

We’re going to create a configuration object for ROX and pass it to ROX setupWithKey.

First, we create a RoxOptions. This holds configuration settings that we pass to setupWithKey.

One of the possible options is a callback message that ROX sends when the latest configuration is fetched: onConfigurationFetched. It is passed a ROXFetcherResult that indicates whether or not the fetch was successful.  In this example, we are assuming it was successful.

When ROX sends the message, we use NSNotificationCenter to notify any interested objects that new information has been retrieved.

There are a few advantages to using NotificationCenter instead of trying to call our view directly:

  • Since this callback will be called on startup, our ViewController probably hasn’t been created yet.
  • If we add more controllers to the application later, things will start to get complicated.
  • Threading issues are delegated to listeners (as we’ll see).

Next, we’ll modify ViewController to process the update message and set the label.

viewDidLoad is now subscribing to update messages and delegating them to a method named updateLabels.

We moved control of our label to updateLabels, where we get a reference to AppDelegate to access its copy of MyContainer and use it. However, we have to dispatch this task to the main thread because only the UI thread can modify the view.

Finally, isNewYear provides us with a block we use to set the label appropriately.

Managing a Feature Flag

We manage flags by adding them to experiments. An experiment is a set of criteria for controlling flags (and other variables) in production.

First, click on Production in the left-hand side menu and then click Experiments. This will bring up a screen with a Create Experiment button. Click that and fill out the new experiment window appropriately.


Select Set Audience.

 

And we see a console for setting flags to true, false, or split. Leave the flag set to False for now.

Run the application and you see “Hello, World!”

Next, change the flag to True on the dashboard and restart the application.

Feature toggle in Swift iOS application

We see the Happy New Year greeting!

Flip it back to False and instead of stopping the app, click the phone’s home button.

Then tap on the application icon to open it again. The label now says Hello World!

We can change the behavior of the application without touching code and releasing a new version. We can even alter behavior while the application is running on a client’s device.

Before we move on, let’s take a look at the experiment on the console again. Flip the flag from False to Split.

Split Settings for a Feature Toggle

This option is where the term “experiment” comes from. We can distribute a flag’s value to a percentage of users. It’s a mechanism for implementing A/B testing in your application.

This is the power of feature flags!

Extra Credit: Configurations

Tying the contents of a label to a true/false flag isn’t what we would do in a production application but it kept the introduction to a boolean flag focused and simple.

Let’s take a look at a better mechanism for setting a label. Rollout supports managing configuration values from the dashboard.

Let’s add a new field to MyContainer.

First, add a property to the MyContainer’s header file.

Then initialize it in the implementation file.

RoxConfigurationString is precisely what it sounds like. It’s a string that is controlled by Rox.

Build and run the application again and then go back to the Rollout dashboard.

Under Production and Configurations, we see our new value.

Feature toggle configuration

If we click on it, we can set it to a value like so:

set feature toggle value

 

Select >= for the criteria and enter “Happy New Year!” (without quotes).

Then we need to make a small code change. Change updateLabels to use the value of our new property.

And run the app. We see Happy New Year!

Let’s change it.

Set Feature Toggle to w00t

Close and open the application. No need to restart or rebuild: our notification mechanism will handle the change.

Feature toggle developers say w00t!

We can change our application config with Rollout!

Conclusion

This guide demonstrates how to get started with Rollout with iOS and Objective-C. The API supports any iOS platform that runs Objective-C, including Apple TV. Rollout’s documentation has more information about how you can do a great deal more with flags, configuration settings, experiments, versions, and grouping users.

We’ve written a simple application that demonstrates how feature flag management can help you manage your projects and eliminate unnecessary releases. Try it out on your own!

 

Photo credit:  Liam Desic

 

Agile Release Train Smells – The Most Common Mistakes

Here’s a post I wrote for Enov8 a while back. You can find the original here.

Whether your organization is starting an agile transformation now or is well on its way, there are always pitfalls.

Scaled Agile Framework (SAFe) is a big leap for most organizations. After all, even if some of your development teams were already using agile methodology before the change, SAFe is an enterprise program with its own risks and rewards.

Let’s look at some of the most common, and most painful, mistakes.

Teams Blocked on External Resources

An Agile Release Train (ART) is “a virtual organization that plans, commits, and executes together.” SAFe describes the team as cross-functional, composed of people with the skills needed to execute their mission. But does that always happen?

Large organizations can be, well, a little balkanized. Specialists in areas such as security, networking, and database management often sit in separate reporting lines, or management cloisters them in Centers of Excellence that produce policy and procedure instead of working with teams.

So, release trains end up stopped on a siding waiting for external resources instead of executing projects.

SAFe places shared services specialists into pools of resources, rather than dedicating them to specific ARTs. SAFe recognizes that these resources are expensive and that avoiding duplication and introducing new specialties with Centers of Excellence benefits enterprises. However, these resources are part of the release train, taking part in planning, execution, and feedback. If they remain siloed as external resources and answerable to different priorities, trains will inevitably end up blocked.

If external resources block your teams, maintaining cadence will be near impossible.

Development and Release Concerns Intertwined

The basic building block of agile development is the iteration. SAFe organizes these iterations into Program Increments (PIs). We achieve cadence by planning and executing PIs at predictable intervals.

A regular cadence makes planning simple. If a feature doesn’t make it into an increment, its delivery can still be predicted by stakeholders since the next PI will begin and end in a predictable timeframe.

Meanwhile, releases run on their own schedule. An organization may issue a new release at the end of each PI, less frequently, or even in the midst of a PI, depending on their unique needs.

There is a crucial separation of concerns here; don’t intertwine Program Increments and releases. They can and should be planned well in advance, and perhaps in parallel, but the ART is about maintaining cadence and iterating through PIs. Its role is not to feed a release schedule or change direction midstream to make a date.

SAFe’s structure ensures that teams deliver new value regularly. We achieve this with planning and execution, not a weeks-long death march at the end of each quarter.

Companies accustomed to Waterfall can have difficulties grasping this separation of concerns. Once something is finished and (hopefully) tested, the urge to release can be almost irresistible, leading to cadence-breaking distractions and out-of-band feedback.

And then there is the critical new feature. A competitor releases something new or a new technology comes on the scene, and we need to get our answer out now. This was a problem in the Waterfall methodology, and it still is with Agile.

If releases and PIs are intertwined, development will suffer.

Broken Loops

Agile’s tight loops are an essential part of iterating effectively. Sprints timed at regular intervals sustain effort and are optimized with constant feedback. Steady feedback means short, productive, and focused scrums, retrospectives, and planning sessions.

Break this feedback loop and effort diminishes or maybe even founders.

Teams may defer or even skip testing entirely in the name of finishing an iteration on time, or so they can move on to the next, more interesting thing. This isn’t a new problem or one that is unique to an ART, but it is one that will break a feedback loop.

Reorganizing a development team during a project isn’t a new problem either. As long as there have been large organizations, there have been better organization charts. Breaking up a team will always cause delays. With an ART, it’s a derailment and a broken loop.

Steady cadence and tight loops require short sprints and reasonable program increments. Long sprints, or overfull increments that try to fit 20 pounds of requirements into a 10-pound bag, can break up cadence and diminish the usefulness of feedback.

Similarly, terminating a sprint, or worse a PI, and redirecting effort means breaking cadence completely. It’s the same risk we saw when release and development concerns become intertwined.

The larger the organization, the higher the number of possible reasons for open loops and broken cadence. SAFe operates in the House of Lean. There’s a reason for that.

Bureaucracy

Complaints about excess bureaucracy in large enterprises are common enough to be almost a cliché. Almost, but not entirely.

SAFe works best when ARTs operate as small and focused agile teams. Once they have set the scope of their PI, it’s time to get to work and limit meetings to scrum and reporting to project tracking tools. When the PI is over, it’s time to review the results, set a new scope, and get back to work.

Traditional organizations struggle with letting go of authoritative management models. Sometimes they fall into a “dotted-line” scheme or create a series of hurdles erected in the name of “governance.” These efforts can stymie progress by creating unnecessary meetings and paperwork.

Wrong Focus

Agile often leads to increased efficiency, and for many organizations, improved efficiency always points to an opportunity to reduce costs.

We already touched on how limiting access to specialists and shared services with reporting lines can sabotage SAFe efforts. Understaffing them in the name of containing costs and walling them off in a Center of Excellence that can’t do much more than issue policies and papers serves no one.

ARTs need to have cross-functional staff and operate as independently as possible, but this can’t come at the cost of quality, security, and technical depth. Efficiency and agility are competitive advantages, not cost-containment measures.

An unhealthy focus on costs also tends to lead to authoritative management and the bureaucracy that we discussed above. While teams appear to be able to operate independently, a desire for compliance and predictability turns into distractions and a drag on productivity that sacrifices agility for cost savings.

Conclusion

These mistakes have a common thread: not committing to the process and compromising SAFe’s core values. SAFe has built-in flexibility, with four basic configurations and procedures that leave plenty of room for different needs. But when we compromise the core values, the process will fail.

If you are interested in learning more about implementing an Agile Release Train in your organizations, then speak to Enov8 about Enov8 Release Management. Enov8 RM is a complete platform that takes you across the Release Spectrum from ART (enterprise program-centric) through to Implementation Planning (typically project-centric), System Deployment Operations (system and component-centric), and Automation.

(Photo by Lee Hull on Unsplash)

How to Measure Lines of Code? Let’s Count the Ways

Here’s a post I wrote for NDepend a while back.  The original post is here, on their blog. NDepend’s tools for .Net are very cool and made me jealous. We don’t have anything close to that powerful for Java.

This post was a lot of fun to write, since counting lines of code is one of those things you don’t think about until you start to think about it. Then you can’t stop.

There are a few ways to count lines of code, and they each have their advantages and disadvantages.

Much of the difference comes down to defining what a “line” is. Is a line a literal line in the source file, a logical statement in the language we’re using, or an executable instruction?

Let’s take a look at three metrics:

  • Source lines of code—the number of lines of code in a method, skipping comments and blank lines
  • Logical lines of code—the number of statements, ignoring formatting  and often counting a line as more than one statement
  • IL instructions—the number of instructions that the code compiles to

Is one better than the other? It depends on what you’re trying to measure.

Source Lines of Code

The most direct way to count lines of code (LOC) is to, well, count lines of code.

Our IDE tells us how many lines of text a file has and displays a count in one of the margins. It’s a useful metric to have: a quick way to see how long a given method or object is. It gives us an ongoing indicator as to when things might need to be broken down into smaller parts or refactored a bit. Chances are we have a feel for when things are getting too long, but seeing an ongoing count on the side of the screen often helps.

It’s also relatively easy to use an external tool such as wc from GNU Coreutils or any one of many other utilities to get a quick count of the number of lines in a group of files.

But regardless of how we count raw lines of code, we’re still including comments, package statements, using statements, and even blank lines. If we’re working in our editor, we have to do some quick arithmetic in our heads (or on a post-it) if we want an accurate count. If we’re inheriting a large codebase, lines of code might be indicative of the sheer scale of what we’ve just been handed, but it’s not necessarily a good indicator of complexity. Maybe the previous owner liked vertical white space and fully qualified namespaces.

So the next step is to try to filter out comments and empty lines and count what is often called source lines of code (SLOC). This is more accurate than a raw count, but there’s still some built-in uncertainty. We’re still counting package and using directives, which may be interesting but can mostly be considered noise.

And of course, there’s the matter of coding style. Is this two lines of code or four?

Should it be counted differently than a file written by someone that prefers K&R-style indenting?

Logical Lines of Code

Rather than trying to count lines that contain code, a different approach is to calculate statements.

A simple algorithm for C/C++, Java, and C# is to count semicolons rather than carriage returns. This method filters out comments and blank lines and renders different formatting conventions moot. But it introduces a few quirks of its own. A for loop is counted as two statements (it contains two semicolons), while a while loop isn’t counted at all. Neither are the branches of switch and if/then/else statements.

This leads to developing a program that understands the language it’s counting. It has to recognize keywords and intelligently tally them.

Answering the question of how many lines a for loop is worth means assigning weights to each keyword. Logically, a for loop is worth whatever a while loop is, which is the same as a do/while. Very quickly, counting lines becomes parsing source code.

There are many tools for counting logical lines of code. A quick Google search brings up at least a dozen in various states of disrepair.

That’s because there’s a better way to count statements, especially if you’re working in the .NET environment.

IL Instructions

.NET compiles code into Microsoft’s Common Intermediate Language (CIL). Counting instructions in CIL avoids many of the problems presented by trying to count by parsing source files.

The most apparent advantage is there are no longer any formatting questions. While IL is still human-readable, it’s machine-generated with consistent formatting.

Counting IL statements gives us a reliable metric without having to worry about the specifics of the high-level language. Rather than counting lines of source code, we measure the number of executable statements our code generates in the runtime environment. So, the question of how to count different styles of loops and comparisons is answered. We do it by counting the number of instructions each of these constructs compiles to.

There are things to consider when we’re using this metric, though. While IL is human-readable, it’s not a high-level language like C#. One line of code in C# is often many more than one in IL.

At the same time, IL does not include essential parts of our code. Interface definitions and abstract methods don’t compile directly to code, but they do contribute to complexity and human overhead.

NDepend’s NbLinesOfCode

NDepend combines IL instructions with your source code to give you an accurate count of logical lines of code in a method.

Program database (PDB) files contain sequence points that correspond to locations in the source file for operations such as setting breakpoints in the debugger. They also provide NDepend with a way to correlate lines in the source file with executable code.

NbLinesOfCode is a correlation, not just the count of sequence points relabeled as something else. NDepend takes each sequence point in the PDB, examines the associated line in the source code, and then counts the line if it’s relevant. For example, curly braces entering and exiting a loop can have a breakpoint assigned to them, but they’re not a line of code.

Since NDepend derives the count from IL, it doesn’t include interfaces, abstract methods, or enumerations. It’s a count of lines associated with executable code only.

This metric is a best-of-both-worlds approach; it completely bypasses issues with differences in formatting while providing a very accurate count of how many executable statements a unit of code contains. Since NDepend makes it simple to see this number for individual methods, it’s easy to detect when a method is doing too much.

Comments

NDepend also uses the PDB to identify comments. It calculates a count of comments for methods, types, namespaces, and assemblies and makes the count available as NbLinesOfComment. The number calculated for each item includes the comments in their implementation or definitions—in other words, in between the braces—so comments outside of these spaces don’t count.

For convenience, PercentageComment is also available. The formula is what you would expect:
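It works out to the share of comment lines among the lines being counted, roughly:

PercentageComment = 100 * NbLinesOfComment / (NbLinesOfComment + NbLinesOfCode)

Check NDepend’s documentation for the exact definition.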

NbILInstructions

The number of IL instructions is available as NbILInstructions. This count varies with compiler options and is typically much higher than the number of lines of code. According to NDepend, a seven-to-one ratio of instructions to lines of code is a decent one to aim for.

Conclusion

In this post, we went over a few different ways to count code and the advantages and disadvantages of each approach. Then we saw how NDepend provides three metrics that help us measure lines of code and comments.

So what should you do with this knowledge? Well, rather than simply watching a number increment in your editor as you write, decide what you need to measure. Then you can use that information to improve your code.

Docker Tutorial: Get Going From Scratch

This is a post I wrote for Stackify a while back. You can find the original here. Docker is one of the most exciting technologies I’ve seen in a long time. I enjoy working with it.

Docker is a platform for packaging, deploying, and running applications. Docker applications run in containers that can be used on any system: a developer’s laptop, systems on premises, or in the cloud.

Containerization is a technology that’s been around for a long time, but it’s seen new life with Docker. It packages applications as images that contain everything needed to run them: code, runtime environment, libraries, and configuration. Images run in containers, which are discrete processes that take up only as many resources as any other executable.

It’s important to note that Docker containers don’t run in their own virtual machines, but share a Linux kernel. Compared to virtual machines, containers use less memory and less CPU.

However, a Linux runtime is required for Docker. Implementations on non-Linux platforms such as macOS and Windows 10 use a single Linux virtual machine. The containers share this system.

Containerization has enjoyed widespread adoption because it provides:

  • A consistent test environment for development and QA.
  • Cross-platform packages called images.
  • Isolation and encapsulation of application dependencies.
  • The ability to scale efficiently, easily, and in real time.
  • Enhanced efficiency via easy reuse of images.

We’ll look at these basic concepts as we install the Docker tools, and create images and containers.

Get Started with Docker

We’ll start by installing the Docker desktop tools found here. Download the correct installer for your operating system and run the installation.

Running a container

Once we install the tools, we can run a Docker image:

output of docker hello-world image

docker run hello-world does exactly what it sounds like. It runs an image named “hello-world.”

Docker looks for this image on our local system. When it can’t find the image, Docker downloads it from Docker Hub for us.

Hello-world displays a message telling us everything’s working. Then it spells out the process for us before recommending some additional steps.

Under the covers

Let’s take a look at a few more Docker commands that tell us more about the environment.

docker ps -a lists the containers on our system:

output of docker ps -a

From this, we can see that the hello-world container is still present on our system. The status column tells us that it’s exited. The names column has a name, kind_bose, that Docker assigned to the container for us. We’ll cover container names below.

Let’s run this image again with docker run hello-world. The output is almost the same…

output from docker hello world

…except this time we don’t see information about downloading the image. It was already available on our system.

But what does docker ps -a show us now?

output of docker ps -a with 2 containers

We see two stopped instances of hello-world, with two different names. Docker created an additional container. It didn’t reuse the first. When we told Docker to run an image named hello-world, it did exactly that; it ran a new instance of the image. If we want to reuse a container, we refer to it by name.

Reuse a container

Let’s try starting one of the stopped containers:

ouput of restarting hello world container

This time, we used docker start --attach <container name> instead of docker run. We use the start command, and rather than naming the image, we specify the name of a container that’s already loaded. The --attach option tells Docker to connect to the container output so we can see the results.

We stop containers with docker stop <container name> and remove them with docker rm <container name>. We’ll take a look at that below when we work with applications designed to keep running in the background.

If we check docker ps again, we still see two containers.

Let’s run a container that doesn’t exit immediately. Hello-world’s instructions gave us an interesting example:

output from Ubuntu container

With a single Docker command, docker run -it ubuntu bash, we downloaded an Ubuntu Linux image and started a login shell as root inside it. The -it flags allow us to interact with the shell.

When we open another window and list containers, we see a different picture:

docker ps -a with running container

The Ubuntu container’s status is Up. Let’s see what’s going on inside:

docker top

docker top looks inside the container and shows us the running processes. The Ubuntu container is running a single process—the root shell.

Let’s look at one last Docker command before we create a container of our own:

docker image ls

docker image ls produces a listing of images on our system. We see the ubuntu image and a single hello-world image, since we only needed that one image to run two containers.

Share system resources with a container

So far, we’ve run a couple of self-contained images. What happens when we want to share local resources from our host system with a container? Docker has the ability to share both the file system and the networking stack with containers.

Let’s create a web server that serves a web page from the local filesystem. We’ll use a public Nginx image.

First, we need an HTML file to display when we connect to the web server. Start in an empty directory that we’ll call my-nginx and create a single subdirectory named html. Inside html, create index.html:

 

 

<html>
<body>
<!-- A minimal page for the Nginx container; any HTML that displays a greeting will do. -->
<h1>Hello, World!</h1>
</body>
</html>

 

 

We’re ready to go. Here’s our command line:

$ docker run -v /full/path/to/html/directory:/usr/share/nginx/html:ro -p 8080:80 -d nginx

When we execute this command line, we see Docker download the Nginx image and then start the container.

We used four command line options to run this container:

  • -v /full/path/to/html/directory:/usr/share/nginx/html:ro maps the directory holding our web page to the required location in the image. The ro field instructs Docker to mount it in read-only mode. It’s best to pass Docker the full paths when specifying host directories.
  • -p 8080:80 maps network service port 80 in the container to 8080 on our host system.
  • -d detaches the container from our command line session. Unlike our previous two examples, we don’t want to interact with this container.
  • nginx is the name of the image.

After executing this command, we should be able to reach the web server on port 8080:
Our test page in Chrome
We see our test page! You can also access the page from other devices on your network using your host system’s IP address.

When we ran the Nginx image, we needed to tell it where to get the web files. We did this by mounting a directory on our host system to a directory inside the container, overriding the files that are already inside the image. Docker also supports volumes, which can contain filesystems and be shared between containers.

We also needed to map port 80 in our container to a port on our host system so the web server can communicate with the outside world. Containers don’t automatically have access to the host network. With our port mapping directive, the container can be accessed via the host network. Since we only mapped this port, no other network resources are available to the container.

This exercise illustrates one of Docker’s key advantages: easy reuse of existing resources. We were able to create a web server in minutes with virtually no configuration.

Stop and remove a container

Our web server is still running:

list docker container

We can stop it with docker stop

$ docker stop compassionate_ritchie

…and remove the container with docker rm.

$ docker rm compassionate_ritchie

After running these two commands, the container is gone:

stop and remove docker container

Create a Docker image

Now let’s build on this example to create an image of our own. We’ll package the Nginx image with our html file.

Images are created with a Dockerfile, which lists the components and commands that make up an image.

In my-nginx, create a Dockerfile:

FROM nginx

COPY html /usr/share/nginx/html

This Dockerfile contains two instructions:

  1. First, create this image from an existing image, which is named nginx. The FROM instruction is a requirement for all Dockerfiles and establishes the base image. Subsequent instructions are executed on the base image.
  2. The second instruction, COPY, tells Docker to copy our file tree into the base image, overriding the contents of /usr/share/nginx/html in the base image.

Next, build the image:

$ docker build -t mynginx .

Sending build context to Docker daemon 3.584kB

Step 1/2 : FROM nginx

---> b175e7467d66

Step 2/2 : COPY html /usr/share/nginx/html

---> Using cache

---> a8b02c2e09a4

Successfully built a8b02c2e09a4

Successfully tagged mynginx:latest

We passed two arguments to build:

  • -t mynginx gave Docker a tag for the image. Since we only supplied a name, we can see that Docker tagged this build as the latest in the last line of the build output. We’ll look more closely at tagging below.
  • The final argument, dot (or “.”), told Docker to look for the Dockerfile in the current working directory.

The build output shows Docker using the nginx image and copying the contents of html into the new image.

When we list images, we can see mynginx:
docker image ls with view of our new image

Run a custom image

Next, we run our new image:

$ docker run --name foo -d -p 8080:80 mynginx

Let’s break that command down.

  • --name foo gives the container a name, rather than one of the randomly assigned names we’ve seen so far.
  • -d detaches from the container, running it in the background, as we did in our previous run of Nginx.
  • -p 8080:80 maps network ports, as we did with the first example.
  • Finally, the image name is always last.

Now point your browser at http://127.0.0.1:8080 and you can see the test web page again.

While the web server is still running, let’s take a look at docker ps:

docker ps with web app running

We can see that the ports column has the mapping we used to start the container, and names displays the container name we used.

We’ve created a self-contained web server that could easily contain a complete set of web documents instead of only one. It can be deployed on any platform that supports Docker.

Create a more customized image

Each Docker image executes a command when it’s run. In our Nginx Dockerfile, we didn’t define one, so Docker used the command specified in the base image.

Let’s try a slightly more complicated image that requires more setup and a specific command instruction.

Start in another empty directory. This time, we’ll create two new text files.

First, we’ll create a small Python script named app.py:

from flask import Flask
import os
import socket

app = Flask(__name__)

@app.route("/")
def hello():
    # Build a small HTML response containing a greeting and the container's hostname.
    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname())

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=4000)

This script creates a web server listening on port 4000 and serves a small HTML document with a greeting and the container’s hostname.

Next, we’ll create a Dockerfile:

# Use an official Python runtime as a parent image

FROM python:2.7-slim

WORKDIR /app

ADD . /app

RUN pip install --trusted-host pypi.python.org Flask

ENV NAME World

CMD ["python", "app.py"]

This Dockerfile starts with an image that contains a Python runtime. We can see from the name that it provides version 2.7 in a slim configuration that contains a minimal number of Python packages.

Next, it establishes a WORKDIR (working directory) named /app and ADDs the current working directory to it.

After adding the script to the image, we need to install the Flask Python package, the library we use for the web server. The RUN instruction executes pip install for this. Dockerfiles can run commands as part of the image build process.

Next, it sets the environment variable NAME, which is used in the HTML page returned by app.py

And finally, the Dockerfile specifies the command to run when the image is run. CMD accepts a command and a list of arguments to pass to the command. This image executes the Python interpreter, passing it app.py.

Let’s build this image:

$ docker build -t mypyweb .

Sending build context to Docker daemon 4.096kB

Step 1/6 : FROM python:2.7-slim

---> b16fde09c92c

Step 2/6 : WORKDIR /app

---> Using cache

---> e8cfc6466e29

Step 3/6 : ADD . /app

---> Using cache

---> b0ed613be2d4

Step 4/6 : RUN pip install --trusted-host pypi.python.org Flask

---> Using cache

---> 255f51709816

Step 5/6 : ENV NAME World

---> Using cache

---> d79d78336885

Step 6/6 : CMD ["python", "app.py"]

---> Using cache

---> 687bc506dd46

Successfully built 687bc506dd46

Successfully tagged mypyweb:latest

Run our Python image

$ docker run --name webapp -p 8080:4000 mypyweb

Let’s navigate to 8080 again with a browser:

Python web page in Chrome

We see our new web page. We’ve created another portable web server with just a few lines of Python!

Pass environment variables

Our Dockerfile set an environment variable…

ENV NAME World

…which the Python script uses in this greeting:

html = "<h3>Hello {name}!</h3>" \
       "<b>Hostname:</b> {hostname}<br/>"

We can override this variable from the command line:

$ docker run --name webapp -p 8080:4000 -e NAME="Dude" mypyweb

Then look at the web page again:

web page with different greeting

Share an image

As we’ve been running images and using them as the basis for our own, we’ve seen Docker download them from Docker Hub:

Step 1/6 : FROM python:2.7-slim

2.7-slim: Pulling from library/python

b0568b191983: Pull complete

We can upload our own images to Docker Hub for distribution, too.

The first step is to create an account on Docker Cloud. If you don’t already have an account, go and create one.

Next, we’ll log in to the Docker registry:

$ docker login

Username: ericgoebelbecker

Password:

Login Succeeded

We’ll upload mypyweb to Docker Hub.

Before we do that, we should tag it. The format for Docker tags is username/repository:tag. Tags and repository names are effectively freeform.

$ docker tag mypyweb ericgoebelbecker/stackify-tutorial:1.00

If we list our images now, we see this tag:

REPOSITORY TAG IMAGE ID CREATED SIZE

ericgoebelbecker/stackify-tutorial 1.00 0057736e26ce Less than a second ago 150MB

mypyweb latest 0057736e26ce Less than a second ago 150MB

mynginx latest a8b02c2e09a4 41 hours ago 109MB

nginx latest b175e7467d66 4 days ago 109MB

python 2.7-slim b16fde09c92c 3 weeks ago 139MB

Note that our image tag and mypyweb have the same image ID and size. Tags don’t create new copies of images. They’re pointers.

Now we can push the image to Docker Hub:

$ docker push ericgoebelbecker/stackify-tutorial:1.00

The push refers to repository [docker.io/ericgoebelbecker/stackify-tutorial]

7d7bb0289fd8: Pushed

acfa7c4abdbb: Pushed

8d2f81f035b3: Pushed

d99e7ab4a34b: Mounted from library/python

332873801f89: Mounted from library/python

2ec65408eff0: Mounted from library/python

43efe85a991c: Mounted from library/python

1.00: digest: sha256:e61b45be29f72fb119ec9f10ca660c3c54c6748cb0e02a412119fae3c8364ecd size: 1787

docker push accepts a tag name and pushes it to the default repository, which is Docker Hub.

Now, if we visit our account area on hub.docker.com, we can see the new repository, the image, and the tag:

Docker Hub Repository Page

If you look closely, you’ll notice a size discrepancy. This is because the image on Docker Hub only contains the changes from the Python:2.7-slim image it’s based on.

We can pull the image down and run it from any system:

$ docker run -p 8080:4000 --name webapp -e NAME="Docker Hub" ericgoebelbecker/stackify-tutorial:1.00

Unable to find image ‘ericgoebelbecker/stackify-tutorial:1.00’ locally

1.00: Pulling from ericgoebelbecker/stackify-tutorial

b0568b191983: Pull complete

55a7da9473ae: Pull complete

422d2e7f1272: Pull complete

8fb86f1cff1c: Pull complete

9b622183190d: Pull complete

cf5af0f3fb51: Pull complete

3292695f8261: Pull complete

Digest: sha256:e61b45be29f72fb119ec9f10ca660c3c54c6748cb0e02a412119fae3c8364ecd

Status: Downloaded newer image for ericgoebelbecker/stackify-tutorial:1.00

* Running on http://0.0.0.0:4000/ (Press CTRL+C to quit)

This is the output of the run on a different system from the one where I built the image. Similar to the way we ran hello-world, we passed the image tag to docker run. Since the image was not available locally, Docker pulled it, along with its python:2.7-slim base image, from Docker Hub, assembled the image, and ran it.

We published the image, and it’s now publicly available from Docker Hub.

Conclusion

Docker is a powerful platform for building, managing, and running containerized applications. In this tutorial, we installed the tools, downloaded and ran an off-the-shelf image, and then built images of our own. Then we published an image to Docker Hub, where it can be downloaded and run on any Docker-enabled host.

Now that you understand the basics, keep experimenting and see how you can use Docker to package and distribute your applications.

Getting Started With the Swift iOS Feature Flag

This is another post I wrote for the Rollout blog. You can find the original right here.

This post was a lot of fun to write since I hadn’t done any iOS development in a few years.


Mobile users expect more. They want their applications to be as stable as yesterday while looking and acting more and more like tomorrow. How do you keep up? How can you reliably test and push new features without risking a disastrous release?

Feature toggles (also known as feature flags) are one way. They provide app developers with a way to add new behavior to an application and then enable or disable it without deploying a new release.

So how do feature toggles work? Add some new behavior to the application, wrap it in a boolean, default it to “off,” and then flip it when it’s safe, right?

There’s more to it than that. Effectively using feature flags requires understanding feature flag strategy and tactics.

I’ll get you to your first iOS feature flag quickly, and then give you a deeper understanding of feature flags and how to avoid creating technical debt when you are using them.

This tutorial assumes familiarity with basic iOS concepts and the ability to set up and use Xcode. While I’ll use an iPhone simulator for testing the application, you can use whatever method you feel most comfortable with. I’ll be using the latest version of Xcode, which is 9.3.1 at the time of this writing, and Swift 4.1.

The source code for the completed project is available here.

A Basic Feature Toggle

Let’s start with the basic single view iOS application offered by Xcode 9.

select xcode project for feature flags

We’ll add a Label to the application’s single view like below:

Add label to Swift feature toggle application

Next, we’ll connect the label to our ViewController as an outlet named greetingLabel and assign a greeting to it in the viewDidLoad() method.
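As a minimal sketch (assuming the outlet is wired to the storyboard label and named greetingLabel), ViewController.swift looks something like this:

import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var greetingLabel: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Set the greeting explicitly so the effect of the toggle is easy to see later.
        greetingLabel.text = "Hello, World!"
    }
}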

We could set the default value in the UILabel definition if we want, but we’ll define it explicitly throughout the tutorial to make the behavior easy to see.

We don’t need to make any further changes to the sample code. Let’s build and run the application.

Test Swift iOS application

 

We have a basic iOS app to work with. We’ll use feature toggles to modify the application’s greeting.

An iOS Feature Toggle

So let’s imagine we want to replace our greeting with a different one depending on the season.

A fundamental mechanism would be a boolean flag.
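A rough sketch of that idea, again inside viewDidLoad(), with a hard-coded boolean choosing between the two greetings:

override func viewDidLoad() {
    super.viewDidLoad()

    // Hard-coded toggle: flip this boolean and rebuild to change the greeting.
    let isNewYear = true

    greetingLabel.text = isNewYear ? "Happy New Year!" : "Hello, World!"
}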

When we build and run this version we see the new greeting below:

Feature toggle in Swift iOS application

And we have our first feature toggle!

But this means modifying, building, and getting a release through the App Store whenever we want to change our greeting.

Modifying a configuration file would be a bit simpler and shave a little bit of time and effort off the release cycle.  Let’s create a property list and add our toggle in the form of a configuration property to it.

Feature flag in iOS plist

We have a property list named featureflag.plist. In it, there is one parameter: a boolean flag named isNewYear.

We’ll load it and use the property to set our flag. We’ll load the file in the controller, even though that’s not what we would do in production code, to keep things simple.
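Here’s one way the loading code might look, assuming featureflag.plist is bundled with the app and contains a boolean key named isNewYear; only standard Foundation calls are involved:

override func viewDidLoad() {
    super.viewDidLoad()

    // Fall back to the plain greeting if the plist or the key is missing.
    var isNewYear = false

    if let url = Bundle.main.url(forResource: "featureflag", withExtension: "plist"),
        let data = try? Data(contentsOf: url),
        let plist = try? PropertyListSerialization.propertyList(from: data, options: [], format: nil),
        let settings = plist as? [String: Any],
        let value = settings["isNewYear"] as? Bool {
        isNewYear = value
    }

    greetingLabel.text = isNewYear ? "Happy New Year!" : "Hello, World!"
}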

Now we can toggle the value of isNewYear to YES or NO via the properties file, and we’ll see the proper greeting.

The plist gets us out of modifying code when we want to toggle a feature, but we still need to build and release.

Let’s look at a better way to change, and more importantly, manage feature toggles.

Getting Started With Rollout

First, you’ll need to create a free Rollout account here.

Once that’s done, sign in to your Rollout account and create a new application.

Rollout new Feature Toggle iOS Application
Be sure to select iOS, as shown above.

Next, you’ll see instructions for adding Rollout to your application.

Install feature toggle api iOS

First, download rox.zip, unzip it, and add the rox directory to your project’s directory tree.

Next, add RoxSdk.xcodeproj and ROXCore.framework to your project.

Then, select Add Files to… from the Project Navigator.

add files to project

Now, select the files from the RoxSdk directory in the zip file output.

select rollout files

 

You’ll see the ROXCore.framework icon in the Project Navigator. Add those files.

Next, we need to remove the framework from the project’s linked frameworks. Select your project’s name at the top of the Navigator tab.

Select feature flag project

 

 

And scroll down in the application information page to Linked Frameworks and Libraries.

Xcode linked frameworks

 

 

Next, select ROXCore.framework and then click the minus to remove it.

We want to add it back to embedded binaries. Click the plus in Embedded Binaries.

embedded binaries

Then, select ROXCore.framework and ROX.frameworkiOS and click Add.

Last, we need to set up a build step to strip out unneeded libraries from our production application.

First, add a script build phase:

Add script

Then locate the strip_frameworks.sh script:

Locate script

And drag it into the script target:

Drag in script

Initialize the API

The Xcode project is ready now, and we can initialize the API.

Finally, click Setup SDK back at the Rollout dashboard.

Setup Feature Toggle SDK

Add the import ROX declaration at the top of AppDelegate.swift.

Rollout assigned a unique initialization key to our application. Add ROX.setup() to didFinishLaunchingWithOptions and pass it that key. The top of AppDelegate.swift should look similar to this:
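If you follow those two steps, the result is roughly the sketch below (Swift 4.1-era signatures; the key string is a placeholder for the one Rollout generated for your app, so copy the exact call from the dashboard rather than from here):

import UIKit
import ROX

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        // Placeholder key: use the initialization key Rollout assigned to your application.
        ROX.setup(withKey: "<your Rollout app key>")
        return true
    }
}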

Now we can run the application to initialize Rollout. Click Activate App on the Rollout dashboard. After the application builds and runs, you are rewarded with this:

Feature flag success

Add a Managed Feature Flag

Let’s create a feature flag and register it with Rollout. If we look at our Rollout dashboard and check flags, we see this:
Create Spring Feature Toggle image
There are no flags, and there is no way to add one. That’s because we create them from our application, not from the dashboard.

In Swift, feature flags are RoxFlags, and are managed in RoxContainers.

First, add this class to the project:
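A container is just a class that conforms to RoxContainer and declares the flags it owns. As a minimal sketch (the withDefault: initializer label is my reading of the SDK, so compare it against the snippet Rollout shows you):

import ROX

// Gathers the app's Rollout-managed flags in one place.
class Flags: RoxContainer {

    // Defaults to true until a value arrives from the Rollout dashboard.
    let isNewYear = RoxFlag(withDefault: true)
}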

We’ve created a single flag named isNewYear with a default value of true.

We register the container with the API before calling Rox.setup(). Make this change to AppDelegate:
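Inside AppDelegate, that change amounts to holding on to a container instance and registering it before setup runs. A sketch, with the register(_:container:) argument labels following the description below rather than verified SDK documentation:

let flags = Flags()

func application(_ application: UIApplication,
                 didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    // Register the container under a name, then initialize the SDK.
    ROX.register("Flags", container: flags)
    ROX.setup(withKey: "<your Rollout app key>")
    return true
}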

We’re registering the container with Rollout. The first argument to register() is a name that Rollout will associate with it.

Run the application and then go to the Flags menu item in the Rollout dashboard.

Feature Flag in Rollout Dashboard

The new flag is there. We see the container name and the flag name in the dashboard.

Reading a Managed Feature Flag

Next, we need to update our code to use the RoxFlag instead of the property list.

Rollout retrieves feature flag values asynchronously and caches them. It extracts the latest value from the cache on startup while initiating a fetch from the network in parallel.

This means that the first time we run our application, we will immediately get the default value for our flag (false for toggles unless we override), and then the value set at Rollout upstream will be retrieved sometime later.

Since we want to see the value of the flag before we load our view, we need to do a little bit of extra work. This will also teach us a bit more about the Rollout API.

We’re going to add something to our call to ROX.setup().

First, we create a RoxOptions. This object holds configuration settings that we can pass to the API when we initialize it. One of those options, onConfigurationFetched, is a callback for when the latest configuration is successfully fetched from Rollout.

When our callback is executed, we use NotificationCenter to notify any interested objects that new information has been retrieved.
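Put together inside didFinishLaunchingWithOptions, that looks roughly like the sketch below. The RoxOptions type, the onConfigurationFetched property, the ignored handler argument, and the setup(withKey:options:) overload all follow the description above rather than a copy of the SDK docs, and the notification name is one I’ve chosen for this tutorial:

let options = RoxOptions()

// Invoked by the SDK once the latest flag configuration has been fetched.
options.onConfigurationFetched = { _ in
    // Tell any interested view controllers that fresh flag values are available.
    NotificationCenter.default.post(name: Notification.Name("configurationFetched"), object: nil)
}

ROX.register("Flags", container: flags)
ROX.setup(withKey: "<your Rollout app key>", options: options)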

There are a few advantages to using NotificationCenter instead of trying to call our view directly:

  • Since this callback will be called on startup, our ViewController probably hasn’t been created yet.
  • If we add more controllers to the application later, things will start to get complicated.
  • Threading issues are delegated to listeners (as we’ll see).

Next, we need to make some changes in ViewController.swift.
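Here’s a sketch of the updated controller. It assumes the flags property added to AppDelegate earlier, uses the notification name posted above, and guesses isEnabled as the flag’s boolean accessor, so adjust the details to match your SDK version:

import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var greetingLabel: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Re-run updateLabels whenever Rollout reports a freshly fetched configuration.
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(updateLabels),
                                               name: Notification.Name("configurationFetched"),
                                               object: nil)
        updateLabels()
    }

    @objc func updateLabels() {
        let delegate = UIApplication.shared.delegate as! AppDelegate

        // Only the main thread is allowed to touch the view hierarchy.
        DispatchQueue.main.async {
            if delegate.flags.isNewYear.isEnabled {
                self.greetingLabel.text = "Happy New Year!"
            } else {
                self.greetingLabel.text = "Hello, World!"
            }
        }
    }
}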

viewDidLoad() is now subscribing to update events and delegating them to a method named updateLabels().

We moved setting the value of our label to updateLabels(). In it, we get a reference to AppDelegate so we can access its copy of the flags and check them. However, we have to dispatch this task to the main thread because only the UI thread can modify the view.

We check the value of isNewYear and set the label appropriately. RoxFlag also offers an enabled block, but since we need to set the label back if the flag is toggled from “on” to “off,” the if/else logic makes more sense.

 

Managing a Feature Flag

We manage flags by adding them to experiments. An experiment is a scheme for controlling flags (and other variables) in production.

First, click on Production in the left-hand side menu and then click Experiments. This will bring up a screen with a Create Experiment button. Click that and fill out the new experiment window appropriately.


Select Set Audience.

 

And we see a console for setting flags to true, false, or split. Leave it set to False.

If we run our application now, we see “Hello, World!”

Next, change it to True and restart the application.

Feature toggle in Swift iOS application

We see the Happy New Year greeting!

Flip it back to false and click the phone’s home button. Then tap on the application icon to open it again. It flips back to Hello World! 

We can change the behavior of the application without touching code and releasing a new version. We can even alter it while the application is running on a client’s device.

Before we move on, let’s take a look at the experiment on the console again. Flip the flag from true to split.

Split Settings for a Feature Toggle

This option allows us to distribute a change to a percentage of users, hence the term “experiment.” If you play with it now, you’ll need to stop and restart the application completely to see different settings because of the API’s caching.

This is the power of feature flags!

Extra Credit: Configurations

Tying the contents of a label to a true/false flag is an admittedly contrived example, but it kept the tutorial code focused and simple.

Let’s take a look at another mechanism for setting a label. Rollout supports managing configuration values from the dashboard.

Let’s add a new field to Flags.
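Assuming the same withDefault: initializer style as RoxFlag, and using greeting as a placeholder name for the new field, the container might grow like this:

class Flags: RoxContainer {

    let isNewYear = RoxFlag(withDefault: true)

    // A string value we can manage from the Rollout dashboard.
    // "greeting" is just a placeholder name chosen for this tutorial.
    let greeting = RoxConfigurationString(withDefault: "Hello, World!")
}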

RoxConfigurationString is exactly what it sounds like: a string value that Rollout manages for us.

Build and run the application again, and then go back to the Rollout dashboard.

Under Production and Configurations, we see our new value.

Feature toggle configuration

If we click on it, we can set it to a value like so:

set feature toggle value

 

Select >= for the criteria, and enter Happy New Year! for the value.

Then we need to make a small code change. Change updateLabels() to use the value of our new member:
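A sketch of that change, with .value as my assumption for reading the managed string:

@objc func updateLabels() {
    let delegate = UIApplication.shared.delegate as! AppDelegate

    DispatchQueue.main.async {
        // The label text now comes straight from the dashboard-managed string.
        self.greetingLabel.text = delegate.flags.greeting.value
    }
}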

And run the app. We see Happy New Year!

Let’s change it.

Set Feature Toggle to w00t

Close and open the application. No need to restart or rebuild: our notification mechanism will handle the change.

Feature toggle developers say w00t!

We can change our application config with Rollout!

Conclusion

This guide demonstrates how to get started with Rollout on iOS using Swift. The API supports any iOS platform that runs Swift, including Apple TV. Rollout's documentation has details on how you can do a great deal more with flags, configuration settings, experiments, versions, and grouping users.

You now have an understanding of feature flag management and how it can improve your iOS code.  You’ve seen how it can help you manage projects and eliminate unnecessary releases. Get to work!

The Spring Feature Toggle: Your Guide to Getting Started Quickly

This is a post I wrote for Rollout, a tech company that has a system for managing feature toggles in enterprise applications. The original post is here.

When it comes to agile development and “moving fast and breaking stuff,” many people think of REST APIs. Decomposing services into manageable building blocks with clearly defined interfaces is a good step in designing any large system. Even if one doesn’t necessarily wish to be “agile,” REST microservices are an effective design pattern.

But wouldn’t it be nice to separate “move fast and break stuff” into two parts and throw the second one away?

Feature toggles (also known as feature flags) can help with that. They provide us with a way to add new behavior to an application and then enable or disable the new behavior without deploying a different version. Toggles give us immediate advantages when implementing a REST microservice. We can toggle new behavior for a subset of users or only enable it after deploying a new client application.

So we add a feature toggle to our configuration file or the command line, default it to “off,” and wait until later to flip it, right?

Does that describe a complete solution? Not really. Updating a configuration file isn't always more straightforward than deploying a new release, and adding a boolean flag for each new feature seems like a bit of a hack.

There is a better way. There’s feature flag management. Instead of using conditionals that will inevitably turn into technical debt, you can include these toggles as part of a strategy for improving your code and your ability to support it.

Let’s explore feature toggles in a REST service written with Spring Boot. We’ll start with a simple flag and build it into a fully managed feature flag with Rollout’s secure feature management system.

This tutorial assumes familiarity with basic Spring concepts and the ability to set up a development environment. While we will look at several Spring components, we'll leave whether you run your application from Gradle, Maven, an IDE, etc., up to you.

The source code for the completed project is available here.

A Basic Feature Toggle

Let’s start with a straightforward Boot application.

We have an interface that supplies a greeting.
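
The project's code isn't reproduced here, but as a representative sketch (the method name is illustrative), the interface can be as small as this:

    // GreetingHandler.java
    public interface GreetingHandler {
        String getGreeting();
    }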

And a handler class that implements it.
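
Something along these lines, with the greeting text of your choice:

    // StandardGreetingHandler.java
    public class StandardGreetingHandler implements GreetingHandler {
        @Override
        public String getGreeting() {
            return "Hello, World!";
        }
    }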

We’ll create a controller with a single request mapping.
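
One way to write it before dependency injection enters the picture; the request path is an arbitrary choice for this sketch:

    // GreetingController.java
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class GreetingController {

        private final GreetingHandler greetingHandler = new StandardGreetingHandler();

        @GetMapping("/greeting")
        public String greeting() {
            return greetingHandler.getGreeting();
        }
    }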

Finally, there is the main class to run the service.
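
This is the standard Spring Boot entry point:

    // Application.java
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class Application {
        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }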

We can use Curl to test this service, but any test client will do.

The response comes back with our greeting and the standard HTTP headers.

A Spring Feature Toggle

So let’s imagine we want to replace this greeting with a different one depending on the season.

Spring already has tools for solving this problem. After all, dependency injection and programmatic configuration are two of the main reasons Spring has gained a lot of traction in the Java development community.

First, we’ll add a property to our configuration. We’ll use application.properties for the sake of simplicity and add a new parameter to the file.
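
For example, with the toggle defaulted to off (the property name is the one referenced later in this post):

    # application.properties
    feature.toggle.holidaySeason=false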

Next, we create a new implementation of our GreetingHandler.
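
A sketch of the second handler; the greeting text is whatever suits your holiday:

    // HolidayGreetingHandler.java
    public class HolidayGreetingHandler implements GreetingHandler {
        @Override
        public String getGreeting() {
            return "Happy Holidays!";
        }
    }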

Then we’ll modify our controller to accept a handler via dependency injection.
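
With constructor injection, the controller no longer decides which handler it gets:

    // GreetingController.java, reworked for dependency injection.
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class GreetingController {

        private final GreetingHandler greetingHandler;

        // Spring injects whichever GreetingHandler bean the configuration selects.
        public GreetingController(GreetingHandler greetingHandler) {
            this.greetingHandler = greetingHandler;
        }

        @GetMapping("/greeting")
        public String greeting() {
            return greetingHandler.getGreeting();
        }
    }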

Lastly, we add configuration code to the main class.
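
One way to wire this up — a sketch rather than the project's exact code; the bean method names match the ones referenced later in the post:

    // Application.java with conditional bean definitions added.
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Primary;

    @SpringBootApplication
    public class Application {

        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }

        // Created only when the property is set to true.
        @Bean
        @ConditionalOnProperty(name = "feature.toggle.holidaySeason", havingValue = "true")
        public GreetingHandler getGreetingHandler() {
            return new HolidayGreetingHandler();
        }

        // The default handler, used when the property is false or missing.
        @Bean
        @Primary
        @ConditionalOnProperty(name = "feature.toggle.holidaySeason",
                               havingValue = "false", matchIfMissing = true)
        public GreetingHandler getStandardGreetingHandler() {
            return new StandardGreetingHandler();
        }
    }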

If holidaySeason is true, Spring overrides the primary bean, StandardGreetingHandler, with HolidayGreetingHandler.

If we run this version with the current configuration, everything looks the same.

But when we set feature.toggle.holidaySeason=true and restart the application, we see the new greeting.

We’ve added our first feature flag. The value of feature.toggle.holidaySeason controls our application’s behavior. We can load the desired controller by modifying application configuration without deploying new code.

Managing Spring Feature Toggles Centrally

Spring properties are, at best, a limited version of feature flag management. They require modifying and distributing configuration files to toggle features. Many organizations manage their configurations like software, with version control, packaging, and testing. Changing configuration requires a release cycle, and it should.

Let’s look at a better way to manage feature toggles.

Getting Started With Rollout

First, you’ll need to create a free Rollout account here.

Once that’s done, sign in to your Rollout account and create a new application.

Create New App dialog on Rollout.io
Be sure to select Java and Java Server, as shown above.

Next, you’ll see instructions for adding Rollout to your application. Use the dropdowns to select Java and Java Server to see instructions like these:

Rollout Feature Toggle Installation Instructions

Your application will be assigned a unique initialization key.

Add the Rollout library to your dependencies. At the time of this writing, the current version of the rox-java-server library is 2.0.3. Here are my Gradle build dependencies for both Spring Boot and Rollout:
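
(Something like the following; double-check the Rollout coordinates against the setup instructions the dashboard shows you, since they may differ by SDK version.)

    dependencies {
        compile 'org.springframework.boot:spring-boot-starter-web'
        // Coordinates for the Rollout server-side SDK.
        compile 'io.rollout.rox:rox-java-server:2.0.3'
    }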

Initialize the API

Now we need to add the call to Rox.setup() to initialize the SDK with the application key. This requires some additional thought in a Spring application, though.

Rollout's API is asynchronous. It uses OkHttp to make REST requests for application information, including the feature toggles that we'll add later. If we attempt to access our feature toggles before initialization has completed, we may get an incorrect value. We need to make sure that Rollout is completely initialized and our feature toggles have been updated before Spring uses them to decide how to set up our application.

A Spring Feature Toggle Class

Let’s create a new component and use it to initialize Rollout. It will have code to wait until the Rollout initialization is complete before the application finishes loading.

Then we’ll continue to set the class up for managing flags.
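
Here's a sketch of that component. The Spring pieces (SpringBootCondition, ConditionOutcome, the latch, and the AtomicBoolean guard) follow the description below; the RoxOptions builder call and the handler's exact signature are assumptions, so compare them with the setup instructions on the dashboard:

    // RolloutCondition.java -- initializes the Rollout SDK and blocks until
    // its configuration has been fetched.
    // (Also import Rox, RoxOptions, and ConfigurationFetchedHandler from the
    // rox-java-server SDK; the package paths depend on the SDK version.)
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.atomic.AtomicBoolean;

    import org.springframework.boot.autoconfigure.condition.ConditionOutcome;
    import org.springframework.boot.autoconfigure.condition.SpringBootCondition;
    import org.springframework.context.annotation.ConditionContext;
    import org.springframework.core.type.AnnotatedTypeMetadata;

    public class RolloutCondition extends SpringBootCondition {

        // Ensures we only initialize the SDK once, even if Spring creates
        // several instances of this condition.
        private static final AtomicBoolean initialized = new AtomicBoolean(false);

        // Counted down by the configuration-fetched callback so we can wait
        // for the flags to be downloaded before answering condition checks.
        private static final CountDownLatch latch = new CountDownLatch(1);

        public RolloutCondition() {
            initializeRox();
        }

        private void initializeRox() {
            if (initialized.compareAndSet(false, true)) {
                // The builder method name and handler signature may differ
                // slightly between SDK versions.
                RoxOptions options = new RoxOptions.Builder()
                        .withConfigurationFetchedHandler(result -> latch.countDown())
                        .build();
                Rox.setup("YOUR_APP_KEY", options);
            }
        }

        @Override
        public ConditionOutcome getMatchOutcome(ConditionContext context,
                                                AnnotatedTypeMetadata metadata) {
            try {
                latch.await();  // wait until Rollout has fetched its configuration
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // Hard-coded for now; we'll wire in the managed flag shortly.
            return ConditionOutcome.noMatch("holidaySeason disabled");
        }
    }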

This class adds two bits of functionality to our application.

First, Rox.setup() accepts an optional RoxOptions argument. We built one that installs the Rollout API's ConfigurationFetchedHandler callback, which is called after Rollout's configuration is retrieved.

The callback counts down a CountDownLatch once Rollout is fully initialized. We also guard the call to initializeRox() with a static AtomicBoolean in case the condition is instantiated in more than one place.

Second, we implement the getMatchOutcome() method defined by SpringBootCondition. We've hard-coded it to return false for now. This SpringBootCondition subclass will be loaded when it is referenced in a @Conditional annotation. The class isn't complete yet, but it's good enough for this first step: we'll point our application at it so Spring loads it and initializes the API.

Our previous revision used the @ConditionalOnProperty annotation to read Spring properties and decide which handler to load. We need to switch to an annotation that can look at code instead of configuration, and @Conditional does exactly that. A new instance of RolloutCondition is created for each annotation that refers to it, which is why we set up the AtomicBoolean to ensure that initialization is only done once.

Selecting a Spring Controller From a Feature Toggle

Replace the @ConditionalOnProperty annotation on getGreetingHandler() with @Conditional(RolloutCondition.class) and move the @Primary bean annotation to this method.

On getStandardGreetingHandler(), replace @ConditionalOnProperty with @ConditionalOnMissingBean. If the HolidayGreetingHandler is not created, we'll fall through to the standard implementation, as in the sketch below.
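
After those changes, the two bean methods inside Application.java might look like this (also import org.springframework.context.annotation.Conditional and org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean):

    // Created only when RolloutCondition reports a match, i.e., when the
    // managed flag is on.
    @Bean
    @Primary
    @Conditional(RolloutCondition.class)
    public GreetingHandler getGreetingHandler() {
        return new HolidayGreetingHandler();
    }

    // Fallback used when no holiday handler bean was created.
    @Bean
    @ConditionalOnMissingBean(GreetingHandler.class)
    public GreetingHandler getStandardGreetingHandler() {
        return new StandardGreetingHandler();
    }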

Back on the Rollout website, click Next for the prompt to build and run your application.

Build and Run Dialog to initialize Spring Feature Toggle

Build and run it, and after a few moments you’ll see this message:
Feature Toggle Application Initialized for Spring Feature Toggle

Implementing a Managed Spring Feature Toggle

With the return value of getMatchOutcome() set to false, we see the standard greeting when we run our app. Now, let’s turn that return value into a managed feature flag.

If we look at our Rollout dashboard and check flags, we see this:
Create Spring Feature Toggle image

There are no flags and there is no way to create one. That’s because we create them from our application, not from the dashboard.

In Java, managed flags are RoxFlags. These flags are members of a RoxContainer, which is registered with the Rollout API.

Let’s make a few changes to RolloutCondition.

First, add a variable declaration toward the top and an implements qualification to the class definition.

Next, add a call to Rox.register() inside initializeRox().

Finally, change getMatchOutcome() to use the feature toggle.
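
Putting those three changes together, the relevant parts of the class now look roughly like this (the latch, the AtomicBoolean, and the constructor are unchanged from the earlier sketch; RoxContainer and RoxFlag come from the Rollout SDK, as described below):

    public class RolloutCondition extends SpringBootCondition implements RoxContainer {

        // Managed flags must be public so Rollout can find them.
        public RoxFlag holidaySeason = new RoxFlag();

        private void initializeRox() {
            if (initialized.compareAndSet(false, true)) {
                // Register this container under the "Flags" namespace before setup.
                Rox.register("Flags", this);
                RoxOptions options = new RoxOptions.Builder()
                        .withConfigurationFetchedHandler(result -> latch.countDown())
                        .build();
                Rox.setup("YOUR_APP_KEY", options);
            }
        }

        @Override
        public ConditionOutcome getMatchOutcome(ConditionContext context,
                                                AnnotatedTypeMetadata metadata) {
            try {
                latch.await();  // wait for the fetched configuration
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // The managed flag now drives the outcome.
            return holidaySeason.isEnabled()
                    ? ConditionOutcome.match()
                    : ConditionOutcome.noMatch("holidaySeason disabled");
        }
    }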

RoxContainer is an interface with no methods. By having RolloutCondition implement it, we can pass it to Rox.register() inside initializeRox(). The call to register() accepts a String that acts as a namespace. We used Flags.

Then we added a single RoxFlag member named holidaySeason. Managed flags must be public so Rollout can see them.

Last, we modified getMatchOutcome() to return a match outcome based on holidaySeason.isEnabled(), which returns a boolean value.

Run the application again and then look for your feature toggle in the Rollout dashboard.

Spring Feature Toggle Dashboard on Rollout.io

Flags.holidaySeason is there!

We’re ready to start using our flag from the management console.

Managing a Feature Flag

We manage flags by adding them to experiments. An experiment is a scheme for controlling flags (and other variables) in production.

Click on Production in the left-hand side menu and then click Experiments. This will bring up a screen with a Create Experiment button. Click that and then fill out the new experiment window appropriately.

New Experiment Dialog on Rollout.io
Select Set Audience.

Experiment Dashboard

And we see a console for setting flags to true, false, or split.

If we run our tests now, we see that holidaySeason is false.

Let’s change it to true and restart the service.

When we run our tests again, the results are different!

We can change the behavior of our application without touching code or configuration files.

Before we wrap up, let’s take a look at the experiment on the console again. Flip the flag from true to split.

Split Settings for a Feature Toggle

We don't just have the ability to change the application behavior from the console; we can also experiment (hence the name) with how often the application loads each greeting handler.

This is the power of feature flags.

Conclusion

This guide demonstrates how to get started with Rollout in a Java project. Rollout’s documentation has details on how you can do a great deal more with flags, experiments, and groupings.

You now have an understanding of feature flag management and how it can improve your Java code.  You’ve seen how it can help you manage projects and eliminate unnecessary deployments. Get to it!
