How to Redirect Docker Logs to a Single File

This post discusses how easy it is to manage your logs when you use containers. You can easily use Docker to redirect your logs to a single file. How cool is that? I was in the midst of a big deployment to AWS when I wrote this one. Docker is the only way to go.

The original post is over on Scalyr’s blog.

Sometimes, when troubleshooting or monitoring a Docker container, we need to see the application’s output streams. Containerized applications generate standard output (stdout) and standard error (stderr) like any other software. The Docker daemon merges these streams and directs them to one of several locations, depending on which logging driver is installed. The default driver makes the container output easy to access, and if we want to copy the information to another location, we can redirect docker logs to a file.

Let’s take a look at how we can manipulate logs generated by the default json-file log driver. This driver places the log output in a system directory in JSON-formatted files but provides a command line tool for displaying log contents in their original format.

Viewing Container Logs

Let’s start with a simple container. We’ll create one that prints a message every second.
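A sketch of that command (the container name, test, is our choice here):

```
docker run --name test -d busybox sh -c 'while true; do date; sleep 1; done'
```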

This command is an example from the Docker website. It executes a minimal Docker container with a Bourne shell script that logs a timestamp once every second. We’ll use it for a couple of examples.

So the easiest way to view container logs is with the docker logs command. Start the container and then display the logs.
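Assuming the container above is named test:

```
docker logs test
```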

This command displays all of the output the container has generated so far.

If there’s too much output for a single screen, pipe it to a buffering command like less.
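For example:

```
docker logs test | less
```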

This will still send all of the output generated so far through less, so you’ll need to page through to get to the end.

But sometimes you want to follow the logs as the container is running. There’s a command line option with an appropriate name for that: run docker logs with --follow.
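With our example container, that looks like:

```
docker logs --follow test
```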

After printing all of the container output so far, the command will continue to follow the logs. You can also use the short form of the option, -f, just as you would with the tail command.

The docker logs command also offers an array of convenient options, such as filtering output based on time and displaying timestamps. You can read more about them here. But what if you want to redirect the docker logs to a file for viewing offline?

Redirect Docker Logs to File

Since Docker merges stdout and stderr for us, we can treat the log output like any other shell stream. To redirect the current logs to a file, use a redirection operator.
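For example, to capture everything generated so far (the output file name, container.log, is arbitrary):

```
docker logs test > container.log
```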

To send the current logs and then any updates that follow, use --follow with the redirection operator.
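Continuing with the same names:

```
docker logs --follow test > container.log
```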

This command saves the output for future reference, but what if you want to keep the logs and view them at the same time? Opening another shell and tailing the output file feels like a hack. There must be a better way.

So, let’s use tee to watch the output and save it to a file at the same time.
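A sketch, reusing the container and file names from above:

```
docker logs --follow test | tee container.log
```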

We still see the container output. Now, open another shell and check the output file.
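One quick way to watch the file, assuming the container.log name from above:

```
tail -f container.log
```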

The file is being updated too! We have the output on the terminal, and tee saves it to the file we specified at the same time.

Redirecting Stdout/Stderr in the Container

Of course, there’s another way to save your container logs to a file. Instead of sending output to stderr and stdout, redirect your application’s output to a file and map the file to permanent storage outside of the container.

When to eschew the standard output completely is a judgment call that you should make when you take into account how much output your application generates, how often you think you’ll need to refer to it, and what kind of logging infrastructure is available to you. If, for example, you need to store events for compliance purposes, a logging framework like Java’s Logback may be a better option than capturing stdout and stderr.

That said, let’s modify the previous example to save the output to a file. We’ll change the loop to print the date to a file in /tmp. We’ll also run the container with /tmp mapped to the same directory on our host. Note that Docker will not accept relative paths on the command line, so if you want to use a different directory, you’ll need to use the complete path.
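A sketch of what that looks like; the container name (test2) and log file name are our own choices:

```
docker run --name test2 -d -v /tmp:/tmp busybox \
  sh -c 'while true; do date >> /tmp/output.log; sleep 1; done'
```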

Take a look at /tmp/output.log on the host system.
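For example:

```
tail -f /tmp/output.log
```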

Now, take a look at docker logs.
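Using the container name from the sketch above:

```
docker logs test2
```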

There’s nothing there. Since we redirected the output to a file, there is nothing for Docker to capture. Of course, we would need to capture both stdout and stderr in a production application.

Use the Method That Works for You

We covered several different ways to capture docker logs for saving and analysis. The best method is the one that works for you. In a development environment, the docker logs command is a powerful tool that works well with other command line tools. You can use Docker’s built-in tools to view, filter, and copy logs to a file.

In a production environment, log management should be part of an enterprise strategy. Scalyr’s log aggregation tools help you aggregate, process, search, and analyze your logs. Contact us, or sign up for a free trial to see how our tools make it easier to find what you need in the vast ocean of log entries produced by your growing application infrastructure.



Why Is Unit Testing Important?

Why is unit testing important? This is one of those titles that you almost have to click. I wrote this one for NCrunch, a testing tools company, a while back. The original is here.


What’s the point of unit testing? Why is it important? Even if you’ve never heard the question asked in so many words, you’ve probably been in a situation where your team ignored unit tests or treated them as an afterthought. They’re often the first thing to go when time runs short.

So, is unit testing important? How critical can it be if so many teams think it’s okay to ignore it? Well, the fact is unit testing is crucial to a development team’s success. Neglecting it can result in lost time, wasted effort, and missed opportunities.

We already debunked the most common excuses for not using tests and TDD. We know that we should write unit tests. Just like we know we should brush our teeth after every meal, exercise every day, and spend less time on social media. But is unit testing important?

Let’s take a look at why unit testing isn’t just important but is also critical to development success.

Unit Tests Are Documentation

For most developers, the fastest way to learn how to use new code is by reading an example. Documentation is useful, but nothing beats an example. This is especially true for a unit test, which is sample code that you can run in your IDE. What’s the first thing you scan for when you’re reading documentation? I bet it’s code snippets.

What’s the first thing you look for when you browse through the source code? Tests. What could be more useful than a collection of working unit tests? They’re examples of the code in use. They’re easy to run, and you can copy or change them for experimentation.

Is there anything more frightening than source code that doesn’t come with any tests at all? If you’re inheriting it from another developer, having no unit tests is downright terrifying. Production code and sample programs often have external dependencies and may not run in a development environment. Even if you can run them outside of production, experimenting with them can be difficult too. You can run unit tests in place, in your development environment, or on the command line, and they are easy to play with.

Unit Tests Catch Regressions

How do you know when a code fix or new feature doesn’t break existing behavior? Hopefully, your answer is unit tests. Useful unit tests establish baseline behavior for your code in isolation. By running them as part of each build, you catch unforeseen side effects before you release the code into the wild.
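As a sketch of what "baseline behavior" means, imagine a hypothetical applyDiscount function; a unit test pins its behavior down (Swift's XCTest here, but any xUnit-style framework works the same way, and the function and values are invented for illustration):

```swift
import XCTest

// Hypothetical production code: apply a percentage discount to a price.
func applyDiscount(price: Double, percent: Double) -> Double {
    return price * (1.0 - percent / 100.0)
}

final class DiscountTests: XCTestCase {
    // Baseline behavior: a 10% discount on 100.0 yields 90.0.
    func testTenPercentDiscount() {
        XCTAssertEqual(applyDiscount(price: 100.0, percent: 10.0), 90.0, accuracy: 0.001)
    }

    // Edge case: a 0% discount leaves the price unchanged.
    func testZeroDiscount() {
        XCTAssertEqual(applyDiscount(price: 42.0, percent: 0.0), 42.0, accuracy: 0.001)
    }
}
```

If a later change breaks either assertion, the next build flags it immediately.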

Detecting code regressions early in the development process is the most recognized practical application of unit tests. But the benefits of testing go beyond catching problems early; tests help you isolate problems quickly, too. By providing feedback right away, they make it easier to correlate changes to bugs.

Unit Tests Provide Immediate Feedback

How long do you want to wait before you see your code work? After you deploy it to QA? With unit tests, you don’t have to wait at all. Feedback is precisely one build away. It’s impossible to overstate the importance of the tight feedback loop unit testing offers developers.

As I mentioned above, immediate feedback simplifies troubleshooting. Rather than trying to find a new regression after the fact via log messages or a description of the problem, bugs surface immediately and can be correlated directly to the last code change. If you’re already accustomed to running frequent builds, it’s possible to isolate a new issue in minutes.

But this immediate feedback isn’t just for isolating problems. It’s useful for confirming new behavior, too. Developers aren’t lawyers, and we spend a lot of time asking questions we don’t already know the answers to. Unit tests give us the rapid responses we need. This tight feedback process saves time and improves our code.

Unit Tests Improve Code Quality

What’s the best way to figure out how to design your code? By using it.

Translating use cases and requirements into software is only the first step in the development process. It’s only after writing tests that you can use the new code to solve problems so you know you have a usable product. (Of course, inverting the process by writing the tests first is even better.) It’s not uncommon to find that an interface or object is hard to use or that you missed a set of features entirely in your first pass at developing a product. Unit tests shorten this discovery process. Designing and implementing tests makes you your own first customer. In a way, unit tests create a second type of feedback: how does this code look? Is it easy to use? Does it solve the problems it’s supposed to solve?

Depending on how you work, you may write your tests after you’ve implemented your code. After you think you are feature complete or at some other natural break point, you create tests that verify critical aspects of the code’s behavior. You might decide that something is shaped wrong, and you may change a method signature or some names to make things more transparent to a user.

Or, you might start with a test that defines what you want the code to do, and then write code that passes the test. Yes, I am referring to test-driven development (TDD). Kent Beck says that TDD “gives you a chance to learn all of the lessons that the code has to teach you.” When you put tests up front, you write software that you understand better. By virtue of writing tests up front, you’ve already used your own software once.

Unit Tests Build Confidence

There’s a common thread to this list of reasons why unit tests are important. In each case, unit tests build confidence about your code. Unit tests ask questions, and the answers provide assurances about the reliability and quality of your software throughout its lifecycle.

Code that includes unit tests comes with an implied guarantee. You can rely on the behavior they verify. If you make changes to the code and the tests still pass, you can be confident that you haven’t broken anything.

Developing tests as you write your application code builds confidence too. Whether you’re using TDD or not (and you should), unit tests are a stake in the ground for how the software behaves. The immediate feedback that your unit tests provide reassures you that you are headed in the right direction. Checking in code with working tests makes the rest of your team feel better too.

When it comes time to take your code off the shelf and update, repair, or change it, a test suite offers confidence too. You can pick up where the last developer left off, in a tight code-test-repeat development cycle. You can catch new regressions as you work and add tests to an existing set of verifications.

So, why is unit testing important? Think about what you give up without them. Without tests, you don’t get that tight feedback loop as you work. You have to wait for QA and integration testing to see your application in action. Simply put, if you don’t develop with unit tests, you can’t be completely confident that your code is ready for release.



Getting Started Quickly With Swift Logging

Scalyr’s “Getting Started” blog series has covered a ton of different platforms and languages. I’ve had the privilege of writing a handful of the posts, and each one of them has been a lot of fun. This one is about logging with Apple’s Swift language. You can find the original post here.

We’ve covered how to log in seven different languages so far: C#, Java, Python, Ruby, Go, JavaScript, and PHP. We’ve also included a few libraries and platforms, like Log4J, Node.js, Spring Boot, and Rails.

Now, it’s time to talk about Apple’s Swift language. Swift has been slowly gaining in popularity, especially since its open-source release.

I’ll start with a quick example of manual logging in Swift. Then I’ll discuss details of how and why logging matters. Finally, I’ll move on to using Apple’s Unified Logger in a Swift application and how it can improve your ability to monitor applications and track down issues.

The code example will be for MacOS, but you can easily adapt it for any Apple platform.

Let’s get to work!

The Simplest Swift Logging That Could Possibly Work

Let’s start by creating a new project in Xcode. If you don’t already have it installed, go to the App Store and install it from there.

Next, start Xcode and select Create a new Xcode Project from the first dialog window.

We’re going to keep the code as simple as we can in this tutorial, so select MacOS and Command Line Tool and then click Next.


Now, you can give your project a name. These values are not critical, but make sure that Swift is set as the project language.

Finally, we have a project.

Select main.swift in the IDE, and you can see the code Xcode added for us.

Rather than print to the console, we want to save logging information to a text file.

Swift works differently from many of the other languages we’ve worked with before. There are several ways to write a string to a file. But since the purpose here is to demonstrate the easiest way to do it, here it is.
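A minimal sketch (the file name and message text are our own choices):

```swift
import Foundation

// Pick a location you can find easily; Xcode's working directory is buried
// deep inside ~/Library, so the home directory is a convenient choice.
let fileName = "swift-log.txt"
let logFileURL = URL(fileURLWithPath: NSHomeDirectory()).appendingPathComponent(fileName)

let message = "We're logging with Swift!"

do {
    // A Swift String can write itself directly to a file URL.
    try message.write(to: logFileURL, atomically: true, encoding: .utf8)
} catch {
    print("Failed to write log message: \(error)")
}
```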

Set the file name to somewhere you can quickly locate it. Keep in mind that the working directory for Xcode projects is buried somewhere deep in your Library directory, so you may want to set the log file location to your home directory.

When you’ve picked a place for the file, build and run the project. Then find the file and open it with any editor you wish. You’ll see your log message.

We started with declaring where our log file will be. As we’ll see below, this is important and something all components have to know—unless there’s a framework that manages log locations for us.

Next, we converted that file name to a URL. All Swift strings can write themselves to a URL, so we’re taking advantage of that.

Finally, we created a string and wrote it to the URL.

While this would work for simple logging, it wouldn’t necessarily work well. What’s the cost of having every log message open a URL and write itself to it? I doubt it scales well for hundreds or thousands of log messages. What would happen when more than one message wanted to log itself at the same time? Is this mechanism thread-safe? Would it block the entire program?

Let’s look at a better way.

What Is Application Logging?

First, let’s take a look at what logging is.

You’ve probably dealt with logging before and have an idea of what it means, but it’s still worth defining it so we can evaluate a logging framework in terms of what we need. Back in the first article in this series, we defined application logging like this:

Application logging involves recording information about your application’s runtime behavior to a more persistent medium.

Let’s break that down.

Logging means recording runtime behavior. We’re documenting events about an application. The time factor implies recording them in chronological order.

Second, we store logs in a more persistent medium. Application events occur quickly, and we want to be able to go back and review them. Maybe we’re trying to track down an error, or perhaps we have to maintain an audit trail. A persistent medium can be a disk, a relational database, or a search engine.

So, that’s what logging is. How can you do it more effectively in Swift, and why would you want to? These are important questions. Let’s get back to work.

Apple’s Unified Logging System

With MacOS Sierra and iOS 10, Apple introduced the new unified logging system. While they haven’t deprecated the legacy logging system yet, the new logger is the direction they’re going in.

The unified logger has built-in security and performance features that make it desirable, even though it does have an idiosyncrasy that many developers dislike: messages are stored in memory and a proprietary data store. There are no clear-text log files, and messages can only be read with the Console app or the command line log tool.

Let’s add the unified logger to our application and see how it works.

Let’s start by removing the code for manipulating files and just replacing it with os_log(), the unified logger. We need to import the os package, and add the call to os_log.
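Roughly, the replacement looks like this (the message text is our own); note the problem, which we’ll look at next:

```swift
import os

let message = "We're logging with Swift!"
os_log(message) // won't compile: os_log requires a static string
```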

This program doesn’t build.

We can only pass static strings to os_log. So, we’ll change how our string is defined.
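Passing a string literal directly satisfies the static string requirement:

```swift
import os

os_log("We're logging with Swift!")
```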

Build and run this version, and everything looks fine.

But where’s the log message?

Run the Console application. You can find it in Applications/Utilities or by opening Spotlight and searching for it.

Search for the text Process: Swift in the search bar.

There’s our log message.

The unified logger takes care of the log file location by storing all logs in the same system and indexing them. We don’t have to worry about where they belong, how to open them, and when to close them.

So what do we need to concern ourselves with when it comes to Swift logging?

Why Log?

Someday you’ll write that perfect application. You know which one—the one that has no bugs, reacts correctly to unexpected events, and anticipates your customers’ every need. But until then, you’ll need a way to keep an eye on things. Here in the real world, even well-constructed applications have bugs and other shortcomings, and we need to monitor what’s going on inside them.

We can find and fix problems quickly when we’re developing code. We have the debugger, we can add print statements, and yes, we even use logs as we’re coding. We’re watching our application in captivity, in a sterile environment where we have control.

But our application is going to leave the tender embrace of our development environment eventually. It will be used in ways we didn’t anticipate, and it will encounter conditions we never imagined. And then, when it fails, we’ll need a way to figure out what happened.

This is where logs come in.

Isolating bugs isn’t the only use for logs, either. They’re useful for watching an application that works and finding opportunities for improvement. We can use them to detect patterns of behavior that can be improved upon. We might be able to discover a resource that should be cached in memory instead of a file or a feature that’s never used.

And, of course, the usefulness of logs doesn’t stop there. Sometimes we need an “extra copy” of transactional data, such as credit card charges or other financial transactions.

We need visibility into our application’s runtime behavior, and we get it with logging.

How Should You Log?

We took a quick look at Apple’s Unified Logging System. It fulfills the requirement for persistently storing logs in a central location. We can find them with a text search via the console application or a command line tool.

But it would be nice if the logs were easier to find inside the system and had a bit more context.

This leads to a more general question: what information should be logged?

Most logging systems include at least the following information in each log entry:

  • Timestamp: when the event described in the log entry happened. The unified logger takes care of this for us.
  • Event context: useful information about the event. “It worked” or “It’s broken” might be useful or entertaining during a debugging session. “Failed to connect to database at” is more useful in production.
  • Severity level: logs tend to have a level that puts them in context compared to other entries. The unified logger defines default, info, debug, error, and fault. We’ll take a closer look at logging levels below.

So the unified logger takes care of two out of three essential aspects of logging for us. But it’s up to us to manage the content of our messages effectively.

These are only the basics when it comes to logging best practices. Scalyr has an article about logging best practices here. There’s also an excellent cheat sheet here.

Customizing the Unified Logging System

Adding a Log Level

So, let’s make our use of the unified logger more effective. We can already log information to persistent storage with a single line of code. What more can we do?

First, let’s add some additional context to our logs with log levels. Change the call to os_log by adding a few more arguments and updating the log message text.
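A sketch, with a message of our own choosing:

```swift
os_log("This is an error message!", log: .default, type: .error)
```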

Build and run and take a look at the console application.

The logger tagged the message as an error for us because we passed .error for the type: argument. We can also pass in .info, .debug, or .fault. If we omit the type: argument, the message is logged as default.

To do this, we passed the os_log call the log message, an instance of a logger, and the log message type.

Let’s take a closer look at the notion of a log instance before continuing to customize our messages.

Log Instances

Up until the previous exercise, we’d been calling os_log with just a message. This is a convenience function: when we call it with only a message, it uses the default logger instance. When we called it with a logger instance and a log type, we were calling a different function that used the instance we passed in.

Let’s take a closer look at our log message in the console application.


Because we used the default logger instance, the message doesn’t have subsystem or category information.

If we want to populate these fields, we need our own logger instance—one that’s initialized with information about our application. Let’s create one.

The first step is to create a plist with logging configuration information.

In Xcode create a new file. Select the property list type.


Name it com.scalyr.SwiftLogger and save the file.

The file will open in the property editor. Life is too short for that. Right-click on the file name in the project navigator, and select Open As and then Source Code.

Enter this text.
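The exact keys Apple expects have varied between OS releases, so treat this as an illustrative sketch rather than a definitive configuration; it declares a subsystem whose info-level handling inherits the system defaults:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>DEFAULT-OPTIONS</key>
    <dict>
        <key>Info</key>
        <dict>
            <key>Persist</key>
            <string>inherit</string>
        </dict>
    </dict>
</dict>
</plist>
```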

This property list defines a logging subsystem and specifies how info-level messages are handled. We’ll go over the details after this example.

Copy this file over to the /Library area of your system.

You’ll need to be in the working directory of your project and have root access.
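Custom logging configuration lives under /Library/Preferences/Logging/Subsystems, so the copy might look like this (the file name matches the subsystem name we chose):

```
sudo cp com.scalyr.SwiftLogger.plist /Library/Preferences/Logging/Subsystems/
```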

This property list defines a logging subsystem named com.scalyr.SwiftLogger that we can refer to when we create a new instance.

So, let’s update the code.
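A sketch; the category name, "testing," is our own arbitrary choice:

```swift
import os

// The subsystem name matches the plist we installed; the category is an
// arbitrary label for grouping related messages.
let log = OSLog(subsystem: "com.scalyr.SwiftLogger", category: "testing")

os_log("We're logging with Swift!", log: log)
```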

We’re creating a logger instance with OSLog.init. We pass it the name of our subsystem and then a logging category. A category is an arbitrary name for grouping log messages.

Next, we pass the logger instance to os_log.

Run the code, and you’ll find a log message, set to the default level, in the console app. Click on it to see the details.


Let’s send this message as a fault.
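Only the type: argument changes:

```swift
os_log("We're logging with Swift!", log: log, type: .fault)
```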

Run this version, and check the console application.


We see the details and the fault log level.

Log Message Storage

In the plist, you can see this section toward the middle.
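That section looks something like this (an illustrative sketch; key names may differ by OS release):

```xml
<key>Info</key>
<dict>
    <key>Persist</key>
    <string>inherit</string>
</dict>
```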

This defines the storage options for the info log level. It tells the system to inherit the default. We only needed to establish a logging subsystem so that we could create a named logger.

But it’s useful to understand the default rules for storing logs.

Default log messages are stored in memory buffers. The system will compress and move them to the data store as memory buffers fill.

Info log messages are stored in memory buffers. The system won’t move them to the data store. It purges them as memory buffers fill.

Debug messages are only captured in memory when debug logging is enabled through a configuration change. The system will never write them to disk.

Unified logging always saves error and fatal messages to the data store.

Log Contents

We can set up a log with a subsystem and category, and we can control the level of the messages we send with it. That leaves log contents.

Even though we’re limited to passing static strings in the log message, we can still use C-style formatting with those strings.

So we can use the log string as a format string.
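For example (the variable name and message are our own):

```swift
let answer = "a secret"
os_log("The answer is %@", log: log, answer)
```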

Run this from Xcode, and you’ll see the log message.

We can log scalar values too.
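For instance, with an integer:

```swift
let answer = 42
os_log("The answer is %d", log: log, answer)
```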

But what we see from inside Xcode is not what happens in production.

Let’s change the code back to printing a string in the log message.

Run this in Xcode, and you’ll see “The answer is a secret” in the console application.

Now, let’s build the application and run it from the command line.

Open a shell and move to the project directory. Run xcodebuild and then run the application.
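Something like this, assuming the project and product are named SwiftLogger and the default build output location; adjust the path for your project:

```
$ xcodebuild
$ ./build/Release/SwiftLogger
```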

Check the log message in console.


The unified logger redacted the string. This is the default behavior for strings in log messages, but not for scalar values.

If you do not want a string to be redacted, use the {public} notation.

If you want a scalar value redacted, use the {private} notation.
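Both modifiers slot into the format string (variable names here are our own):

```swift
let stringAnswer = "a secret"
let scalarAnswer = 42

os_log("The answer is %{public}@", log: log, stringAnswer)   // string, shown in the clear
os_log("The answer is %{private}d", log: log, scalarAnswer)  // scalar, now redacted
```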

This is a built-in security feature that helps protect critical information. As tempting as it might be to mark everything public, resist the urge.

What Now?

We’ve covered the hows and whys of logging with Swift on an Apple platform using the unified logger. It’s a comprehensive system that makes logging easy for an application developer.

Even though we covered a lot of ground, there’s more to know about logging. Start here, and don’t forget the cheat sheet.

The Apple unified logger documentation is here for developers.

Scalyr’s log aggregation tools help you aggregate, process, search, and visualize your logs. This makes it easier to find what you need in the vast ocean of log entries produced by your growing applications.

So now that you know the fundamentals, get started with logging in your Swift applications today!



New Logging Ebook from Scalyr!

I’ve cross-posted a few of the blogs I’ve written for Scalyr here. They offer a powerful log management tool. Plus, their blog is a valuable resource for any developer or administrator who has to deal with logs.
Now you can get my blogs, along with entries from many other authors, in a free ebook! Download it here.


Should You Hold Spelling Errors Against Developers?

How do you feel about spelling errors in code? I wrote about them for SubMain on their blog.

You’re checking out some code. It’s been a long time since you looked at this project. Or maybe it’s the first time. You point your editor at an interesting file name and double-click to open it. Your coffee is warm, your mind is clear, and you’re ready to go.

And there it is, sticking out like white socks in sandals. A spelling error.

How do you handle it? Do you fix it? Do you ignore it? Discuss it with the author? Bring it up during lunch when he’s not around?

What Really Matters?

Speaking personally, this isn’t an easy question to answer. I write code for a living, and I understand what it means to be in a hurry and not have the bandwidth to worry about spelling and human grammar.

But I write words for a living, too. Spelling and grammar errors scream out to me, like a crooked painting in the Louvre.

On the other hand, if there were a Nobel for typos, I would be a contender. I usually fix them before I check in code or submit something to an editor. But not always.

How you react to spelling errors in code comes down to how you answer another, more fundamental question. What’s really important when it comes to evaluating source? Whether or not it works? Whether or not it can be understood? How correct it is?

That last question is the tip of the proverbial iceberg. What does “correct” mean? Most answers would include references to functionality, tests, maintainability, and best practices. Where does spelling fall on the correctness spectrum?

Does spelling matter?

Code Is for Coworkers

A few years ago, Derick Bailey wrote that code is for coworkers. He was referring to how we construct source code and, by comparing obfuscated code to the original source, illustrated that we write code for people. The computer doesn’t care how readable code is, but for humans, readability is critical.

The ability to look at your code and ask yourself “will anyone else be able to understand this?” is what separates a pro from a beginner—or worse, a hack. If another developer can’t understand your code, it’s not maintainable. It’s worthless.

But can misspellings render code unmaintainable?

The Case Against Spelling Errors

One problem with spelling errors is that you have to be aware of them to understand the code. If a “widget” is a “wdiget,” you have to know to search for the misspelling to find it. So, if the author has misspelled an important object or concept, the error is a frequent obstacle to understanding and fixing the code.

But misspellings are rarely that easy to navigate around. Most spelling errors are inconsistent. Widget is usually spelled correctly in 90% or more of the places where it appears. “Wdiget” only appears in one or two locations. This renders those uses invisible to searches. Even the most sophisticated refactoring tools can’t reconcile a spelling error with the correct spelling.

If spelling errors are not fixed immediately, situations like the one above will start to proliferate, rendering parts of your code unmaintainable.

The Case for Letting Spelling Errors Go

Of course, if a spelling error is making parts of your code opaque, you have more significant problems. Code that is so fragile that a handful of misspelled words impairs its maintenance must suffer from other design issues.

Why are you relying on grep or other search mechanisms to navigate through your codebase? Even if you inherited the source from another team or developer, text search is only one of many tools. Object and file names, external documentation, debuggers, and profilers are better ways to figure out how things work. A brute force file search is a last resort.

Spelling errors are a deficiency, and we should treat them as such. We should fix them and make an effort to avoid them, but there’s no reason to hold them out as a special class of problem.

But Should You Hold Spelling Errors Against Developers?

Ah yes, the original question. I thought you might let me off the hook on that one.

No, you shouldn’t hold spelling errors against developers.

What if the offender isn’t a native English speaker? English spelling is atrociously inconsistent and is often an exercise in memorization. A developer’s number one focus should be on writing good code. Mistakes in a second language should be forgiven. If you’re willing to help a coworker learn how to write code, you should be willing to help with English too.

Or they were in a hurry. A spelling error is, at worst, a minor bug. Have you ever written buggy code while you were in a hurry? Me too. How about when you weren’t in a hurry? Yeah, me too again.

If you’re asking this question, it’s safe to assume that the offender is someone you work with. If you can “hold it against” them, it implies some real-life consequences. Maybe they work for you, and you can make things difficult for them. Maybe they’re a co-worker, and you’ll stop trusting them.

Lighten up.

So What Should I Do About Spelling Errors?

So here’s an idea: fix them.

What would you do if you found an off-by-one error? An uncaught exception? A memory leak?

Regardless of how you feel about how serious a spelling error is, your first reaction should be to fix it. You’re in the code for a reason, and unless the goal is some recreational reading, correct the error. The only reason to delay fixing the spelling would be that it risks a regression, such as renaming an object or method that’s dynamically linked.

If you can’t fix it on the spot, you’ve got two options. Open a bug for it, or if that might cause embarrassment for someone, go to them and work out how to address the problem.

The key concept here is this: it’s a mistake. Nothing more, nothing less. We all make them. The best way to deal with a small error is to fix it and move on. The best way to deal with a larger one is to make arrangements to have it fixed.

If you’re working in a culture where pointing out mistakes is a problem, it’s time to move on. Enough said.

Can’t We Avoid Spelling Errors in the First Place?

Most code editors and IDEs have some degree of spell check support. It’s not foolproof, and most source code is going to be full of false alarms. But you can turn it on and work on training it to learn your dialect.

And of course, there’s GhostDoc. GhostDoc can help you build excellent documentation for your code, and it catches spelling errors, too!

The Pitfalls of C++

Here’s a post I wrote about my other working language, C++. You can find the original over on Typemock’s blog.

There’s an old expression developers use when someone makes a mistake. They say you “shot yourself in the foot.”

There’s a variation on the joke that describes the experience of shooting yourself in the foot in various programming languages. The descriptions have evolved, and some versions are funnier than others. But the C++ gag has remained the same since I first saw it, back when I was still wrestling with Rogue Wave Tools.h++, and the STL was only an unsubstantiated rumor. Here is one example:

You accidentally create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical assistance is impossible since you can’t tell which are bitwise copies and which are just pointing at others and saying, “That’s me, over there.”

C++ lets you do just about anything.  If you can convince the compiler that you’ve written legal code, it will compile it. Of course, this means that you can, well, shoot yourself in the foot. Let’s take a look at some of the pitfalls of C++.

We’ll define a pitfall as a bug that compiles but doesn’t do what you expect. There are quite a few of these bugs, but we’ll cover a handful.

Overriding Arguments in Virtual Functions

Let’s start with an example of a C++ pitfall with virtual functions.

Consider two classes. One is a subclass of the other.

Next, we have a main function that accesses the subclass via a pointer to the base.

Now, when we run the program, we see Bar’s version of doit() executed.

That’s what we expect. When we access a subclass via a pointer to its base class, we expect the subclass’ version of a function to be executed.

But we can break this without even trying hard.

Now, let’s add a default argument to Bar’s implementation of doit().

Then, run the program again.

Oops! C++ gave us the implementation of doit() we deserved, but not the one we needed. It ran Foo’s version of doit() because Bar’s doit() now takes an argument, so it no longer overrides Foo’s parameterless version.

This is a contrived example. Most developers wouldn’t overload a method and add a new default argument at the same time.

But, what if we don’t add a default argument, but change an existing one in a subclass?

First, let’s make a few changes to our two classes.

Next, run this new version of our test program.

We got the right method, but the wrong default value.

Well, we did get the right one because the compiler is always correct, even when it’s wrong.

Default parameters are trouble, and you’re best off avoiding them. But if you do need them, remember that default arguments are bound statically: the compiler picks them based on the pointer’s declared type, not the object’s runtime type.

Virtual Destructors

Smart pointers have made working with C++ easier. There’s no reason to worry about memory management anymore, right?

Not so much. Let’s add destructors to our classes.

Next, let’s allocate a Bar on the heap, use it, and then clean it up with delete.

Now, give it a spin.

Since we deleted our Bar instance via a pointer to Foo, and Foo’s destructor isn’t declared as virtual, the compiler called Foo’s destructor instead of the override. This can lead to leaked memory.

If you plan on using polymorphism, declare your destructors virtual.

So let’s make Foo’s destructor virtual and re-run the code.

That’s more like it!

Here’s a good rule of thumb: if you plan on subclassing a class, make the destructor virtual. If you don’t, make it protected, so if someone tries to create a subclass later, the compiler will refuse to build the code.

Also, don’t create a subclass if you’re not sure that the base class has a virtual destructor. If in doubt, use composition instead of inheritance.

Deleting Arrays

We need an array of Bars.

If you’ve been coding with C++ for a while, you might see the error right away. We should delete arrays with delete[], not delete.

This code compiles. If you run a debug build, it may stop with an exception, depending on your platform. A release build may run normally, or it may exit with an error.

Here’s what I got with CLion running in Windows:

Destroying a Bar
Destroying a Foo

Process finished with exit code -1073740940 (0xC0000374)

So, it exited with an error. This bug might not make it past unit tests or integration tests.

We hope.

How do you avoid this? Easy. Use a vector. Problem solved. C++’s primitive arrays are an accident waiting to happen since they act like raw pointers.

Class Members in Initialization Lists

Initialization lists are the preferred way to set up a new class instance’s state.

Here’s an example:
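The original class isn’t shown in this copy; this reconstruction matches the description that follows (_length declared before _capacity, and initialized from it in the list). The class name is an assumption:

```cpp
#include <cstddef>

class Buffer {
    std::size_t _length;    // declared first, so it is initialized first
    std::size_t _capacity;
public:
    // The list names _capacity first, but members are initialized in
    // declaration order: _length(_capacity) reads _capacity before
    // _capacity itself has been set.
    explicit Buffer(std::size_t capacity)
        : _capacity(capacity), _length(_capacity) {}

    std::size_t length() const { return _length; }
    std::size_t capacity() const { return _capacity; }
};
```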

Let’s try this class out with this code in main.

When we run it, the output shows that the compiler didn’t initialize the _length member correctly: it holds a garbage value.

Class members are initialized in the order they are declared, not the order specified in your initialization list. Since it’s defined first, _length was initialized with the value in _capacity. But _capacity wasn’t initialized yet.

Don’t refer to other class members in initialization lists, no matter how neat and concise it looks.

This is another mistake that your IDE and your static analysis tools should warn you about. But the code will still compile. It might even work sometimes.

Calling Virtual Functions in Constructors

Let’s finish up with a constructor pitfall.

First, simplify Foo’s constructor.

Next, edit Bar so it only overrides the status() method. We don’t need a new constructor.

What happens when we create a Bar?

When status() is called, our type is still Foo. So, its version of the virtual function is called.

Don’t call virtual functions in constructors.

Ignoring Your Tools

We have one more C++ pitfall to look at before we’re done.

Two of our pitfalls required ignoring the signs before we fell into the hole. When we deleted an array with the wrong operator and tried to initialize a member with another uninitialized member, both Visual Studio and CLion warned us. (I’m assuming Eclipse would have too.)

Pay attention to your tools. Run static analysis. Turn on your compiler’s warnings and tell it to treat them as errors. Your feet will thank you.

Watch Your Step

It’s possible to code in C++ without steel-toed shoes and a doctor on standby. Both the language and the tools have come a long way in the past decade, and it’s possible to get C++’s superior performance and write clean code that’s stable and easy to maintain at the same time.

TypeMock’s Isolator++ is one of those next-generation tools. You can use it to quickly put together effective tests for your legacy and your new code. Download a trial for Linux or Windows today. Your toes will thank you.

How to Merge Log Files

I mentioned how much I enjoy working with Python last week. Here’s another post where I used it to solve a company sysadmin problem. The original post is over here.

You have log files from two or more applications, and you need to see them together. Viewing the data together in proper sequence will make it easier to correlate events, and listing them side-by-side in windows or tabs isn’t cutting it.

You need to merge log files by timestamps.

But just merging them by timestamp isn’t the only thing you need. Many log files have entries with more than one line, and not all of those lines have timestamps on them.

Merge Log Files by Timestamp

Let’s take a look at the simple case. We have two files from Linux’s syslog daemon. One is the messages file and the other is the crontab log.

Here are four lines from the messages file:

And here are five lines from cron:

When we’re only dealing with ten lines of logs, it’s easy to see where the merge belongs. The five lines in the cron log belong between the first and second lines of the messages log.

But with a bigger dataset, we need a tool that can merge these two files on the date and the time. The good news is that Linux has a tool for this already.

Merge Log Files With Sort

The sort command can, as its name implies, sort input. We can stream both log files into sort and give it a hint on how to sort the two logs.

Let’s give it a try.
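The command looks like this. (The two printf lines create stand-in log files so the snippet can run on its own; with real logs, you only need the last line.)

```shell
# Stand-in inputs; substitute your real messages.log and cron.log
printf 'Apr  4 10:26:33 combo sshd[5313]: session opened\n' > messages.log
printf 'Apr  4 02:01:01 combo crond[3813]: starting run-parts\n' > cron.log

cat messages.log cron.log | sort --key=2,3 > merge.log
cat merge.log
```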

This creates a new file named merge.log. Here’s what it looks like:

It worked!

Let’s dissect that command.

cat messages.log cron.log |

Cat concatenates files. We used it to send both logs to standard output. In this case, it sent messages.log first and then cron.log.

The pipe | is what it sounds like. It’s a pipe between two programs. It sends the contents of the two files to the next part of the command. As we’ll see below, sort can accept a single filename on the command line. When we want to sort more than one file, we use a pipe to send the files on standard input.

sort --key=2,3 > merge.log

Sort receives the contents of two files and sorts them. Its output goes to the > redirect operator, which creates the new file.

The most important part of this command is --key=2,3. We used it to tell sort to order its input using fields two and three. For some reason, sort starts counting fields at one instead of zero.

So sort was able to merge the two files using the day of the month and the timestamp.

This is our easy case. These log files both had single line entries, and our dataset was for less than thirty days. So we don’t have to worry about sorting by months.

Let’s look at something that’s a little more complicated.

Merge Log Files With Multiline Entries

Here are a couple of Java application logs that we would like to merge.

Here’s the first:

This log has a lot of whitespace and entries that span multiple lines.

Here’s the other:

This log is more uniform, with entries that only span a single line.

When we merge these two files, we want the multiline log message to remain together. So, sort’s numeric sorting won’t work. We need a tool that’s capable of associating the lines without timestamps with the last line that has one.

Unfortunately, no command line tool does this. We’re going to have to write some code.

A Merging Algorithm

Here’s an algorithm for merging log files that have multiline entries.

First, we need to preprocess the log files.

  1. Scan the log file line by line until we reach the end.
  2. If a line has a timestamp, save it and print the last saved line to a new file.
  3. If a line has no timestamp, append it to the saved line, after replacing the newline with a special character.
  4. Continue with step #1.

We could do this in memory, but what happens when we’re dealing with huge log files? We’ll save the preprocessed log entries to disk so that this tool will work on huge log files.

After we perform this on both files, we have a new one that is full of single line entries. We’ll use the sort command to sort it for us, rather than reinventing the wheel. Then, we’ll replace the special characters with new lines, and we have a merged log file.

And we’re done!

Let’s do it.

Merge Log Files With Python

We’ll use Python. It’s available on all systems, and it’s easy to write a cross-platform tool that manipulates text files. I wrote the code for this article with version 2.7.14. You can find the entire script here on GitHub.

First, we need to process our input files.

We’ll start by processing command line arguments. This script accepts two:

  • -f is a comma-separated list of input files
  • -o is the name of the file to write the output to

Argparse gives us a list from the arguments passed to -f and opens the output file for us, as we’ll see below.

Python Regular Expressions

Then we’ll create a regular expression. Let’s take a close look at it since this is what you’ll need to change if your logs are formatted differently.

Here’s the whole expression:
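The expression itself didn’t survive in this copy of the post. A reconstruction matching the description below, a date of the form NNNN-NN-NN anchored at the start of the line, would be:

```python
import re

# Reconstructed pattern; the original post's exact expression may differ.
timestamp_re = re.compile(r'^[0-9]{4}\-[0-9]{2}\-[0-9]{2}')
```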


The expression starts with a caret ^. This means the beginning of a line.

But then we have this: [^ ] with some characters in the middle. Square brackets with a caret at the beginning mean not.

So the expression means “if this is not at the beginning of the line.”

The pattern we’re matching is inside the brackets.


Each 0–9 corresponds to a numeral. Each \- is a dash. So it could be read like this: NNNN-NN-NN. It’s a pattern for the date we see at the beginning of each log entry.

So in English, the expression means “if the line does not begin with a date.”

If you need to process logs with a different format, you’ll need to change this. There’s a guide to Python regular expressions here.

Sorting the Results

Now, we’ll start the real work.

  1. Open a temporary file.
  2. Open the first log file.
  3. Join lines with no timestamp to their predecessors, as described above.
  4. Repeat this for each file passed on the command line.

For the third step, we’ll chop the newline ‘\n’ from the end of the last line we saved. Then we’ll add an SOH (‘\1’) character and concatenate the lines. (I could’ve done this in one line, but I spelled it out to make it clear.)

We’re replacing newlines (‘\n’) with the SOH character (‘\1’) instead of NULLs (‘\0’) because nulls would confuse Python’s string processing libraries and we’d lose data.
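A sketch of that preprocessing step; the function and variable names here are mine, and the real script on GitHub differs in detail:

```python
import re

timestamp_re = re.compile(r'^[0-9]{4}\-[0-9]{2}\-[0-9]{2}')

def preprocess(log_path, tmp_file):
    """Fold multiline log entries into single SOH-joined lines."""
    saved = None
    with open(log_path) as log:
        for line in log:
            if timestamp_re.match(line):
                if saved is not None:
                    tmp_file.write(saved)    # the previous entry is complete
                saved = line
            elif saved is not None:
                saved = saved[:-1]           # chop the trailing newline
                saved = saved + '\1' + line  # join with the SOH character
    if saved is not None:
        tmp_file.write(saved)                # don't forget the last entry
```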

Finally, the result of this code is a file named tmp.log that contains the log files preprocessed to be one line per entry.

Let’s finish the job.

The check_output function executes an external command and captures its output.

So we’ll use it to run sort on our temporary file and return the results to us as a string. Then, we’ll remove the temporary file.

We wouldn’t want to capture the result in memory with a large file, but to keep this post short, I cheated. An alternative is to send the output of sort to a file with the -o option and then open that file and remove the special characters.

Next, we’ll split the output on the new lines into an array. Then we’ll process that array and undo the special characters. We’ll write each line to the file opened for us by argparse.
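Put together, the finishing steps might look like this (a sketch; here sort runs with no key options, since these preprocessed entries start with their timestamps):

```python
import os
from subprocess import check_output

def sort_and_restore(tmp_path, out_file):
    # Let sort do the heavy lifting, then clean up the temporary file.
    merged = check_output(['sort', tmp_path]).decode()
    os.remove(tmp_path)
    # Split on newlines and undo the SOH joining as we write each entry.
    for entry in merged.splitlines():
        out_file.write(entry.replace('\1', '\n') + '\n')
```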

We’re done!

Let’s run this script on two files:

./ -f foo.log bar.log -o output.log

And we’ll see this.

Log Files, Merged

In this tutorial, we covered how to merge log files, looking at a straightforward case and then a more complicated situation. The code for this is available on GitHub, and you’re free to download and modify it for your individual needs.

Getting Started with Python and Feature Flags

Python is a fun language, and I’ve been trying to use it every chance I have. This post was one of those chances. I wrote a guide on using Rollout’s feature flags with Python. You can find the original post here.

You’re doing it right. You’ve set up an array of microservices that offer REST APIs to support an enterprise application. Your architecture is composed of loosely coupled services that have enabled your team to roll out updates quickly in response to new feature requests and bugs. But when it comes time to make a change, you still find yourself struggling with dependencies. While it’s possible to version your APIs, you want something more granular. A new API still requires a release of both a new service and a new client application.

Wouldn’t it be nice to coordinate a change between client and microservice without rolling out new code?

It is possible. Feature toggles (also known as feature flags) give you that ability and more. They provide a way to control new behavior in an application by enabling or disabling it without deploying a different version. You can install these toggles in services and clients. For example with REST services, you can toggle new behavior for a subset of users or only enable it after deploying a new client application.

So we add a new parameter to our configuration file or the command line, default it to “off,” and flip it “on” when the new client is deployed, right?

No. That’s not a sustainable solution. It’s a hack. Updating a configuration file isn’t necessarily easier than deploying a new configuration, nor should it be. In many systems, a configuration change is a new release. And what happens when the migration to the new feature is done? Does the feature stay in the configuration file? How big will your configuration file get after a few years? Is it possible it gets turned on by mistake someday?

There’s a better way. There’s feature flag management. Instead of using configuration variables that evolve into technical debt, you manage feature toggles from a feature flag management platform. You can integrate them into a strategy for improving your code, enhancing your ability to support your application, and making your customers happier. With a central management platform like Rollout, you can group flags and coordinate a new feature between client and service, based on a variety of factors like application version and client platform.

We’re going to add a feature toggle to a REST service written in Python. We’ll start with a simple flag that’s managed in configuration and then migrate it to Rollout as part of a secure feature flag management system.

This tutorial assumes familiarity with basic Python and a little bit of Flask. You’ll need to have Python 2.x installed on your system, along with PIP.

Setup and a Basic Web Server

First, you’ll set up a Python virtual environment to run the microservice.

If you don’t have virtualenv installed yet, you can install it with PIP.

Next, create a directory and run virtualenv inside it. Then use the virtual environment’s Python to install Flask.

Now you have a virtual environment installed with Flask. We’ll be doing all of our development here. You can use any editor or IDE you wish for the following examples. I’ll just show code snippets and the output of most commands.

First, let’s create a simple web server, so we’re sure everything is in working order.

So, create a file and add this code:
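A minimal version looks like this (the greeting text is a placeholder; any plain-text response works):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello, Rollout!'

# To serve it: app.run(port=5000)
```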

We created a Flask application and set up a route for the root path. When a browser requests the root document, the server will return a greeting in plain text.

Next, let’s make it executable, so we can run the script.

Finally, it’s time to run it.


Now point your web browser at localhost and port 5000.


We have a simple web server to work with. Next, we’ll add a REST endpoint and a feature flag.

A Python Feature Toggle

Now, let’s turn this into a REST service and add a simple feature toggle.

First, we’ll define a document that represents a normal day. Our service will return this document in response to a REST request. Next, change the one and only route to return that document when it receives a GET request with no parameters. Finally, edit the URL path to look more like a REST application.
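A sketch of those changes; the fields in the document are made up, since the original listing isn’t shown:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical document for an ordinary day
normal_day = {'holiday': False, 'message': 'Just another day'}

@app.route('/calendar/api/v1.0/whatistoday', methods=['GET'])
def what_is_today():
    return jsonify(normal_day)
```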

So, we’ve established calendar/api/v1.0/whatistoday as our endpoint. Let’s hit it from the browser.


Then point your browser at the new URL so we can see what our REST response looks like.

We have a basic REST service! Now, let’s add a feature toggle.

Not every day is just another day. We need a feature for holidays that we can flip when we want to return a different response to our GET request.

First, you’ll need to create a configuration file.

Create a file with a single section named calendar and a single property named holiday. Start with it set to true.
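A minimal version of that file would be:

```ini
[calendar]
holiday=true
```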

Next, add the code to read it to the application.

Add ConfigParser to the imports.

And use RawConfigParser to process the file.
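A sketch of the reading code, wrapped in a function so the config path can be whatever you named the file:

```python
try:
    from configparser import RawConfigParser   # Python 3
except ImportError:
    from ConfigParser import RawConfigParser   # Python 2, as used in the post

def read_holiday(config_path):
    """Return the calendar/holiday property as a boolean."""
    config = RawConfigParser()
    config.read(config_path)
    return config.getboolean('calendar', 'holiday')
```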

So, we have a boolean that indicates whether or not it is a holiday.

Now let’s modify our route, so it uses the configuration property to decide what document to return.

We have two different days defined, and the REST service will return one or the other depending on the configuration setting.

So, start the service and refresh your browser.

Since holiday is set to True, we see the holiday JSON object.

Next, stop the service and edit the configuration file to set holiday to False.

Start the service, and refresh the browser again.


Since we flipped the value of holiday to false, we see the message for a normal day again.

We’ve added our first feature flag. The value of holiday in the properties file controls our application’s behavior. We can return the desired JSON object by modifying the configuration and without deploying new code. But, we still need to manage the configuration file and restart the application after we change it.

Python Feature Flag Management

A properties file is not a feature flag management system. It requires modifying a configuration file in production, which is an accident waiting to happen. If we want to avoid editing live configurations, we have to version and distribute configuration files like code so we can track changes.

So let’s look at a better way to manage Python feature toggles with Rollout.

Getting Started With Rollout

First, you’ll need to create a free Rollout account here.

Once that’s done, sign in to your account and create a new application.

Be sure to select Python, as shown above.

Next, you’ll see instructions for adding Rollout to your application. Leave this dialog box open while you set up your environment so that you can return to it. You’ll want to use it later to finish setting up your application.

Now you need to add Rox to the virtual environment you’ve been using. Execute your virtual environment’s pip install rox from the command line.


Pip will generate a lot of output.

Initialize the API

Next, we need to initialize the SDK with the application key shown above. First, add the import statement import rox.server.rox_server to the top of the file. Then add the call to Rox.setup() with your application key.

We’re going to run this once to initialize Rollout, so the rest of the application can stay the same for now.

Click Next on the installation dialog.

Rollout is waiting for the application to initialize. Run the application.


The application will bring up the web server, but we’re only interested in initializing Rox. When you see that the server is ready for requests, check the dialog and make sure you see the Success message.

Creating a Python Feature Toggle

Click Go To App on the success dialog in the Rollout dashboard if you haven’t already. On the left-hand side of the page, there is a Flags menu item. It’s not enabled yet, because you haven’t created any flags.

So you need to add a feature toggle to your application. This is a two-step process. First,  define the toggle in your application code. Then run the application, and the new flag is registered in the Rollout dashboard.

We’re going to add the feature toggle code in the main source file to keep things simple. In a production application, you’d break things down into different files for each class.

Feature flags are kept in RoxContainers. Let’s add one to the application.

First, import rox_flag into the application.

Then, create a container class with a single member near the top of the file.

This class contains a single member named isHoliday. A RoxFlag holds a feature toggle. You’ll see how to use it in a moment.

Create an instance of the container, and then register it with the Rox API: right before the call to Rox.setup(), call register with the container. Register accepts a name along with it. We’ll call this one Flags.

Now, run the application and let it finish initializing. Just as before, stop it when it starts waiting for requests.

Return to the Rollout dashboard, and click on the Flags menu.


The flag has been registered with Rollout. Since we registered the flag in a container named Flags, its name in the dashboard is Flags.isHoliday. Naming containers provides a mechanism for organizing flags (and other variables managed by Rox) in a large application.

Using a Python Feature Toggle

Now we need to wire the toggle up so we can use it to control our application’s behavior.

A RoxFlag has an is_enabled() method that we call to see if the feature is toggled true or false. So, let’s integrate the flag into our code.

Here’s the entire application after adding the code to use the toggle:

Inside the application route, we replaced the boolean we loaded from the configuration file with isHoliday.is_enabled().

Creating an Experiment

We manage flags by adding them to experiments. An experiment is a scheme for controlling flags (and other variables) in production.

Click on Production in the left-hand side menu and then click Experiments. This will bring up a screen with a Create Experiment button. Click that and then fill out the new experiment window with a name.


And we see a console for setting flags to true, false, or split. Several criteria can set the flag’s value. For this tutorial, we’re going to stick with setting the flag as true or false for all users.

The flag defaults to false or disabled. Let’s run the application and see it in action.



Next, refresh your web browser.


We see the record for a normal day.

Head back to the experiment and set the flag to true for all users.

Next, restart the Python script.

Then refresh your browser again.


It worked! We can manage our feature toggle from the dashboard.

Refreshing the Flag

The REST service only retrieves isHoliday’s value on startup. If we want to toggle the flag, we still need to restart our application. We’d rather be able to toggle the flag in real-time. Let’s fix that.

We can refresh Rox API toggles with Rox.fetch(). We can call this anytime in our application to get the latest state of our flags. But, this is a Flask application, and Flask wants to control the main thread. We need to set refreshing the feature toggle values up as a background task.

It would also be nice to know if the flags have changed each time we run fetch. So, let’s make two last code updates.

First, let’s pass some configuration options to Rox.setup().

We configure the Rox API’s behavior with the RoxOptions class. We’re going to install a configuration fetched handler. This is a callback that Rox will call when the latest state of our flags is fetched.

We created an instance of the class and passed in a lambda to run when the configuration is fetched. We’ll print a message stating that the fetch completed and whether or not there were any changes.

If you run the application now, you’ll see that the handler is called once, on application startup.

Next, we need to install a function that will periodically run and fetch the flag state.

Since Flask wants to control the application threads, starting a thread of our own before Flask starts, or trying to do the work on the main thread before transferring control to Flask, can cause problems. Our best option is to set the fetch up as a background job.

Add this code to the application before the server starts.

This code installs a thread that will run Rox.fetch() every three seconds. This thread won’t interfere with Flask’s request handler.
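A sketch of such a job: any callable works in place of Rox.fetch, and the three-second interval matches the text.

```python
import threading

def start_fetch_loop(fetch, interval=3.0):
    """Call `fetch` (e.g. Rox.fetch) every `interval` seconds in the background."""
    stop = threading.Event()

    def loop():
        while not stop.wait(interval):  # wake up every `interval` seconds
            fetch()                     # refresh the flag state

    worker = threading.Thread(target=loop)
    worker.daemon = True  # don't keep the process alive just for polling
    worker.start()
    return stop           # call stop.set() to end the loop
```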

Now, start the application again, and make a few requests. Then head back to the Rollout dashboard, change the flag, and make more requests. You’ll see that the service returns a different result depending on how you set Flags.isHoliday.


The REST service is picking up the requests and changing its behavior. You can toggle your feature flag in real-time.

Start Managing Your Feature Flags Today

First, we built a small REST service using Python and Flask. Then we added a basic feature flag and upgraded the application to a managed feature toggle with Rollout. We finished by integrating toggle refreshes with Flask’s main event loop.

This is only the tip of the iceberg! Rollout can manage configuration flags that hold more than an enabled or disabled state. Experiments can distribute flag values over different users based on values such as application version and device.

Sign up for a Rollout trial today and take control of your feature flags and application strategy.
