The Biggest Problems of Unit Testing With C++

Maybe you’re new to C++. Perhaps you’ve been working with the language for a while and are looking at unit testing for the first time. Either way, unit testing with C++ often seems to have more problems than it should.

There are significant differences between C++ and “modern” languages like Java, C#, and Python. These newer languages, the ones most often associated with unit testing, have different tools. Even their editors, debuggers, and test frameworks are designed differently. But, there’s no reason not to be writing tests for your C++ code.

Let’s discuss some of the obstacles to writing tests for C++. You can conquer these problems and make your code more reliable.

C++ Code Is Complex

C++ code can be complex, and this complexity is precisely why the language attracts some developers. Used effectively, C++ is a fast, flexible, and powerful platform for high-performance applications. Used poorly, it’s a tool for creating untestable legacy code that everyone is afraid to touch lest they lose their weekend.

Compile-Time vs. Run-Time Type Checking

Even though C++ is a statically-typed language, its type system is weak. Type checking happens only at compile time; unlike Java and C#, there's no runtime check to back it up. If you can fool the compiler, you can get away with anything.

With reinterpret_cast, we can coerce any pointer type into another with few limitations. With C-style casts, you can make things even worse with an operator that's harder to spot. So, if you want to write a test for code that uses these operators, you have to figure out what the original author was trying to do.

The C preprocessor generates code with no type checking at all. Some developers use it to create a private meta language that’s not only not typesafe, but unreadable. Macros can act as an overlaid scripting language that obfuscates the C++ code.

Some consider templates the modern-day replacement for the legacy preprocessor. They’re not wrong, since replacing text macros with compiled code is always a better alternative. But templates replace the preprocessor’s lack of type checking with “duck typing.” This mechanism can still hide your intentions from the compiler and make an end run around type checking.

Java and C# have runtime type checking and provide generics instead of templates. These features can make the code easier to read and retrofit for tests. They don’t mean that all Java and C# applications are easier to troubleshoot or test. For example, a poorly-written (or missing) hashing algorithm can cause many problems with a generic container.

Memory Management

And then there is C++'s manual memory management. C++11 removed much of the burden of managing memory with the official adoption of the smart pointer. Boost has been providing similar tools for even longer. But many legacy developers didn't get the memo. Moreover, while smart pointers are powerful, they're not foolproof.

Many “modern” languages lack these features by design. Some developers see this as a strength. Others see it as a weakness.

Java has references instead of pointers, and the C# pointer type has more restrictions than C++’s. Both languages replace manual memory management with garbage collection.

But it’s possible to leak a reference, and even though references are not pointers, Java still has a NullPointerException. Go figure.

Any code can be complicated, and this is hardly a reason to eschew unit testing. A developer can still abuse Java exceptions to the point of being a substitute for goto. An eager functional programmer can make streams inscrutable.

So, while the creators of Java and C# designed them to reduce complexity, it’s still possible to write untestable code. Don’t blame the language.

Write tests now, and use them to increase your knowledge of that complicated legacy system. The tests will help you unravel the design or create a replacement.

Slow Builds

The C++ build process is more involved and more time-consuming than that of most other languages. It has two steps: compiling and linking. That difference alone is significant, and heavy use of templates can add even more compile time.

Add to this how many teams structure their large projects: a single target that is the product of many other builds. Only after those intermediate targets finish does the system generate a set of executables. One of them is the test binary.

Toss in build targets for a few different platforms, and a complete build cycle can take many minutes, or even hours. This more involved compiling and linking process slows the code/build/test cycle to a crawl.

So, if tests are difficult to run, developers will run them infrequently. They may even ignore them. Eventually, they’re forgotten.

The solution to this problem isn't easy, but it's better than not running tests at all. Break the application down into independent components. Create dynamically-linked libraries, then build and test them in isolation.

This is a lot of work, and it may feel like that time could be better spent writing new code. But working with a slow build process that everybody hates is an obstacle. It hampers many aspects of the development process, especially tests.

Mocking Libraries

Slowing the development process to a crawl isn't the only disadvantage of C++'s compiled architecture.

Interpreted languages, including those that run in virtual machines, are easier to mock and fake. Mocking libraries for these languages have access to private class members. They can also mock concrete classes and functions.

Many C++ mocking libraries lack this ability and are limited to mocking only public and virtual class methods. Instead of mocking concrete methods or classes, they expect you to design your code for testing. Legacy code is often impossible to test without either changing it or writing new code just for tests.

But this isn't a universal problem with C++ mocking frameworks. Isolator++ addresses these limitations and offers the same level of functionality for its C++ version as it does for its .NET edition.

Many Resources Are for Other Languages

A significant problem with unit testing C++ is that most of the online resources for writing tests are geared toward other languages. Enter a testing topic into your favorite search engine, and most of the articles you find are for Java, Ruby, or C#.

While unit testing has been around for a long time, the modern implementation we’re most familiar with came to prominence with Extreme Programming (XP) and its cousin, Test-driven development (TDD). We usually associate these movements with Java and Ruby, even though we can apply the core concepts to other languages.

But the core concepts behind unit testing apply to any language. If you're up to writing clean C++ code, you can apply the advice in an article about testing with .NET to your own code too.

Testing objects and functions in isolation is a universal concept. Writing tests with Arrange, Act, and Assert is possible in any language, not only Java and C#. There's no reason to let the dearth of C++ unit testing resources stop you from testing your code.

No More Excuses

C++ is a powerful language, but as the man in the funny red and blue suit says, with great power comes great responsibility. There is no excuse for writing or maintaining code without tests.

Typemock’s Isolator++ is designed to make adding tests to your C++ code easy, regardless of whether it’s new code that you’ve written today or legacy code from years ago. It can isolate any class for testing, regardless of how its dependencies are designed.

Download an evaluation copy today, and leave your biggest problems with unit testing C++ behind.

 

Photo Credit: Paweł Czerwiński

Getting Started with the Rails Logger

Here’s another post I wrote for Scalyr last summer. I enjoy working with Ruby, and putting together this tutorial was a lot of fun. I hope you enjoy it too!


Let's continue our ongoing series on getting started with loggers for different languages and platforms. Back in March, we covered logging with Ruby; now it's time to take a look at the platform most often associated with that language: Rails.

We’ll start with a simple application with scaffolding for CRUD operations on a single record. We’ll look at Rails’ default logging configuration and how to use logging in an application. Then we’ll look at how logging can be improved and why you might want to improve it.

This tutorial uses Ruby v2.5.1 and Rails 5.2.0. You’ll need to have them installed to follow along. These instructions will use the command line to create and configure the application and will not rely on a specific IDE or editor. We’ll let Rails use SQLite for the backend database.

Create a Rails Application

We'll start off by creating a Rails application. Run rails new logging_app first.

After a few screens of output, Rails has created the application.

Next, we'll let Rails create a model and a controller for us so we have something interesting to log. Move into the logging_app directory and run rails generate scaffold User name:string email:string.

Rails creates the scaffolding for a user record with a name and an email address.

Finally, have Rails migrate the application schema into SQLite with rails db:migrate.

And we have an application! Run rails server and confirm that it's all set up.

Point a browser at http://localhost:3000/users, and we see our main page for managing users.

 

We're ready to get started with logging!

Rails Logger

Rails creates an instance of ActiveSupport::Logger as part of initialization and uses it for logging to standard output.

Take a look at the application output when we open the /users page:

In addition to standard output, we can find the same messages in the default log file. Let's stop the server and take a look at the last ten lines of the log file.

The default name of the log file is the environment name. In development, it’s named development.log; in production, it’s named production.log; and so on.

The default logger is available to application code in a globally accessible instance named logger. Let’s add an extra debug message when the application displays a user.

First, add a log message to the show method in app/controllers/users_controller.rb.
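The snippet itself isn't preserved in this copy. Here's a sketch of the change, with the message text assumed, written against Ruby's stdlib Logger so it runs standalone (in the app, logger is the Rails logger and the method lives in the controller):

```ruby
require 'logger'

# Standalone sketch: in the Rails app, `logger` is provided globally and
# this method is UsersController#show. The message text is an assumption.
def show(logger, user_name)
  logger.debug "Displaying user #{user_name}"
end

show(Logger.new($stdout), "John Doe")
```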

Then start the application. Go to /users and add a user named John Doe with an email address of john@doe.com.

Your log should now have a debug entry recording that the user was displayed.

Rails has a reputation for making life easy for developers, and logging is an example of why. Let’s look at how we can customize Rails Logger.

What Is Application Logging?

Before we delve into how to improve the Rails logger, let’s go back to first principles. What are we trying to do? What is application logging? We’ve got a good definition in an earlier post in this series.

Application logging involves recording information about your application’s runtime behavior to a more persistent medium.

We've already covered that in Rails. We have a way to record information about our application to a file and to standard output, which our DevOps team can redirect wherever they wish. But we can do more.

What Is Application Logging for?

Why do we write logs? It seems like an obvious question, but it’s worth considering.

We write logs so we have a record of what’s happening.

We’ve released our code to production, and we need to get to work on something else. But at some point, something will go wrong, and we’ll be asked to diagnose the problem. Logs are often the only thing we have.

We may also be asked for information about what's happening in production when nothing is wrong. How many pages are loaded per day? Per hour? Per minute? Where are users coming from? What features are used most often, and what features are ignored? While some of this information is available from web analytics, logs can drill down to a finer level of detail.

Application logging is a critical tool for helping us locate and fix errors and for collecting information that can help us improve our applications.

What Information Should We Log?

The messages we’ve seen from Rails are messages about what is happening in the application. They lack context, though. Let’s define what we want from a log message and then configure the system to give us that.

A log entry represents an event. Something happened to the application, and the developer thought it was interesting enough to save for later review.

The most obvious thing we might want to know about an event is when it happened. None of the log entries we’ve seen so far have a timestamp on them. We’ll need to add that.

It’s also useful to have some context. We added this log message to our code earlier.

Rails logged the message text, and nothing more.

What happened to debug? We called the logger at debug level, but that information was not passed on. All of the log entries appear equivalent with the default configuration, reducing their usefulness.

Let's get started changing the Rails logger's configuration.

Configuring Logs

Like most loggers, ActiveSupport::Logger supports different levels. Those levels are debug, info, warn, error, and fatal.

We can log a message at any one of these levels by calling the associated method:
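The original snippet isn't preserved here; calls like these (message text invented for illustration) show each level. Ruby's stdlib Logger shares this interface with the Rails logger, so the sketch runs standalone:

```ruby
require 'logger'

# Standalone sketch: in Rails, `logger` is provided for you and responds to
# the same methods. All message text below is invented for illustration.
logger = Logger.new($stdout)
logger.level = Logger::DEBUG

logger.debug "Queried the users table"
logger.info  "Displaying user"
logger.warn  "Email address is blank"
logger.error "Could not save user"
logger.fatal "Lost the database connection"
```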

This gives us a way to categorize messages, and we’ll add the level to the messages below. But let’s look at filtering messages first.

Debug messages are useful for, well, debugging. Their very name implies they’re for development or for isolating a problem. We don’t typically want to see them in production where they can reduce the signal-to-noise ratio in log files and potentially compromise performance.

We can filter which messages make it to the console or log file in Rails configuration.

If we add config.log_level = :error to our configuration, only messages logged at error or fatal will make it through.

Rails has a hierarchical configuration scheme. We can add logging configuration in config/application.rb or config/environments/{environment_name}.rb. The environment-specific file takes precedence.

So when we add config.log_level = :error to config/application.rb and config.log_level = :debug to config/environments/development.rb, we still see debug messages in development but only error and fatal in production.

We can change the name and location of log files by supplying Rails with a logger instance initialized with our preferences, such as config.logger = ActiveSupport::Logger.new(Rails.root.join('log', 'custom.log')).

This also means we can create new subclasses of Logger if we want to change logging behavior.

But let's change the behavior we're most interested in right now: log message formatting.

Log Message Format

Rails configuration has a mechanism for supplying Rails with a log message formatter instance. The first step is creating that message formatter class.
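The formatter itself isn't preserved in this copy. Here's a sketch along the lines the text describes; the field widths and time format are assumptions, and it's written as a plain class so it runs standalone (in the app it subclasses ActiveSupport::Logger::SimpleFormatter, as described next):

```ruby
# Sketch of the message formatter. In the Rails app this class subclasses
# ActiveSupport::Logger::SimpleFormatter; it's a plain class here so the
# example runs without Rails. Field widths and time format are assumptions.
class MessageFormatter
  # Logger invokes call with the severity, the time of the message, the
  # program name, and the message itself.
  def call(severity, time, progname, message)
    severity_field = format('%-5s', severity)        # fixed-width severity
    time_field = time.strftime('%Y-%m-%d %H:%M:%S')  # human-readable time
    "[#{severity_field}] #{time_field} #{progname}: #{message}\n"
  end
end

puts MessageFormatter.new.call('DEBUG', Time.now, 'rails', 'Displaying user')
```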

We extended ActiveSupport::Logger::SimpleFormatter and overrode its call method. We need an instance of the formatter for the configuration.

Logger passes call four parameters. They are the log level (as severity), the time of the message, program name, and the message. We formatted the severity into a fixed width field and the time as a human-readable string. Then we put the parts together into a familiar message format.

There are a variety of ways we can add this class to our code. The use of the libs directory can be controversial with Rails, but we’ll use it for the sake of getting a formatter installed.

First, create the formatter class in logging_app/libs as message_formatter.rb.

Next, since Rails 5.x does not load classes from lib automatically, add these lines to config/application.rb in the LoggingApp class declaration.
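The exact lines aren't shown in this copy; the standard Rails 5.x settings for making classes in that directory loadable (assuming the libs directory named above) are:

```ruby
# config/application.rb, inside the LoggingApp class definition.
# These are the conventional Rails 5.x settings; the directory name `libs`
# follows the article's layout above.
config.autoload_paths << Rails.root.join('libs')
config.eager_load_paths << Rails.root.join('libs')
```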

Finally, add the formatter to config/environments/development.rb with config.log_formatter = MessageFormatter.new.

When we run the server and open a user entry, we see log messages with our custom format.

We’ve added the logging severity and the timestamp to the message. The program name is not always used, but it will appear if specified in a call.

Conclusion

We looked at the Rails logger and how to call it from our application code. Then we looked at how to customize the configuration and extend the log messages to make them more useful.

There’s a great deal more you can do with logging, and you can learn it here on the Scalyr blog. We can help you with what to do, what not to do, and a great deal more.

So stick around for more, and keep learning!

 

Photo by Alberto Lugli on Unsplash

Unit Testing With C++: The How and the Why

Unit testing is a subject that’s near and dear to my heart. Here’s a post I wrote for TypeMock about that very thing. The original is over here.

Why isn’t unit testing with C++ as common as it is with Java, C#, and Python?

I don’t know. I was hoping you did.

If you’re developing with C++, you should already be writing tests. There’s no reason not to, and many reasons why you need to start today.

What Is Unit Testing?

Before I discuss the why and how of unit testing with C++, let’s define what we’re talking about.

Unit testing means writing code that verifies individual parts, or units, of an application or library. A unit is the smallest testable part of an application. Unit tests assess code in isolation.

In C++ this means writing tests for methods or functions. Tests only examine code within a single object. They don’t rely on external resources such as databases, web servers, or message brokers.

For the sake of this discussion, I’ll assume that you execute your unit tests as part of a build or, even better, a continuous integration system.

Why Unit Test With C++?

Find Stupid Bugs Early

We all write stupid bugs. We create off-by-one errors, define incorrect constants, mess up an algorithm, or write one of those “what the heck was I thinking?” mistakes every day. We’re only human. Even us C++ developers, despite what those C# folks whisper in the lunchroom.

Without unit tests, we don't catch these errors until we get to integration testing or, worse, QA. And it's just as likely that we won't find them until they surface in production.

So, unit tests act as a safety net. By taking the time to write checks for each method or function as we write them, we catch stupid bugs when we create them.

Avoid Regressions

Not all bugs are stupid. Some bugs are quite intelligent. We call them regressions. If you’ve been around for more than a few years, you’ve seen one of these.

Your system has been working in production for a long time, but you need to add a new feature or address a deficiency. So you modify the code and roll out a new version, and something else breaks. If you’re lucky, it’s easy to figure out. If not, cancel your weekend plans.

Your first line of defense against regressions is unit tests. Creating targeted routines that verify discrete behavior, and then running them on every build helps ensure that a new feature doesn’t come with a new bug.

Get Early Feedback

Unit testing with C++ gives you early feedback. Sometimes the feedback is that you haven’t caused a new regression. Other times it’s whether your new code is doing what you think.

Early feedback means confidence. Without unit tests, you don’t know if what you’re doing works. Writing code that compiles the first time feels great. Writing code that does what you want right away feels even better.

Better Design

Writing unit tests for code means writing code that can be broken down into discrete units. Testable code is modular because discretely tested units are loosely coupled.

If this reason for unit testing, combined with the previous one, reads like an advertisement for test-driven development (TDD), that's because it is. TDD works, and it's a reason to start unit testing with C++. Agile processes are associated with languages like Java, Ruby, and C#. Is that because of the languages or the practices associated with them?

No. There’s no reason you can’t implement a rapidly evolving architecture with C++.

Create Built-in Documentation

Raise your hand if you trust comments.

Yeah, I didn’t think so. Even your comments can be unintelligible or outright wrong after a few months.

But have you ever peeked at unit tests to see how something works? Do you breathe a sigh of relief when you check out code and find tests? Tests don’t lie. Sometimes the best way to figure out how the code works is to see it in action.

How to Unit Test With C++

Okay, so you’re sold on unit testing now, right? Then let’s get to it.

Pick a Test Runner

Unit testing with C++ requires a test runner. We write tests as functions, and then we link the functions into a binary that the build executes as a test target. So, we need a main function that knows how to run the tests, check the results, and print a report.

There are too many test runners and frameworks for unit testing C++ to list here. I’ll list a few of the more common ones.

Visual Studio comes with the Microsoft Unit Testing Framework for C++. You can create a project with this test runner built in and add test classes with a few clicks.

Google Test is the most well-known cross-platform test runner for C++. It’s distributed as source code. So, you have to build it for your environment or include it as a vendor dependency with CMake. It comes with a mocking library and, unlike Microsoft’s runner, is open-source software.

Other C++ test runners are CppUnit and CxxTest.

Pick a Mocking Framework

Now you can run tests. So, you need to write them. The key to writing effective tests is to pick a discrete unit and then verify its functionality in isolation. If a test fails and it’s not obvious what caused the failure, the unit under test is not isolated.

Depending on what we are testing, we may need to use a mock. Mocking is creating an object that mimics the behavior of another object. For example, if we are testing an object that interacts with a messaging API, we would mock the messaging connection, rather than write a test that requires connectivity with a messaging broker.

Typemock’s Isolator++ is a C++ mocking library for Windows and Linux. With it, you can fake any class or template and any method, without changing your existing code.

The Google Test project bundles Google Mock with the test runner. You can use it to mock C++ classes and templates, but the library has limitations that make working with concrete and free functions difficult.

You can read a comparison of Isolator++ and Google Mock here.

Use Dependency Injection

If you are writing new code, or able to change legacy code, dependency injection (DI) is your friend. While you may only associate DI with Java, it’s available in C++, too. There’s even a Boost library in the works.

But you don't need a framework to use DI. If you don't want to use an experimental library or move to a new version of C++ yet, you can still use DI in your C++ code.

The most basic form of DI is constructor injection. Fowler describes it in the article I linked above using a framework for Java, but it’s easy enough to roll your own if you don’t want to add a new external dependency.

DI means separating the creation of an object from its use. This pattern makes it easy to replace the implementation of a service with a new one. Fowler even uses the term plugin to describe injected objects in his post.

But DI makes testing your code easier too. You can pass a fake to an object on initialization and then use it to observe the object under test. Dependency injection makes isolating objects for verification easy.

If you are working with complicated and tightly coupled legacy code, a mocking framework like Isolator++ can help you create tests. The framework is ideal for adding tests to legacy code. But writing your code with a pattern like DI will always make things easier and refactoring that legacy code might be a good idea too.

Write AAA Tests

Tests that are hard to decipher are almost as bad as no tests at all. (But only almost.)

Like clean code, there’s such a thing as clean tests. Clean tests are easy to read and focus on a single piece of behavior.

One way to make sure your tests are easy to comprehend is implementing the three A’s: Arrange, Act, and Assert. If you structure your tests with this in mind, they will be coherent and comprehensive.

Arrange gathers the test requirements and prepares them for the test. It’s where you create the object to test and set any preconditions it needs. It’s also where you create your mocks.

Act is the test operation itself. It's usually the shortest part of the test. Ideally, it's where you call a single function or method.

Assert verifies that the test succeeded. It’s where you might see a list of assert statements that check the state of different variables.

The Mandate of Unit Testing With C++

There’s no reason to write C++ code without unit tests. The patterns and practices associated with testing are all possible with the language because, as we all know, there is nothing you can do with another language that you can’t with C++. Most other languages are implemented in C++ under the covers.

So, get started today! Isolator++ supports C++ on Windows and Linux and contains features that will help get you started with unit testing right away. Download a trial license today.

 

Photo Credit: Dawid Zawiła

Time To Slit Some Throats (Not a Political Post)

Here’s a quote you may have heard before:

“Every normal man must be tempted, at times, to spit on his hands, hoist the black flag, and begin slitting throats.”

It's from H.L. Mencken, and I see it used often. What I don't often hear, though, is that it comes from an essay titled “The New Poetry Movement” and is part of a reference to Ezra Pound. I've read it a few times, and I'm still not sure whether he wants to slit Pound's throat or help him slit the throats of the new movement.

Mencken was a colorful guy. The quote immediately evokes an image of pirates storming a merchant ship, swords unsheathed and knives in between teeth.

While Mencken was referring to poetry, the quote has taken on a life of its own. Most use it in the context of getting serious. Of rolling up one's sleeves, to borrow one metaphor. Kicking ass and taking names, to borrow another.

I have mixed feelings about the imagery, to be honest. On the one hand, I read it in the context of motivation, and it works for me in a “let's blast Van Halen's ‘Fair Warning’ and hit the gym” kind of way. On the other, killing people shouldn't be motivating.

Should it?

Either way, it’s a New Year and this one is going to be full of big changes.

Time to slit some throats.

5 Server Monitoring Tools you should check out

I wrote this post on server monitoring tools for Raygun a couple of months ago. You can check out the original here.

 

You work on your software’s performance. But let’s face it: production is where the rubber meets the road. If your application is slow or it fails, then nothing else matters.

Are you monitoring your applications in production? Do you see errors and performance problems as they happen? Or do you only see them after users complain? Worse yet, do you never hear about them?

What tools do you have in place for tracking performance issues? Can you follow them back to their source?

Today, we’re talking about server monitoring tools. Without software that manages your critical applications in real time, you’re flying blind—and maybe losing customers. After all, when a user encounters a non-responsive application, they leave.

When we talk about application performance monitoring (APM), we're talking about a critical core competency. Mean time to recovery (MTTR) is a crucial metric for web service providers, and keeping your MTTR under control is impossible without the proper tools.

There are plenty of server-monitoring tool packages out there, and their APM offerings vary widely. Different packages have tradeoffs, and selecting the right product can be difficult.

So which one suits your infrastructure best? To help you decide, we’ve assembled a list of your best five options. We’ll compare their features and their pros and cons. Hopefully, you’ll leave knowing which is right for your company.

1. Raygun’s Integrated Platform


Raygun offers an integrated solution. With it, you can see a complete picture of your system's health in one place. When you integrate Raygun's platform into your application, you can monitor your software stack from inside your server, right up to each of your individual users.

Application Performance Management Tools

Raygun’s APM gives you real-time and trend data about server performance and also about user experience. You can see your page load data in APM flame charts, too. The charts are navigable timelines that you can use to identify performance bottlenecks. At the same time, you’ve also got stack traces for page loads and other activities. These are made available as call trees, similar to those found in a traditional profiler.

That’s not all. Raygun’s APM boasts tight integration with GitHub. It links call traces directly to source files in the repository. Because it can cross-reference performance issues directly back to code, you’ll spend less time isolating and repairing problems. Presently, the APM tools only support GitHub, but interfaces for GitLab and Bitbucket will be available soon.

In addition, Raygun’s APM offers a built-in detection system. It creates actionable issues, in real time, for problems that impact end-user experience. Each issue then contains a workflow that DevOps team members can use to resolve those issues.

APM works with .NET servers. Support for other languages such as Java and Ruby is coming soon.

Server Monitoring Software

Raygun gives you crash reporting and real user monitoring. These two things work together to provide complete coverage of server health and user experience.

Raygun’s crash reporting integrates into server software. It captures complete diagnostics for errors. That includes stack traces, methods/functions, class names, OS versions, and other relevant details. And the console also groups multiple occurrences of the same issue. That grouping makes it easier for you to report and resolve common bugs.

You can access crash information via a robust reporting system. This system is customizable, letting you filter for dates, software versions, error groups, class names, methods, and more. Crash reporting also supports an exhaustive list of languages and frameworks. We’re talking Angular, React, Ember, .NET, Android, Java, PHP, and many more.

“Real user monitoring” is exactly what its name implies. Raygun lets you monitor all user sessions. Both performance and session-duration data are available, and they can be broken down by user agent.

Raygun displays session information in waterfall graphs with “hotspots” highlighted, emphasizing opportunities for improvement. And the configurable dashboard displays complete information for every session. That means you’ll have crash reports, session duration stats, and information about slow page and component loading, making it easier to isolate bugs.

2. Monitis


Monitis is a SaaS offering that will have your monitoring up and running in minutes. It provides you with custom plans based on the number of nodes in your network and the type of monitoring you want.

Monitis primarily focuses on traditional network and server operating system monitoring. Application monitoring is possible via log scraping and their API. You can use the API to report statistics to the Monitis console, but adding this integration means you’ll need to write code for it.

Server Monitoring Tools from Monitis

Monitis has native agents for monitoring Linux and Windows servers. The agents can report on memory, storage, network, and agents, and they can do it as often as once per minute.

The system can also monitor log files for errors and specific message text. While the agent will monitor system logs, application logs require an extra logging “plugin.”

Application Performance Monitoring

Monitis can monitor Java-based web applications via JMeter scripts. And the server monitoring tools will execute those scripts at 15-minute intervals. But there’s a potential downside here: there’s no support for continuous application monitoring.

If you need support for platforms and languages other than Java, though, don’t worry. Monitis has proprietary SDKs for Perl, Python, PHP, Ruby, and C#. With these interfaces, you can publish statistics to the monitoring system for both graphing and alerts. Naturally, publishing those statistics requires some development effort on your side.

Monitis’ real user monitoring tracks page views, page load times, page build performance, and other user statistics. But it only supports browser clients. If you’re looking for mobile application support, you’ll have to look elsewhere.

3. Zabbix


Zabbix is an open-source monitoring platform that you can download and install yourself. If you’d like, the Zabbix team will also consult with you to create a turnkey solution for your needs. They have a cloud-based SaaS offering in beta, but it doesn’t have commercial support yet.

Zabbix Server Monitoring

Zabbix supports a wide variety of server infrastructures. Depending on your network topology, it will configure itself via “auto-discovery.” You’ll find this capability useful for server hardware and network infrastructure. But most server platforms require additional configuration. Something else to keep in mind—Zabbix has an operating system agent that can be configured for some application monitoring.

The Zabbix server tools detect problems via log scraping and network monitors. They can also use checks you define yourself, like automated web server requests or IP address pings.

Zabbix Application Performance Monitoring

Similar to Monitis, Zabbix provides an API for adding monitoring and metrics to your application. However, their API is a set of REST endpoints on the Zabbix server. REST support means platform and language independence, which is good. However, the burden is on the client to define monitoring criteria and implement the data flows.
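In practice, applications often push custom metrics to Zabbix through its "sender" (trapper) protocol rather than the management API. Here’s a minimal sketch of how a sender packet is assembled; the host and item key names are hypothetical examples, not values Zabbix defines:

```python
import json
import struct

def build_zabbix_sender_packet(host, key, value):
    """Build a Zabbix 'sender data' packet.

    The wire format is the literal header b'ZBXD' plus a protocol
    version byte, followed by the payload length as a little-endian
    64-bit integer, followed by the JSON payload itself.
    """
    payload = json.dumps({
        "request": "sender data",
        "data": [{"host": host, "key": key, "value": str(value)}],
    }).encode("utf-8")
    return b"ZBXD\x01" + struct.pack("<Q", len(payload)) + payload

# Hypothetical host and item key, for illustration only.
packet = build_zabbix_sender_packet("web-01", "app.requests_per_min", 1200)
```

You’d then send these bytes over TCP to the Zabbix server’s trapper port (10051 by default) and read back a similarly framed JSON response.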

Zabbix can also monitor Java applications via JMX with its Java Gateway. But there’s no native support for other platforms or languages.

Rather than tracking users, Zabbix can emulate one via user-defined web requests. Users define requests and response criteria. That criteria may include download speed, response time, HTTP response code, and the occurrence of a string in query results. You can schedule the requests for pre-defined intervals.
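The response criteria in such a scenario boil down to a handful of checks per step. This sketch shows the shape of that evaluation logic; the parameter names are illustrative, not Zabbix’s actual configuration keys:

```python
def evaluate_step(status_code, body, elapsed_seconds,
                  required_status=200, required_string=None,
                  max_response_time=None):
    """Return True if a response meets the scenario's criteria.

    Mirrors the kinds of checks a synthetic web scenario applies:
    expected HTTP status, a required substring in the body, and a
    response-time ceiling in seconds.
    """
    if status_code != required_status:
        return False
    if required_string is not None and required_string not in body:
        return False
    if max_response_time is not None and elapsed_seconds > max_response_time:
        return False
    return True
```

A scheduler would run this against each scripted request at the configured interval and raise an alert when any check fails.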

With Zabbix, keep in mind that there’s no explicit support for mobile clients. Zabbix can only track mobile clients that make web or REST requests. It can’t collect performance characteristics for different browsers.

4. New Relic


New Relic is a cloud-based APM and server monitoring platform.

APM with New Relic

With New Relic’s APM tools, you’ll have automatic instrumentation for Java, Node.js, and several other languages besides. But if you’re a .NET shop, beware—it’s not supported.

New Relic monitors web and mobile users. But it supplies the capabilities in two distinct modules. This means separate installations, configurations, and billing. Frankly, if you’re looking for simplicity, then New Relic might not be the way to go. It’s six different products with individual licenses and costs.

New Relic’s Server Monitoring Tools

The server monitoring tools log exceptions to a dashboard alongside graphs for errors and error rates. However, you can only see exceptions in stack traces. You won’t find links to source code control.

New Relic server monitoring requires an agent that publishes data to its systems. Agents are available for major Linux distributions and recent Microsoft Windows Server versions.

One potential drawback to keep in mind: if the agents can’t be installed on a system, it can’t be monitored. And because New Relic has different ways of handling each product and language, you might be adding complexity to deployments.

5. Datadog


Datadog is another SaaS monitoring service for applications and infrastructure. It supports performance monitoring for web applications, but its primary focus is system monitoring.

Datadog’s Application Performance Monitoring

Datadog monitors servers via an open-source agent. For Linux and Windows, Datadog packages the agent with a “trace agent” that adds APM capabilities. Similar to New Relic, platforms that can’t install the agent are limited to log scraping.

In addition to the agent, you’ll need to instrument your applications to enable tracing. Datadog only supports Golang, Java, Python, and Ruby. Several other languages have unsupported libraries.

You can use the management console to trace instrumented applications running on systems with Datadog’s agent. Note that Datadog does not support continuous tracing; the system only stores periodic samples of application activity.
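To make "instrumenting your application" concrete, here’s a toy sketch of what a tracing library does under the hood: wrap a unit of work in a named, timed span. This is illustrative only, not Datadog’s actual API (real Datadog tracing goes through its own client library):

```python
import time
from contextlib import contextmanager

# Collected (name, duration) pairs; a real tracer would ship these
# to an agent instead of keeping them in memory.
SPANS = []

@contextmanager
def trace(name):
    """Time a block of work and record it as a named span."""
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append((name, time.perf_counter() - start))

# Hypothetical usage: wrap an operation you want to see in traces.
with trace("db.query"):
    time.sleep(0.01)  # stand-in for real work
```

The point is that tracing is opt-in per operation: anything you don’t wrap is invisible to the APM, which is why the instrumentation step can’t be skipped.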

Datadog’s Server Monitoring Software

Datadog’s initial focus was infrastructure monitoring. So unsurprisingly, the platform supports plenty of infrastructure integrations. However, as I mentioned above, it can only monitor server hosts where its agent is installed.

With that, let’s talk about some downsides to Datadog. To be sure, it places a strong emphasis on metrics and real-time monitoring. But you’ll find that Datadog’s reporting capabilities are limited, at least when you compare it to other products. There are no real-time reports—only historical. And the options for customizing them are limited.

Another thing to keep in mind is that Datadog is a complicated system with a steep learning curve. You might need to invest considerable time and effort before you can use it to its full potential.

Which provider is right for you?

Selecting the right server monitoring tools is important, but you already knew that. What you really needed to learn was the best tools available to you, complete with their pros and cons. As you saw, each platform has its advantages and disadvantages. But one thing’s certain. If you’re not monitoring your servers and using APM, then you’re falling behind your competitors.


Photo Credit: Thomas Kvistholt on Unsplash

Do The Thing (Not A Motivational Message)

Just shut up and do it.

The preceding sentence is the complete message, but Yoast SEO tells me I need more words. So here goes.

In 1899, Elbert Hubbard wrote “A Message To Garcia.” He published it in The Philistine (yes, really) as filler. But it caught the eye of a railroad executive, who gave it wider circulation. A shallow reading of the essay might leave the impression that it’s an entertaining screed about lazy employees who can’t follow orders.

But it’s so much more than that. It’s about doing things. Getting them done.

Here’s a sample:

The point I wish to make is this: McKinley gave Rowan a letter to be delivered to Garcia; Rowan took the letter and did not ask, “Where is he at?”

By the Eternal! There is a man whose form should be cast in deathless bronze and the statue placed in every college of the land. It is not book-learning young men need, nor instruction about this and that, but a stiffening of the vertebrae which will cause them to be loyal to a trust, to act promptly, concentrate their energies: do the thing- “Carry a message to Garcia!”

Hubbard (no relation to L. Ron Hubbard, general-purpose fabulist and founder of Scientology) wants to erect a statue to a guy who was given a letter and delivered it without any questions or excuses. He’s a real kidder.

It’s encouraging to read a message from 1899 that’s complaining about millennials and their refusal to get things done. The next time someone tells you about how uniquely lazy and stupid your generation is, you can give them this PDF.

Hubbard’s broader point is “do the thing.”

Stop thinking. Put down the book. Turn off the TED Talk. Close the browser window. Pause the podcast.

Deliver. Execute. Get it done.

The world will be better off with one less slogan and one more finished project.

Photo by Elena Loshina on Unsplash
