Time To Slit Some Throats (Not a Political Post)

Here’s a quote you may have heard before:

“Every normal man must be tempted, at times, to spit on his hands, hoist the black flag, and begin slitting throats.”

It’s from H.L. Mencken. I see it used often. What I don’t often hear, though, is that it’s from an essay titled “The New Poetry Movement” and is part of a reference to Ezra Pound. I’ve read it a few times, and I’m still not sure whether he wants to slit Pound’s throat or help him slit the throats of the new movement.

Mencken was a colorful guy. The quote immediately evokes an image of pirates storming a merchant ship, swords unsheathed and knives in between teeth.

While Mencken was referring to poetry, the quote has taken on a life of its own. Most use it in the context of getting serious. Of rolling up one’s sleeves, to borrow another metaphor. Of kicking ass and taking names, to borrow yet another.

I have mixed feelings about the imagery, to be honest. On the one hand, I read it in the context of motivation, and it works for me in a “let’s blast Van Halen’s ‘Fair Warning’ and hit the gym” kinda way. On the other, killing people shouldn’t be motivating.

Should it?

Either way, it’s a New Year and this one is going to be full of big changes.

Time to slit some throats.

5 Server Monitoring Tools you should check out

I wrote this post on server monitoring tools for Raygun a couple of months ago. You can check out the original here.

 

You work on your software’s performance. But let’s face it: production is where the rubber meets the road. If your application is slow or it fails, then nothing else matters.

Are you monitoring your applications in production? Do you see errors and performance problems as they happen? Or do you only see them after users complain? Worse yet, do you never hear about them?

What tools do you have in place for tracking performance issues? Can you follow them back to their source?

Today, we’re talking about server monitoring tools. Without software that monitors your critical applications in real time, you’re flying blind—and maybe losing customers. After all, when a user encounters a non-responsive application, they leave.

When we talk about application performance monitoring (APM), we’re talking about a core competency. Mean time to recovery (MTTR) is a crucial metric for web service providers, and keeping your MTTR under control is impossible without the proper tools.

There are plenty of server-monitoring tool packages out there, and their APM offerings vary widely. Different packages have tradeoffs, and selecting the right product can be difficult.

So which one suits your infrastructure best? To help you decide, we’ve assembled a list of your five best options. We’ll compare their features and their pros and cons. Hopefully, you’ll leave knowing which is right for your company.

1. Raygun’s Integrated Platform


Raygun offers an integrated solution. With it, you can see a complete picture of your system’s health in one place. When you integrate Raygun’s platform into your application, you can monitor your software stack from inside your server right up to each of your individual users.

Application Performance Management Tools

Raygun’s APM gives you real-time and trend data about server performance and also about user experience. You can see your page load data in APM flame charts, too. The charts are navigable timelines that you can use to identify performance bottlenecks. At the same time, you’ve also got stack traces for page loads and other activities. These are made available as call trees, similar to those found in a traditional profiler.

That’s not all. Raygun’s APM boasts tight integration with GitHub. It links call traces directly to source files in the repository. Because it can cross-reference performance issues directly back to code, you’ll spend less time isolating and repairing problems. Presently, the APM tools only support GitHub, but interfaces for GitLab and Bitbucket will be available soon.

In addition, Raygun’s APM offers a built-in detection system. It creates actionable issues, in real time, for problems that impact end-user experience. Each issue then contains a workflow that DevOps team members can use to resolve those issues.

APM works with .NET servers. Support for other languages, such as Java and Ruby, is coming soon.

Server Monitoring Software

Raygun gives you crash reporting and real user monitoring. These two things work together to provide complete coverage of server health and user experience.

Raygun’s crash reporting integrates into server software. It captures complete diagnostics for errors. That includes stack traces, methods/functions, class names, OS versions, and other relevant details. And the console also groups multiple occurrences of the same issue. That grouping makes it easier for you to report and resolve common bugs.

You can access crash information via a robust reporting system. This system is customizable, letting you filter for dates, software versions, error groups, class names, methods, and more. Crash reporting also supports an exhaustive list of languages and frameworks. We’re talking Angular, React, Ember, .NET, Android, Java, PHP, and many more.

“Real user monitoring” is exactly what its name implies. Raygun lets you monitor all user sessions. Both performance and session duration data is available, and it can be broken down by user agent.

Raygun displays session information in waterfall graphs with “hotspots” highlighted, emphasizing opportunities for improvement. And the configurable dashboard displays complete information for every session. That means you’ll have crash reports, session duration stats, and information about slow page and component loading, making it easier to isolate bugs.

2. Monitis


Monitis is a SaaS offering that will have your monitoring up and running in minutes. It provides you with custom plans based on the number of nodes in your network and the type of monitoring you want.

Monitis primarily focuses on traditional network and server operating system monitoring. Application monitoring is possible via log scraping and their API. You can use the API to report statistics to the Monitis console, but adding this integration means you’ll need to write code for it.

Server Monitoring Tools from Monitis

Monitis has native agents for monitoring Linux and Windows servers. The agents can report on memory, storage, and network usage, and they can do it as often as once per minute.

The system can also monitor log files for errors and specific message text. While the agent will monitor system logs, application logs require an extra logging “plugin.”

Application Performance Monitoring

Monitis can monitor Java-based web applications via JMeter scripts. And the server monitoring tools will execute those scripts at 15-minute intervals. But there’s a potential downside here: there’s no support for continuous application monitoring.

If you need support for platforms and languages other than Java, though, don’t worry. Monitis has proprietary SDKs for Perl, Python, PHP, Ruby, and C#. With these interfaces, you can publish statistics for both graphing and alerts to the monitoring system. Naturally, defining and publishing those statistics requires a development effort.

Monitis’ real user monitoring tracks page views, page load times, page build performance, and other user statistics. But it only supports browser clients. If you’re looking for mobile application support, you’ll have to look elsewhere.

3. Zabbix


Zabbix is an open-source monitoring platform, and you can download and install it yourself. If you’d like, Zabbix can even consult with you, creating a turnkey solution for your needs. They have a cloud-based SaaS offering in beta, but that doesn’t have commercial support yet.

Zabbix Server Monitoring

Zabbix supports a wide variety of server infrastructures. Depending on your network topology, it will configure itself via “auto-discovery.” You’ll find this capability useful for server hardware and network infrastructure. But most server platforms require additional configuration. Something else to keep in mind—Zabbix has an operating system agent that can be configured for some application monitoring.

The Zabbix server tools detect problems via log scraping and network monitors. It can also use checks you define yourself, like automated web server requests or IP address pings.

Zabbix Application Performance Monitoring

Similar to Monitis, Zabbix provides an API for adding monitoring and metrics to your application. However, their API is a set of REST endpoints on the Zabbix server. REST support means platform and language independence, which is good. However, the burden is on the client to define monitoring criteria and implement the data flows.

Zabbix can also monitor Java applications via JMX with its Java Gateway. But there’s no native support for other platforms or languages.

Rather than tracking users, Zabbix can emulate one via user-defined web requests. Users define requests and response criteria. Those criteria may include download speed, response time, HTTP response code, and the occurrence of a string in query results. You can schedule the requests for pre-defined intervals.

With Zabbix, keep in mind that there’s no explicit support for mobile clients. Zabbix can only track mobile clients that make web or REST requests. It can’t collect performance characteristics for different browsers.

4. New Relic


New Relic is a cloud-based APM and server monitoring platform.

APM with New Relic

With New Relic’s APM tools, you’ll have automatic instrumentation for Java, Node.js, and several other languages besides. But if you’re a .NET shop, beware—it’s not supported.

New Relic monitors web and mobile users. But it supplies the capabilities in two distinct modules. This means separate installations, configurations, and billing. Frankly, if you’re looking for simplicity, then New Relic might not be the way to go. It’s six different products with individual licenses and costs.

New Relic’s Server Monitoring Tools

The server monitoring tools log exceptions to a dashboard alongside graphs for errors and error rates. However, you can only see exceptions as stack traces. You won’t find links to source code control.

New Relic server monitoring requires an agent that publishes data to its systems. New Relic provides agents for major Linux distributions and recent Microsoft Windows Server versions.

One potential drawback to keep in mind: if the agents can’t be installed on a system, it can’t be monitored. And because New Relic has different ways of handling each product and language, you might be adding complexity to deployments.

5. Datadog


Datadog is another SaaS monitoring service for applications and infrastructure. It supports performance monitoring for web applications, but its primary focus is system monitoring.

Datadog’s Application Performance Monitoring

Datadog monitors servers via an open-source agent. For Linux and Windows, Datadog packages the agent with a “trace agent” that adds APM capabilities. Similar to New Relic, platforms that can’t install the agent are limited to log scraping.

In addition to the agent, you’ll need to instrument your applications to enable tracing. Datadog only supports Golang, Java, Python, and Ruby. Several other languages have unsupported libraries.

You can use the management console to trace instrumented applications running on systems with Datadog’s agent. Note that Datadog does not support continuous tracing. The system only stores periodic samples of application activity.

Datadog’s Server Monitoring Software

Datadog’s initial focus was infrastructure monitoring. So unsurprisingly, the platform supports plenty of infrastructure integrations. However, as I mentioned above, it can only monitor server hosts where its agent is installed.

With that, let’s talk about some downsides to Datadog. To be sure, it places a strong emphasis on metrics and real-time monitoring. But you’ll find that Datadog’s reporting capabilities are limited, at least when you compare it to other products. There are no real-time reports—only historical. And the options for customizing them are limited.

Another thing to keep in mind is that Datadog is a complicated system with a steep learning curve. You might need to invest considerable time and effort before you can use it to its full potential.

Which provider is right for you?

Selecting the right server monitoring tools is important, but you already knew that. What you really need to know is which tools are available to you, complete with their pros and cons. As you saw, each platform has its advantages and disadvantages. But one thing’s certain: if you’re not monitoring your servers and using APM, then you’re falling behind your competitors.

 

Photo credit: Thomas Kvistholt on Unsplash

Do The Thing (Not A Motivational Message)

Just shut up and do it.

The preceding sentence is the complete message, but Yoast SEO tells me I need more words. So here goes.

In 1899, Elbert Hubbard wrote “A Message To Garcia.” He published it in The Philistine (yes, really) as filler. But it caught the eye of a railroad executive, who gave it wider circulation. It caught his eye because a shallow reading of the essay leaves the impression that Message is an entertaining screed about lazy employees who can’t follow orders.

But it’s so much more than that. It’s about doing things. Getting them done.

Here’s a sample:

The point I wish to make is this: McKinley gave Rowan a letter to be delivered to Garcia; Rowan took the letter and did not ask, “Where is he at?”

By the Eternal! There is a man whose form should be cast in deathless bronze and the statue placed in every college of the land. It is not book-learning young men need, nor instruction about this and that, but a stiffening of the vertebrae which will cause them to be loyal to a trust, to act promptly, concentrate their energies: do the thing- “Carry a message to Garcia!”

Hubbard (no relation to L. Ron Hubbard, general-purpose fabulist and founder of Scientology) wants to erect a statue to a guy who was given a letter and delivered it without any questions or excuses. He’s a real kidder.

It’s encouraging to read a message from 1899 that’s complaining about millennials and their refusal to get things done. The next time someone tells you about how uniquely lazy and stupid your generation is, you can give them this PDF.

Hubbard’s broader point is “do the thing.”

Stop thinking. Put down the book. Turn off the TED Talk. Close the browser window. Pause the podcast.

Deliver. Execute. Get it done.

The world will be better off with one less slogan and one more finished project.

Photo by Elena Loshina on Unsplash

Log4j2 Configuration: A Detailed Guide to Getting Started

I wrote a detailed guide about using and configuring Log4j2 for Scalyr a while back. Here it is.

 

We covered basic logging for Java applications a while back. In that tutorial, we used log4j version 2, a logging framework from the Apache project. Let’s go one step further with Java application logging and look at log4j2 configuration.

Log4j’s capabilities have made it one of Java’s most popular logging frameworks. It can be configured for multiple logging destinations and a variety of log file formats. Log messages can be filtered and directed at the individual class level, giving developers and operations personnel granular control over application messages.

Let’s examine these mechanisms by configuring log4j with a command line Java application.

Sample Application

Let’s start where we left off in the previous tutorial, with an application that logs with log4j.
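Here’s a minimal sketch of that application. “Hello, World!” stands in for the original message text, and the com.company.Main class name matches the logger names used later in this post:

    package com.company;

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class Main {

        private static final Logger logger = LogManager.getLogger(Main.class);

        public static void main(String[] args) {
            // One message at each of log4j's predefined levels.
            logger.trace("Hello, World!");
            logger.debug("Hello, World!");
            logger.info("Hello, World!");
            logger.warn("Hello, World!");
            logger.error("Hello, World!");
            logger.fatal("Hello, World!");
        }
    }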

This is similar to the application at the end of the previous post, with a few additional logging statements. We’re logging the same message at each of log4j’s predefined logging levels: trace, debug, info, warn, error, and fatal.

We will be using log4j’s YAML file format, so you’ll need to add a few additional dependencies to your pom.xml (or build.gradle).
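For Maven, that means log4j’s two artifacts plus Jackson’s YAML support. The versions here are from the 2.11 era; use current releases:

    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-api</artifactId>
      <version>2.11.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
      <version>2.11.1</version>
    </dependency>
    <!-- YAML configuration files require Jackson's YAML data format -->
    <dependency>
      <groupId>com.fasterxml.jackson.dataformat</groupId>
      <artifactId>jackson-dataformat-yaml</artifactId>
      <version>2.9.7</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.7</version>
    </dependency>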

Set this code up so you can build and run it using your favorite Java tools.

Essential Log4j2 Configuration

Default Configuration

Let’s run our application without a log4j configuration file. If you already have one, delete it or move it to another file name so that log4j will ignore it.

When we run the application, we see this on the console:
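Timestamps aside, it looks like this:

    09:12:24.417 [main] ERROR com.company.Main - Hello, World!
    09:12:24.419 [main] FATAL com.company.Main - Hello, World!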

Two of the six log messages, the ones specified as “error” and “fatal,” are sent to the console.

Log4j has a default configuration. It will log to the console, showing messages classified as “error” or higher.

Knowing how log4j will behave without a configuration file is useful, but let’s look at how to set it up for our needs.

Configuration File Location

We can provide log4j with a configuration file in a specific location via the log4j.configurationFile system property. This is the first place it will look for a configuration file.

If log4j cannot find the system property, it looks for a file in the classpath. Since log4j version 2 supports four different file formats and two different file naming conventions, the rules for locating a file are complicated. We’ll go over them after we cover the different options.

Configuration File Formats

Log4j will load Java properties and YAML, JSON, and XML configuration files. It identifies the file format by examining the file extension.

  1. Java properties — .properties
  2. YAML — .yaml or .yml
  3. JSON — .json or .jsn
  4. XML — .xml

A file specified by the log4j.configurationFile system property must have one of these file extensions but can have any base name. Log4j will parse it based on the format indicated by the extension.

When log4j scans the classpath for a file, it scans for each format in the order listed above and stops when it finds a match. For example, if it finds a YAML configuration, it will stop searching and load it. If there is no YAML file but it finds JSON, it will stop searching and use it instead.

Configuration File Names

When log4j scans the classpath, it looks for one of two filenames: log4j2-test.[extension] or log4j2.[extension]. It loads test files first, giving developers a convenient mechanism for forcing an application to log at debug or trace level without altering the standard configuration.

Scanning for Configuration

When we put the rules for file formats and names together, we can see log4j’s algorithm for configuring itself.

If any of the following steps succeeds, log4j will stop and load the resulting configuration file.

  1. Check for the log4j.configurationFile system property and load the specified file if found.
  2. Search for log4j2-test.properties in the classpath.
  3. Scan classpath for log4j2-test.yaml or log4j2-test.yml
  4. Check for log4j2-test.json or log4j2-test.jsn
  5. Search for log4j2-test.xml
  6. Look for log4j2.properties
  7. Search for log4j2.yaml or log4j2.yml
  8. Scan classpath for log4j2.json or log4j2.jsn
  9. Check for log4j2.xml
  10. Use the default configuration.

Practice Proper Configuration File Hygiene

There are 12 potential configuration file names for log4j. Loading the wrong one can lead to lost logging information or diminished performance if an application logs unnecessary messages in a production environment.

Before deploying code, make sure your application has one and only one configuration file and that you know where it is. If you insist on loading configuration from the classpath, scan for spurious files before releasing your code.

Basic Configuration

Now that we know how to supply a configuration to log4j, let’s create one and use it to customize our application.

Log4j’s Default Configuration Revisited

Let’s start with the default configuration and modify our application’s behavior from there. We’ll take the hint from log4j’s configuration rules and use YAML.

The default configuration looks like this:
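Expressed in YAML, it’s equivalent to this (the appender name is arbitrary; I’m using Console_Appender):

    Configuration:
      status: warn
      Appenders:
        Console:
          name: Console_Appender
          target: SYSTEM_OUT
          PatternLayout:
            Pattern: "%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"
      Loggers:
        Root:
          level: error
          AppenderRef:
            ref: Console_Appender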

Create a file named log4j2.yaml with these contents and set log4j.configurationFile to point to its location.

Next, run the application. You’ll see the same output as before.

We’ve taken control of our application’s logging configuration. Now let’s improve it.

Log File Location

The first step is to get our logs off of the console and into a file. To do this, we need to understand appenders.

Appenders put log messages where they belong. The default configuration supplies a console appender. As the name suggests, it appends messages to the console.

We want a file appender. Let’s replace our console appender.
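Here’s a sketch of the YAML, using the appender name File_Appender and the file name logfile.log that appear later in this post:

    Appenders:
      File:
        name: File_Appender
        fileName: logfile.log
        PatternLayout:
          Pattern: "%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"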

File appenders have a name, just like console appenders. But instead of a target, they have a fileName.

Similar to console appenders, they also have a PatternLayout, which we will cover below.

The name is not just for show. If we want to replace the console appender with the file appender, we need to let our logger know where to put our log messages.

So change the ref value in the logger to the file appender’s name.
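The Loggers section becomes something like:

    Loggers:
      Root:
        level: error
        AppenderRef:
          ref: File_Appender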

Now, rerun the application. Instead of logging to the console, it places the messages in a file named logfile.log in the working directory. We’ve moved our logs to a file!

Logging Levels

We still only saw two of our six log messages in our log file. Let’s talk about loggers and how they manage log messages.

Our basic configuration defines a single logger.

It has a level of “error,” so it only prints messages that are errors or fatal.

When a logger receives a log message, it passes it on or filters it based on its configured level. This table shows the relationship between logger configuration and log message level.

(Rows are the log event’s level; columns are the logger’s configured level. An X means the message will be logged.)

    Event Level   TRACE   DEBUG   INFO   WARN   ERROR   FATAL   OFF
    ALL             X       X      X      X       X       X
    TRACE           X
    DEBUG           X       X
    INFO            X       X      X
    WARN            X       X      X      X
    ERROR           X       X      X      X       X
    FATAL           X       X      X      X       X       X
    OFF

So if we change the level of our logger, we’ll see more messages. Set it to “debug.”
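The Root logger becomes:

    Loggers:
      Root:
        level: debug
        AppenderRef:
          ref: File_Appender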

Next, rerun the program. The application logs all of the messages that are debug level or higher.

Logger Hierarchy

Log4j arranges loggers in a hierarchy. This makes specifying a different configuration for individual classes possible.

Let’s change our application and see this in action.
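A sketch of the modified application, with the message strings again standing in for the originals:

    package com.company;

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class Main {

        private static final Logger logger = LogManager.getLogger(Main.class);

        static class LoggerChild {
            // This logger's name is com.company.Main.LoggerChild.
            private static final Logger childLogger =
                    LogManager.getLogger(LoggerChild.class);

            void log() {
                childLogger.debug("Hello from LoggerChild!");
            }
        }

        public static void main(String[] args) {
            logger.trace("Hello, World!");
            logger.debug("Hello, World!");
            logger.info("Hello, World!");
            logger.warn("Hello, World!");
            logger.error("Hello, World!");
            logger.fatal("Hello, World!");

            new LoggerChild().log();
        }
    }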

We’ve added an inner class that creates a logger and logs a message with it.

After Main does its logging, it calls LoggerChild.

If we run this with our current config, we see the new message and that it’s logged from a different class.

Loggers have a class hierarchy similar to Java’s. All loggers are descendants of the Root logger we’ve been working with so far. Loggers that lack any specific configuration inherit the Root configuration.

So when Main and LoggerChild create loggers using their class name, these loggers inherit Root’s configuration, which is to send debug level and higher messages to the File_Appender.

We can override this by specifying configuration for the two loggers.
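Here’s roughly what that looks like in the YAML:

    Loggers:
      Logger:
        - name: com.company.Main
          level: error
          additivity: false
          AppenderRef:
            ref: File_Appender
        - name: com.company.Main.LoggerChild
          level: debug
          additivity: false
          AppenderRef:
            ref: File_Appender
      Root:
        level: debug
        AppenderRef:
          ref: File_Appender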

Loggers are named in the logger section. Since we’re listing two, we use the YAML array syntax.

We set com.company.Main’s logger to “error” and com.company.Main.LoggerChild’s to “debug.”

The additivity setting controls whether or not log4j will send messages from a logger’s ancestor to a descendant. If set to true, both loggers will process the same message. Some systems want to add the same message to two different logs. We don’t want this behavior, so we’ve overridden the default and specified false.

Now run the program again:
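Timestamps will vary, but logfile.log now contains something like:

    09:37:44.017 [main] ERROR com.company.Main - Hello, World!
    09:37:44.019 [main] FATAL com.company.Main - Hello, World!
    09:37:44.020 [main] DEBUG com.company.Main.LoggerChild - Hello from LoggerChild!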

We only saw the error and fatal messages from Main but still saw the debug message from LoggerChild!

More Than One Appender

Just like we can have more than one logger, we can have more than one appender.

Let’s make a few changes to our configuration.

Add a second file appender. To do this, create a list with the original appender and the second one with a different name and file. Your Appenders section should look like this:
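Something like this — Child_Appender and childlogfile.log are names I’m assuming for the second appender:

    Appenders:
      File:
        - name: File_Appender
          fileName: logfile.log
          PatternLayout:
            Pattern: "%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"
        - name: Child_Appender
          fileName: childlogfile.log
          PatternLayout:
            Pattern: "%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"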

Next, point the LoggerChild logger at the new appender. Your Loggers section will look like this.
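And the Loggers section:

    Loggers:
      Logger:
        - name: com.company.Main
          level: error
          additivity: false
          AppenderRef:
            ref: File_Appender
        - name: com.company.Main.LoggerChild
          level: debug
          additivity: false
          AppenderRef:
            ref: Child_Appender
      Root:
        level: debug
        AppenderRef:
          ref: File_Appender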

Now run the application and you’ll see two different log files, each with the messages from their associated classes.

Log Message Formatting

Each of our appenders has a PatternLayout.

PatternLayout is an instance of a Log4j layout class. Log4j has built-in layouts for logging messages in CSV, JSON, Syslog, and a variety of different formats.

PatternLayout has a set of operators for formatting messages that operates similarly to C’s sprintf function. By specifying a pattern, we control the format of log messages when they are written by the appender.

Our layout string looks like this:
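It’s the same pattern log4j uses by default, and it matches the specifiers described below:

    "%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"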

Each % corresponds to a field in a log message.

    Format Specifier     Description
    %d{HH:mm:ss.SSS}     date as hour:minute:second.millisecond
    %t                   thread name
    %-5level             log level, padded to five characters
    %logger{36}          logger name, up to 36 package levels deep
    %msg                 log message
    %n                   platform-dependent line separator

There are many additional operators for PatternLayout.

Variable Replacement

Configuration files can become repetitive as appenders and loggers multiply. Log4j supports variable substitution to help reduce repetition and make them easier to maintain. Let’s refine our configuration with the use of Properties.
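Here’s the revised configuration, sketched with logs as the subdirectory name (the directory name and values are illustrative):

    Configuration:
      Properties:
        Property:
          - name: LogDir
            value: logs
          - name: DefaultPattern
            value: "%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"
      Appenders:
        File:
          - name: File_Appender
            fileName: ${LogDir}/logfile.log
            PatternLayout:
              Pattern: ${DefaultPattern}
          - name: Child_Appender
            fileName: ${LogDir}/childlogfile.log
            PatternLayout:
              Pattern: ${DefaultPattern}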

At the top of the file, we declared two Properties, one named LogDir and another DefaultPattern.

After declaring a property, it can be used in the configuration using braces and a dollar sign: ${LogDir} or ${DefaultPattern}

LogDir is a subdirectory name we added to the names of the two log files. When we run the application, log4j will create this directory and place the log files there.

We specified DefaultPattern as the pattern layout for our two log files, moving the definition to one place. If we want to modify our log file format, we only have to worry about changing it once now.

Log4j can also import properties from the environment. You can find the details here.

For example, if we want to import the log file directory from a Java system property we specify it as ${sys:LogDir} in the log4j configuration and set a LogDir system property to the desired directory.

Automatic Reconfiguration

Log4j can reload its configuration at a periodic interval, giving us the ability to change an application’s logging configuration without restarting it.

Add the monitorInterval setting to the Configuration section of the file and log4j will scan the file at the specified interval.
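For example, to rescan every 30 seconds (the interval value is up to you):

    Configuration:
      monitorInterval: 30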

The interval is specified in seconds.

Conclusion

Log4j is a powerful logging framework that allows us to configure our applications to log in a variety of different ways, with granular control over how different components use log files. This tutorial covered the basic aspects of configuring log4j, but there’s much more to learn. You can learn about log4j configuration on the project’s website.

To learn more about Java logging and logging strategies in general, you’re already in the right place! Scalyr’s blog has many more tutorials and reference guides like this one.

Scalyr offers a log aggregation tool, which means that once you have lots of log files and data, they’ll help you organize, search, and make sense of all that data. So stay tuned for more!

 

Photo credit: Jacob Miller on Unsplash

REST API: Your Guide to Getting Started Quickly

Here’s a post I wrote for Stackify a while ago. You can find the original here.

 

Even though REpresentational State Transfer, also known as REST, is often referred to as a protocol, it’s an architectural style. It defines how applications communicate over the Hypertext Transfer Protocol (HTTP). Applications that use REST are loosely-coupled and transfer information quickly and efficiently. While REST doesn’t define data formats, it’s usually associated with exchanging JSON or XML documents between a client and a server.

We’re going to use a simple service and a web browser to learn about the fundamentals of REST.

Setup

For this tutorial, you’ll need a system with Docker installed. You can find instructions for your computer here.

First, follow the instructions and install Docker.

Then, once you’ve completed the installation, you can download and run our sample REST server.

Finally, start the server with this command:
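The image name below is a placeholder for the sample server’s image; the important part is publishing port 8080:

    docker run -p 8080:8080 <sample-server-image>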

Docker downloads the server image and runs it. The command tells it to make the web server available on port 8080.

So, point your browser here:

http://127.0.0.1:8080/swagger-ui.html

If everything is working, you’ll see a web page like this:

REST API Tutorial Main Page

This is a Swagger page that documents the REST API published by this server. We’ll use it to demonstrate how REST APIs are consumed by applications.

Introduction to Swagger and REST

Click on the word tutorial-controller toward the bottom of the page. It will expand, and you’ll see this:

REST API Overview

Let’s try a simple request before we start examining each aspect of the API.

Next, click on the blue GET box under tutorial-controller. 

REST API Tutorial Get All

This is a description of the Get All Employees API method. The box documents the status codes it returns, the content type, and the API path. We’ll cover this in detail as we go. Let’s make a request.

Click the Try it out! button at the bottom right of the blue shaded area.

REST API Tutorial Get All Result

Swagger made an API request for us and provides a detailed description of the result. We see three employee records.

Let’s get to work!

CRUD!

The name REpresentational State Transfer implies exchanging data. The server acts as a data store, and the client retrieves and stores data. The server transfers object states to the client. The client can update these states too.

Most REST APIs implement CRUD: Create, Retrieve, Update, and Delete.

Go back to the Swagger page and click on the blue GET box so it collapses. Here’s a quick tip: at the top of the page, there is the List Operations option. Clicking there will collapse the operations into a list again.

Let’s look at the list of operations again.

On the left-hand side of the page we see GET, POST, DELETE, GET, PATCH, and PUT. These are HTTP methods that correspond to operations.

We can map these operations into CRUD.

  • POST—Create
  • GET—Retrieve
  • PUT / PATCH—Update
  • DELETE—Delete

We’ll cover each operation as we take a look at how REST APIs work.

REST API Tutorial

Create

Let’s add a new employee to the list.

First, click on the green POST box.

REST API Tutorial Create Employee

In the top right corner, we see the API method name, Create Employee. This is documentation Swagger extracts from the application code. Many REST Servers use Swagger or a similar web application to document their APIs.

Next, we see information about how the method responds to requests. A successful request will yield an HTTP Created response code. This is a standard convention for REST APIs.

Under the response information are the request parameters. This API expects a JSON employee record. There’s an example in the box on the right.

Click on the example, and the page will populate the value box on the left for you.

Let’s edit the example.
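I changed the id and the names to something like this. The field layout here is representative; the Swagger example shows the exact fields, and id 99 is the record we’ll work with below:

    {
      "id": 99,
      "firstName": "Natasha",
      "lastName": "Romanoff",
      "phone": "555-0123"
    }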

Now, click the Try it out! button.

REST API Tutorial Create results

First, at the top of the response, Swagger tells us how we could make the same request using a command line tool called curl. If you are comfortable with the command line, you can try it out. I am going to omit that part of the responses going forward to save space.

Next, we see details about the request we made. The full URL was http://127.0.0.1:8080/api/tutorial/1.0/employees. As we’ll see, the URL is an essential part of a REST API.

Then we see the request headers and the response. The response was empty. If this API were more user-friendly, it might return the employee id instead of making us specify it.

Finally, we see the important part. The response code was 201, which corresponds to Created. The request succeeded!

Let’s go back to the request for all records. Click on the blue GET box on the top again. Then, click on the Try it out! button.

The response body for this request is the list of employee records the server currently has. Click in the box and you can scroll.

Here are the results:

The last record in the list is the new one we added.

Failure

Let’s try to add another employee.

Enter this in the value box:
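Another record with the same id of 99 (the names are again made up):

    {
      "id": 99,
      "firstName": "Peter",
      "lastName": "Parker",
      "phone": "555-0199"
    }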

Then click Try it out! again.

Our response code this time was 403, which corresponds to Forbidden. We can’t add an employee with the same id number.

Here again, a more friendly API might generate ids for us, but this example illustrates how a service enforces data integrity via response codes.

REST URLs

Now click on the POST box again to collapse it and click on the second blue one that says GET.

REST API Tutorial Retrieve

This is the API method for retrieving individual employee records.

First, let’s take a look at that URL: /api/tutorial/1.0/employees/{id}

It has the same base URL as the one for creating employees. All of the URLs in the API start with /api/tutorial/1.0/employees.

HTTP methods are what define operations in well-formed REST APIs, not the URLs. URLs should not contain verbs.

The difference now is that the id of the employee is part of the URL.

You can think of the URLs as forming an index for the records.

There are four records in the server right now:

  • /api/tutorial/1.0/employees/1
  • /api/tutorial/1.0/employees/2
  • /api/tutorial/1.0/employees/3
  • /api/tutorial/1.0/employees/99

So when we want to retrieve, modify, or delete a record, we operate on its URL using the correct HTTP method.

If we want to retrieve all records or add to the set, we operate on the top-level URL: /api/tutorial/1.0/employees.

REST URLs usually include versions. I’ve set up this API to be version 1.0: /api/tutorial/1.0/employees

This convention provides a migration path for applications. A server can offer two different API versions via different URLs. Client applications can migrate to the new API incrementally or all at once.

Retrieving Data

Now let’s request an employee. Enter 99 in the value box and click the Try it out! button.

We get back our new employee and a successful response code.

Let’s try an invalid id.

So enter 122 and click the button.

Now we get an error document, and the response code is 500?

500 means Internal Server Error. The error document shows that our server threw a NullPointerException. The service has a bug!

Well-designed REST APIs will handle errors gracefully. In this case, we received a code that makes it clear there is a problem. But if we try another request, the service will respond.

Separation Between Client and Server

We’ve been exchanging small JSON documents with our server.

Let’s change the contents a bit.

First, go back to the POST area.

Then, enter this in the value text area:
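A record without a phone number. This is the Anthony Stark record, id 122, that the next few sections work with (field names are representative):

    {
      "id": 122,
      "firstName": "Anthony",
      "lastName": "Stark"
    }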

And submit it.

It worked! The server will accept employees with no phone number.

Scroll back up to the first GET box where we can retrieve a list of employees and click the Try it out! button.

The result includes the new record.

An important aspect of REST is the separation between client and server.

In CRUD, and therefore REST, the implementation of the client and server are independent. The data forms the demarcation point between the two applications.

In this example, we demonstrated that the server is forgiving when it comes to required and non-required fields in employee records. This is an example of loosely-coupled design.

But we can go further with the separation between client and server. I wrote this example server in Java. It could be replaced with one written in C#, Python, or Ruby. If the URLs and the document remain the same, the client doesn’t have to change.

Delete

Let’s remove our incomplete record. Close the POST box and open the DELETE area under it.

The DELETE API method looks a great deal like the GET method. The URL is the same, complete with the employee id. Only the HTTP operation differs.

REST API Tutorial Delete

Enter 122 in the value text area and make a request. You should receive a 200 response code for success.

Scroll back up the first GET box. Then click the Try it out! button.

The result is the same list of employees we saw earlier.

Mr. Stark has left the building.

Now go back to the DELETE request area and try an invalid request. Enter 123 for the id.

REST API Tutorial Invalid delete

Response code 204 means no content. Nothing was deleted because we used an id that doesn’t exist.

Stateless

REST APIs are stateless. The server does not need to know about the state of the client and vice versa.

The state of the data is transferred between the applications, just as the name indicates. We added an employee record. We deleted the record.

This decoupling between components has distinct advantages. Many modern web applications, including those on phones, use REST. The expense of keeping a connection open and synchronizing state between client and server is too great, especially for an application with thousands of users.

The disadvantage is that when you design your application, you have to be mindful of the possible synchronization issues between client and server.

Updating Records

Let’s add Anthony Stark back to the server again.

Now go back to POST and add this record.
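The same incomplete record as before:

    {
      "id": 122,
      "firstName": "Anthony",
      "lastName": "Stark"
    }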

The response code will be 201. So if you go and list all employees, the new record is there.

We’ll want to add a phone number for this record.

Go to the brown PUT option on the bottom. PUT requires two values: the id of the record to be updated and a document with the new values.

Enter 122 for the id.

Enter a complete record for the employee.
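Something like this (the phone number is illustrative):

    {
      "id": 122,
      "firstName": "Anthony",
      "lastName": "Stark",
      "phone": "555-0122"
    }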

REST API Tutorial First Update

Click the button. The response code is 200.

If we list our records again, we see the phone number.

Since the id of the record to be updated is a parameter on the path, we can change a record’s id.

So let’s move Mr. Stark to record 100.
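The path parameter stays 122, while the body carries the new id:

    {
      "id": 100,
      "firstName": "Anthony",
      "lastName": "Stark",
      "phone": "555-0122"
    }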

REST API Tutorial Second update

Next, click the button. The response code is 200.

Then, if we list our records again, we see the new id.

Now go back to the PUT area and enter the same request without any changes.

The response code is 204. We tried to modify record 122 again, and it no longer exists.

Patching Records

Submitting partial updates to records is a relatively new operation and is not supported in all APIs.

Some APIs support partial updates via the PUT method, but PATCH is the more technically correct method for partial updates.

So let’s add another incomplete record and then PATCH it.

Add this employee:
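An id and a first name only (the first name is a stand-in):

    {
      "id": 123,
      "firstName": "Emmett"
    }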

Then, use GET to retrieve employee id 123.

We have nulls for both the last name and phone. Let’s add this employee’s last name.

First, click on the PATCH box.

REST API Tutorial Patch option

The options are identical to a PUT, but we can submit partial records.

Next, enter a record with only a last name. Then, add the employee id of 123.
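Per the screenshot below, the PATCH body is just the last name:

    {
      "lastName": "Blake"
    }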

REST API Tutorial Patch Blake

Finally, click the submit button. We will then receive a successful response code.

So let’s retrieve the record again.

We can update a record by only specifying the fields we wish to add with a PATCH.

Response Codes

As you can see, it’s critical for client applications to handle HTTP status codes correctly.

Response codes contain three digits. The HTTP standard divides them into ranges.

Codes that begin with 2 indicate success.

A code beginning with 4 means a client error.

A server error begins with 5.

    Response Code   Description
    200             Success
    201             Created
    204             No content
    400             Bad request
    403             Forbidden
    404             Not found
    500             Internal server error

The Created code is returned for a new record, while Success is returned for modifying, deleting, and retrieving records.

The Forbidden response code indicates a record can’t be created, while No content indicates a failed modification.

Well-architected APIs will document which code they return under each possible condition.

Conclusion

We’ve used a simple API to examine how REST is used to exchange object state between a client and a server.

While we were adding, removing, updating, and retrieving records, we also looked at how APIs use HTTP methods and response codes to differentiate operations and results.

REST is the lingua franca of the modern web. Browsers, phones, and IoT devices all speak it. Now that you have a basic understanding of how to use it, you’re ready to build the next big thing.

 

image credit: dorota dylka

A Guide to Getting Started Quickly with the Feature Toggle in Java

This is the first post I wrote for the Rollout blog. The original is here. I’ve had a lot of fun playing with feature flags for these posts, and it’s something I plan on adding to the environment at the day job soon.

 

Feature toggles (also known as feature flags) are simple. You want to introduce new behavior in a Java application, but you’re not ready to turn it on right away, or you only want to make it available for a subset of users. Maybe another application needs to be ready for the new feature; perhaps the business isn’t prepared to support it yet.

So you add a feature toggle to your configuration file or the command line, default it to “off,” and move on.

I don’t blame you if you think there’s something more to it. Sometimes changing configuration is as difficult as pushing new code, and slapping booleans into your code feels superficial.

You’re right. There is more. There’s feature flag management. Instead of using conditionals that will inevitably turn into technical debt, you can include feature flags as part of a strategy for improving your code and your ability to support it.

This post will tackle feature flag management in Java. We’ll start with a simple flag to control a new feature, and then we’ll integrate it into Rollout’s secure feature management system.

Basic Feature Toggles for Java

For this tutorial, I’m using a small application called SimpleXKCDClient. You can grab a local copy from here. It uses OkHttp to download JSON information about XKCD comics from the XKCD REST service. As the name implies, it’s simple and gives us a way to focus on feature toggles. We’ll use JUnit tests to illustrate how the toggles change the application behavior.

Here’s a Java representation of an XKCD comic:
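It’s a thin mapping of the fields in XKCD’s JSON, trimmed to the ones we use:

    public class XKCDComic {
        private int num;
        private String title;
        private String img;
        private String alt;

        public int getNum() { return num; }
        public String getTitle() { return title; }
        public String getImg() { return img; }
        public String getAlt() { return alt; }
    }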

Initially, the client retrieves my favorite comic from XKCD:
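A sketch of the retrieval, using OkHttp and assuming Gson for the JSON parsing (comic 936 stands in for the favorite):

    import java.io.IOException;

    import com.google.gson.Gson;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.Response;

    public class SimpleXKCDClient {

        private static final String XKCD_URL = "https://xkcd.com/%d/info.0.json";
        private final OkHttpClient client = new OkHttpClient();

        public XKCDComic getComic(int number) throws IOException {
            Request request = new Request.Builder()
                    .url(String.format(XKCD_URL, number))
                    .build();
            try (Response response = client.newCall(request).execute()) {
                // Map the JSON body onto our comic class.
                return new Gson().fromJson(response.body().string(), XKCDComic.class);
            }
        }
    }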

We can verify that we retrieved the correct comic with this unit test:
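Comic 936 is “Password Strength,” so the test checks for that title:

    import static org.junit.Assert.assertEquals;

    import java.io.IOException;
    import org.junit.Test;

    public class SimpleXKCDClientTest {
        @Test
        public void testGetComic() throws IOException {
            XKCDComic comic = new SimpleXKCDClient().getComic(936);
            assertEquals("Password Strength", comic.getTitle());
        }
    }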

So let’s add a new feature. We want to load a different comic during the holidays.
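With a plain boolean, it looks something like this (the holiday comic number is illustrative):

    private boolean holidaySeason = false;

    public XKCDComic getComic() throws IOException {
        // Load the holiday comic when the flag is on, the favorite otherwise.
        return holidaySeason ? getComic(1070) : getComic(936);
    }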

Then we’ll add a new test that checks for a different title:
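Rather than guess the holiday comic’s title, this sketch just checks that the title changed (assertNotEquals comes from org.junit.Assert):

    @Test
    public void testGetHolidayComic() throws IOException {
        // Assumes holidaySeason is set to true for this run.
        XKCDComic comic = new SimpleXKCDClient().getComic();
        assertNotEquals("Password Strength", comic.getTitle());
    }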

We’ve added our first feature flag. If holidaySeason is true, we retrieve a different comic.

An Introduction to Feature Flag Management

Of course, to activate the different behavior, we have to change the code, build, and deploy. That’s not a feature flag. We want to turn features on and off without touching any code.

Before fully delving into the management features that Rollout gives us, let’s make another small but still significant change to SimpleXKCDClient.

Getting the Configuration Out of Code

Getting the feature toggle out of code is the first step in managing it. Opening code and changing it is not only unmaintainable; it’s not toggling features. It’s deploying new releases.

Let’s create a configuration file:
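One entry is all we need:

    holidaySeason=false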

We’ll call it application.properties.

Then let’s override the default constructor to load the configuration file:
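Roughly this, with java.util.Properties doing the work (imports for Properties and InputStream omitted):

    public SimpleXKCDClient() throws IOException {
        Properties properties = new Properties();
        // application.properties is loaded from the classpath.
        try (InputStream stream = getClass().getClassLoader()
                .getResourceAsStream("application.properties")) {
            properties.load(stream);
        }
        holidaySeason = Boolean.parseBoolean(properties.getProperty("holidaySeason"));
    }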

Now we can edit the properties file to set holidaySeason to true or false and watch the results of our tests change.

Managing Feature Flags Centrally

We’ve done a limited version of feature flag management that requires distributing and modifying configuration files to toggle features. Let’s look at a better way.

Getting Started With Rollout

First, you’ll need to create a free Rollout account.

Sign in with your account and create a new app:

Create New App dialog on Rollout.io
Be sure to select Java and Java Server, as shown.

Next, you’ll see instructions for adding Rollout to your application:

Java Installation instructions on Rollout.io

Add the Rollout library to your dependencies. At the time of this writing, the current version of the library is 2.0.1. Here’s my build.gradle:
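The coordinates below are my best recollection; copy the exact line from the instructions Rollout shows you:

    dependencies {
        // 'implementation' in newer Gradle versions
        compile 'io.rollout.rox:rox-java-server:2.0.1'
    }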

Next, add the call to Rox.setup() to initialize the SDK, using your application key.

Back on the Rollout website, click Next for the prompt to build and run your application.

Build and Run Dialog

Build and run it, and you’re rewarded after a few moments.

Success Dialog

Implementing a Feature Flag

Now, let’s turn holidaySeason into a managed feature flag. Managed flags are RoxFlags:
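A container with one flag looks like this (package names for the imports vary by SDK version, so I’ve left them out):

    public class Flags implements RoxContainer {
        // Defaults to false (disabled); the constructor also accepts a
        // boolean argument to set a different default.
        public RoxFlag holidaySeason = new RoxFlag();
    }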

They’re public members of a RoxContainer. In this example, holidaySeason is created with its default value, disabled. RoxFlag will also accept a boolean default value as an argument to its constructor.

Next, we can modify our constructor to create the container and register it.
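Something like this, with the method names as I remember them from Rollout’s docs:

    public SimpleXKCDClient() throws IOException {
        Flags flags = new Flags();
        Rox.register("", flags);        // namespace, container
        Rox.setup("<your-app-key>");    // key from the Rollout dashboard
    }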

Run the application again, and then look for your feature toggle in the Rollout dashboard:

Feature Toggle Dashboard on Rollout.io

It’s there!

In our application, we’re downloading a document via REST and exiting. Since the feature toggle determines which document we request, we want to update the value of our flag from Rollout before we make the REST request.

Rollout’s API is asynchronous, so we need to do a little bit of extra work to ensure that we have the correct flag setting before the REST request is initiated. Rollout has more information on how flags are updated here.

We can install a configuration fetched handler that Rollout will call after the configuration is received. By using this callback to count down a CountDownLatch, we can block until we’re ready.

Let’s move the Rollout initialization to a private method and install the callback. Then we’ll call this new method from our constructor:
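Here’s the shape of it; treat the option and handler names as approximations of Rollout’s API:

    private final CountDownLatch configurationFetched = new CountDownLatch(1);

    private void initRollout() throws InterruptedException {
        RoxOptions options = new RoxOptions.Builder()
                // Count down the latch when Rollout delivers a configuration.
                .withConfigurationFetchedHandler(
                        args -> configurationFetched.countDown())
                .build();

        Rox.register("", new Flags());
        Rox.setup("<your-app-key>", options);

        // Block until the flag values are ready.
        configurationFetched.await();
    }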

We’re ready to start setting our flag from the management console.

Managing a Feature Flag

We manage flags by adding them to experiments. An experiment is a scheme for controlling flags (and other variables) in production.

Click on Production in the left-hand side menu, and then click Experiments. This will bring up a screen with a Create Experiment button.  Click that, and then fill the new experiment window out appropriately.

New Experiment Dialog on Rollout.io
Select Set Audience.

Experiment Dashboard

And we see a console for setting flags to true, false, or split.

If we run our tests now, we see that holidaySeason is false.

Let’s change it to true.

When we run our tests again, the results are reversed!

We can change the behavior of our application without touching code or configuration files.

Before we wrap up, let’s take a look at the experiment on the console again. Flip the flag from true to split.

Split Settings for a Feature Toggle

We don’t just have the ability to change the application behavior from the console; we can also experiment (hence the name) with how often the application loads the different comic.

This is the power of feature flags.

This Is Just the Beginning

This guide is intended to show you how to get started with Rollout in a Java project. Rollout’s documentation has details on how you can do a great deal more with flags, experiments, groupings, and the management console.

You now have an understanding of feature flag management and how it can improve your Java code and help you manage your projects. Get to it!

 

Photo credit: Nolan Isaac.

Creating a New Project with TeamCity

Here’s a post I wrote for the Techtown blog a while back. The original is here. Techtown also offers a course about using TeamCity for Continuous Integration. I am a big fan of JetBrains’ products, especially this one.

 

TeamCity is a continuous integration server that supports building and deploying a variety of different project formats, including those from IntelliJ Idea and Visual Studio. Developers can verify check-ins before committing them to version control, customize build parameters and track version control and build history in one place.

In this tutorial, we’ll learn the basics of using TeamCity: we’ll configure a Java project, build it, view unit test results, break the build with a code check-in, and then fix it again.

Let’s get to work.

Create a New Project

First, go to your TeamCity’s URL and log in. If this is your first time logging in, you will need to get a username and password from your server administrator.

Next, click Administration on the upper right-hand side of the overview page. This will take you to the administration area where you can create a new project.

TeamCity Create Project
Click the Create Project button.
TeamCity Add Project VCS
For this example, add the new project with a repository URL. Enter the URL and the authentication information for the source repository and click Proceed.

Even though TeamCity has specific support for GitHub, we are inputting a GitHub URL as if it were any Git system. We’ll cover GitHub support in a later unit.

Next, TeamCity prompts for a project and build name. Accept the defaults and click Proceed.

TeamCity will check the project out and analyze the build script to determine how to build it. TeamCity supports Maven, Ant, and Gradle for Java. It works with MSBuild or NAnt for .NET. It can also manage Ruby projects with Rake, as well as a variety of other languages and their build tools.

This is a simple Gradle project, so after a few moments, TeamCity offers two different build steps.
TeamCity Select Build Steps
The first is to call Gradle with “clean build,” which is what we want. Click on Use Selected.

And we have a project that is ready to be run.

TeamCity Created Project

Add Project to Overview Page

Before we run the first build, let’s add the project to the overview page so it’s easier to get to it. Click on the TeamCity logo on the upper left-hand corner of the page.

This brings up an almost empty page. Click on the configure visible projects link.

 

TeamCity Add Project Overview

Use the selector to add TeamCity Project to the list of visible projects and then save the results. This will make the project easier to access as we work.

Click on TeamCity Project. This takes us to the project view page.

Above the project information, we see that this server has one build agent and zero builds in the build queue. Agents execute the builds, and if there aren’t enough agents to run all of the pending builds, TeamCity queues them on a first-in-first-out basis.

Let’s take a quick look at our project code too.

We have a single Java class below:
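It’s as simple as it gets — a sketch consistent with the getStatus() method we’ll flip later in this tutorial:

    public class TeamCity {
        public boolean getStatus() {
            return true;
        }
    }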

And a single test:
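A JUnit test along these lines:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class TeamCityTest {
        @Test
        public void testGetStatus() {
            // Fails if getStatus() stops returning true.
            assertTrue(new TeamCity().getStatus());
        }
    }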

Run an Initial Build

Let’s start a build. Click the Run button on the upper right-hand side of the project page.
TeamCity Build Results
After a few seconds, the build completes, and we see that it has passed.  One test was successfully run.

From the build results page, we can view our test.
TeamCity Test Results
TeamCity lists the test class name and the test method(s), along with the results.

We can also click the Build Log tag for a record of the build.
TeamCity Build logs

The log viewer collapses some log entries. Clicking on the + icon expands them.

Build Configuration

TeamCity build behavior is configurable. Click on Edit Configuration Settings. 

On the build configuration screen, click on Show advanced options.
TeamCity Build Config
Change the Build number format.

Build Number Format TeamCity
TeamCity defines variables that can be used to create build numbers. The default is a counter that increments with each build. Prefix the build number with Tutorial.

Let’s take a look at the Artifact paths too.

 

Artifact paths TeamCity

Similar to build numbers, TeamCity offers a mechanism for creating build output locations using variables. These variables can be used to move the build results to different paths and different file names.

Finally, click on Triggers in the left-hand side menu.

TeamCity build triggers
TeamCity is watching all branches of the Git project for changes. If we click the Add Trigger button, we see a selector to add different trigger types.

TeamCity trigger select

 

For now, watching the Git repository is all we need.

Pre-Stage a Commit

One of TeamCity’s more powerful features is the ability to pre-stage a commit. We can modify our code and test it in a build on the server without committing it to the remote repository. Let’s give it a try.

Open the Java project in IntelliJ.

First, add the TeamCity plugin in IntelliJ if it is not already installed.

 

IntelliJ TeamCity Plugin

Next, go to the TeamCity menu and authenticate to the TeamCity server.

 

TeamCity plugin login

Now we can modify the code. Open the TeamCity class and change the getStatus() return value to false.

Commit the change but do not push it.

Next, select Remote Run Outgoing Changes… in the TeamCity menu.

Remote run

Select the project name and click OK to run the build. After a few moments, IntelliJ will notify you that the build has failed.

The TeamCity tool window displays the test that failed.

Remote Run test failed

And the test output alongside it.

Remote test output TeamCity

We see what we expect. Our one-and-only test has failed.

Go to the TeamCity server, and view the list of builds for TeamCity Project.

TeamCity Personal Build

 

The pre-staged commit, in the form of a Personal Build, is at the top of the build list. The build used our new build number scheme; it’s called Tutorial #2.

Personal builds are only visible to the users who create them. They can be triggered as pre-staged commits or from the TeamCity server via the drop-down control under the Run button.

Breaking and Fixing a Build

Let’s ignore TeamCity and push the broken change to Git.

After a few minutes, TeamCity will pick up the change in Git and run a build.

Build running

 

The build fails.

TeamCity Build Failed

Click on the build to see details.

Failed Build Detail

 

We see the test that failed.

Click the Changes tab under the build name.

TeamCity Changes

 

This displays the commit that triggered the build.

Go back to the build detail. Click Assign Investigation in the red banner near the top.

 

We can assign responsibility for fixing the build to a user. Assign the investigation to yourself. User profiles have a list of open Investigations. TeamCity will close this investigation automatically when a build runs successfully.

Go back to the code and change the return value of getStatus() back to true. Check in and push the change.

A new build will be triggered soon.

TeamCity new build

 

And it passes.

TeamCity Build List

 

The Investigation is closed. We can view the log for this successful build and also view the changes that fixed it.

Conclusion

In this tutorial, we imported a Java project to TeamCity. We built the project, examined build and test logs, and then used TeamCity pre-staged commits to check a code change before committing it. We also broke the build and reviewed how to manage build issues with TeamCity.

TeamCity is a powerful CI system that will improve how you and your team manage builds and deployments. Get started with it today!

 

Photo credit: Randy Fath.

What You Need

Someone mentioned INXS on Twitter this week, and I spent several hours coding to their music. Whoever you were, thanks for that.

What is Mocking?


Here’s a post I wrote for Typemock back in August. They published it in two parts here and here. TDD has become a passion of mine, and I enjoy writing about testing and testing concepts. This post was an excuse to go back and read some posts and books from Martin Fowler and Kent Beck, which were added bonuses. Enjoy!


Mocking is a term that comes up often when we talk about unit testing. What does it mean?

The verb “mock” has several definitions, but only one relates to how we use it in software engineering:

a: to imitate (someone or something) closely: mimic
a mockingbird was mocking a cardinal —Nelson Hayes

Mocking means replicating the behavior of something. We replace an object with a mock for testing purposes.

Sometimes there is debate over the differences between a mock, a stub, and a fake. We’ll cover all three, with an emphasis on mocking and how you can use it to improve how you design and test your code.

Mocking Defined

Mocking is creating an object that mimics the behavior of another object. It’s a strategy for isolating an object to test it and verify its behavior.

Definitions of mocking differ over whether a mock must verify behavior. Fowler calls mocks “objects pre-programmed with expectations which form a specification of the calls they are expected to receive,” while Pivotal says they are “self-verifying spies.” Their definitions are good enough for me.

So let’s expand our initial definition:

Mocking is creating an object that mimics another and setting it with expectations about how it will be used.

“Regular” Unit Testing

Regular unit testing verifies results. A typical example is writing tests for a calculator. A method that adds two integers can be verified by passing it 2 and 2 and checking for 4.
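In JUnit, with a hypothetical Calculator class, the whole test is one assertion:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class CalculatorTest {
        @Test
        public void addsTwoIntegers() {
            // No collaborators, no mocks: pass inputs, check the output.
            assertEquals(4, new Calculator().add(2, 2));
        }
    }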

A calculator is unit testing nirvana. If all tests were as simple as a finite set of inputs and a computed output, verifying behavior would be simple. But even in an imaginary functional utopia, objects interact with the outside world. We need a way to keep our unit tests isolated.

Isolation

Isolation is a critical testing concept. When we test an object, it should be separated from other objects and external dependencies.

With the calculator example, this is easy. But it’s more likely that objects will interact with each other and with external dependencies. Isolation is a vital testing goal that often creates the need for mocking.

If we create an object that interacts with a database, making the database a test dependency is not acceptable. We create an object that mimics the database interface.

But we don’t only isolate from external dependencies. It’s a best practice to confine unit tests as much as possible to one object at a time. When our tests separate components, fixing new problems as they occur is easier.

Imagine moving to a new compiler version and discovering a handful of broken tests. If each test represents a single object, fixing them is simple. If the tests touch three or four objects, repairing the problems is more difficult.

Stubbing and Faking

A stub is an object that mimics another, but with predefined or “canned” responses. We use stubs for tests that rely on an interface to provide one or more calls that return void.

Canned responses can go beyond void, though. Suppose we have an object that inserts single records to a SQL database and the result of a successful insert is always “1,” indicating a single row. We can isolate this object with a simple stub.

A fake is a more sophisticated stub. It implements a subset of the behavior that a production object does, but it’s lighter weight. One example is a fake web service or an in-memory database that replaces a remote version.

A Mocking Example

We mock when we want to verify interactions with other objects while remaining isolated from them.

Let’s take the example of an object that inserts records to a database one step further. What if, depending on the input, we expect our object to produce a different insert query?

Let’s examine a test with Arrange-Act-Assert.

Arrange:

  • The input to object under test
  • The expected insert query
  • The return value from the database mock for the insert query (success)
  • An instance of the object to test
  • The database mock
  • Initialize test object with database mock

Act: Pass test input to object under test

Assert:

  • Test object passed expected insert query to database mock
  • Test object returned success

We’ve tested the object’s ability to create the query we expect based on a predefined input and to return a successful result.
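In code, the test looks something like this. I’m using Mockito-style syntax for illustration, and Database and RecordWriter are hypothetical stand-ins for the mocked dependency (an interface with an int execute(String) method) and the object under test:

    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.*;

    import org.junit.Test;

    public class RecordWriterTest {
        @Test
        public void buildsExpectedInsertQuery() {
            // Arrange: create the mock and prime it with the expected query.
            Database database = mock(Database.class);
            String expectedQuery = "INSERT INTO people (name) VALUES ('Alice')";
            when(database.execute(expectedQuery)).thenReturn(1); // one row inserted
            RecordWriter writer = new RecordWriter(database);

            // Act: pass the test input to the object under test.
            boolean result = writer.save("Alice");

            // Assert: the expected query reached the mock, and we got success.
            verify(database).execute(expectedQuery);
            assertTrue(result);
        }
    }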

The arrangements contain the mock’s expectations about the test. Depending on the framework, we check assertions at the end of the test or the mock fails immediately when an expectation is violated.

We have to initialize the test object with the mock. This may be explicit, such as in a framework that uses dependency injection, or it may be done implicitly by the test framework that injects mocks “under the covers.”

Since our object creates different queries based on its input, we need at least one more test.  We can duplicate this test and supply a different input and matching expectation. Depending on the range of different inputs and queries, there may be many tests like this.

But what happens when the database fails? We need another test.

More Than One Mock

When we initialize a mock with an expected call, we tell it how to return. For this test, we’ll set the mock to fail regardless of the input. We’ll also need a second mock.

Arrange:

  • The input to object under test
  • The expected insert query
  • The return value from database mock for the insert query (failure)
  • The expected log message
  • An instance of the object to test
  • The database mock
  • The logging mock
  • Initialize test object with mocks (if applicable)

Act: Pass test input to object under test

Assert:

  • Expected insert query was passed to database mock
  • Test object returned a failure indication
  • Test object logged error

If a log message is how we detect errors in production, it needs to be part of our test. Opening log files and parsing them is not the correct way to do that. (Don’t laugh, I’ve seen it done.) We treat the logger like any external dependency and mock it. Then we verify it received the correct message in our assertions.

Lessons From Mocking

Even with just two mocks, our test arrangements started to get lengthy.

The fewer the dependencies and interactions an object has, the easier it is to test. This is true regardless of the type of testing, but it is especially true with mocks. Each mock has to be created and primed with at least one expectation and a return value.

Obeying the Law of Demeter is always a good idea when designing objects. Carefully defining friends and avoiding “friends of friends” makes tests easier to define and read.

Using test-driven (or behavior-driven) development is another way to create loosely-coupled code. Placing tests first in the process lends itself to objects that are easy to isolate.

We’ve discussed the problems that static state and singletons can cause before. With static state, tests may suffer race conditions and other situations that lead to inconsistent test results. Singletons complicate creating mocks and ensuring that the test uses the correct instance. Fortunately, tools like Isolate solve these problems.

Mocking is a valuable testing tool that helps with test isolation.  It’s often the best option when testing objects that interact with external resources. But don’t overlook it when creating tests for objects that communicate with each other inside application boundaries.
