What is structured logging and why developers need it

Log files are one of the most valuable assets a developer has. When something goes wrong in production, the first thing you usually hear is "send me the logs." The goal of structured logging is to bring a more defined format and additional detail to your logging. We have been practicing structured logging at Stackify for quite a while and want to share some of our thoughts and best practices.

What is structured logging?

The problem with log files is that they are unstructured text data, which makes it hard to query them for any sort of useful information. As a developer, it would be nice to be able to filter all logs by a certain customer ID or transaction ID. The goal of structured logging is to solve these sorts of problems and enable additional analytics.

For log files to be machine readable for more advanced functionality, they need to be written in a structured format that can be easily parsed. This could be XML, JSON, or another format. But since virtually everything these days is JSON, you are most likely to see JSON as the standard format for structured logging.
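The snippets later in this post are C#; as a language-neutral sketch of the same idea, here is how a plain log message can be emitted as a parseable JSON line using only Python's standard `logging` module. The `JsonFormatter` class and the `fields` key are illustrative choices for this sketch, not part of any framework:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""

    def format(self, record):
        entry = {
            "level": record.levelname,
            "time": self.formatTime(record),
            "msg": record.getMessage(),
        }
        # Merge any custom fields attached via logging's `extra` argument.
        entry.update(getattr(record, "fields", {}))
        return json.dumps(entry)

logger = logging.getLogger("structured_demo")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

# The same "Incoming metrics data" message, now as a parseable JSON line.
logger.debug("Incoming metrics data", extra={"fields": {"clientid": 54732}})
```

Because every line is valid JSON, downstream tools can index each field instead of treating the whole line as opaque text.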

Structured logging can be used for a couple of different use cases:

  1. Process log files for analytics or business intelligence – A good example of this would be processing web server access logs and doing some basic summarization and aggregates across the data.
  2. Searching log files – Being able to search and correlate log messages is very valuable to development teams during the development process and for troubleshooting production problems.

Structured logging example

A simple example will help make clear what structured logging really is.

Normally you might write to a log file like this:

log.Debug("Incoming metrics data");

This would produce a line like this in your log:

DEBUG 2017-01-27 16:17:58 – Incoming metrics data

Depending on your logging framework, logging some additional fields might look like the line below. This gives you the ability to easily search on these custom fields.

log.Debug("Incoming metrics data", new {clientid=54732});

This would produce a line like this in your log, now including the extra field:

DEBUG 2017-01-27 16:17:58 – Incoming metrics data {"clientid":54732}

If you are using structured logging and sending your logs to a log management system, your logging library would serialize the entire message and additional metadata as JSON. This is part of the power of using structured logs with a log management system that supports them.

[{
		"Env" : "Unknown",
		"ServerName" : "LAPTOP1",
		"AppName" : "ConsoleApplication1.vshost.exe",
		"AppLoc" : "C:\\BitBucket\\stackify-api-dotnet\\Src\\ConsoleApplication1\\bin\\Debug\\ConsoleApplication1.vshost.exe",
		"Logger" : "StackifyLib.net",
		"Platform" : ".net",
		"Msgs" : [{
				"Msg" : "Incoming metrics data",
				"data" : "{\"clientid\":54732}",
				"Thread" : "10",
				"EpochMs" : 1485555302470,
				"Level" : "DEBUG",
				"id" : "0c28701b-e4de-11e6-8936-8975598968a4"
			}
		]
	}
]

It is important to know that there is no real standard for structured logging, and it can be done in a lot of different ways. To get the most value out of it, you need to use a logging framework (such as log4net or log4j) that supports logging additional properties, and then send that data to a log management system that can accept your custom fields and index them.

How to view structured logs

If you are programming with .NET or Java, you can use Prefix to view what your code is doing via transaction tracing along with your logging. Prefix can even show you any custom properties that are being logged as JSON. Prefix is free and is the best log viewer developers can get.

[Screenshot: Prefix error log showing a full transaction trace]

How we use structured logging at Stackify

At Stackify we use structured logging primarily to make it easier to search our logs. We care more about the benefits it provides for our developers.

When we look at our logs, they look like the example below. You can see all the custom fields we log because they show up as JSON.

[Screenshot: structured logging demo]

This enables us to very easily search by any of those fields via our log management system.

A simple search like "clientidNumber:54732" shows us only those logs, helping us quickly narrow down problems for a specific client. We can search across every app and server we have from one place.

[Screenshot: structured logging demo, filtered by client ID]

TIP: Log extra fields on exceptions!

One of the best uses of structured logging is on exceptions. Trying to figure out why an exception happened is infinitely easier if you know more details about who the user was, input parameters, etc.

    try
    {
        //do something
    }
    catch (Exception ex)
    {
        log.Error("Error trying to do something", new { clientid = 54732, user = "matt" }, ex);
    }
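As a language-neutral sketch of the same pattern in Python, the idea is to serialize the exception alongside the extra context fields into one JSON log line. The `format_error` helper below is illustrative only, not part of any logging framework:

```python
import json
import logging

def format_error(message, exc, **fields):
    """Build one JSON log line carrying the exception and extra context."""
    return json.dumps({
        "level": "ERROR",
        "msg": message,
        "error": repr(exc),
        **fields,  # e.g. clientid, user -- whatever helps troubleshooting
    })

logger = logging.getLogger("structured_demo")

try:
    1 / 0  # do something that fails
except ZeroDivisionError as ex:
    logger.error(format_error("Error trying to do something",
                              ex, clientid=54732, user="matt"))
```

Now a search for the client ID surfaces the exception together with the context that explains it.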

Final thoughts on structured logging

It doesn’t really take any longer to log custom properties as you write your logging. These extra properties can provide more details that make it easier to troubleshoot application problems. If you are using a log management system that supports searching by these custom fields, you can also search your logs by these new properties.

If you need help with structured logging, be sure to try out Retrace which includes logging for free as a standard feature.


