
From Scala Unified Logging to Full System Observability
Part 1 of 3: Our Original State of Logging

Jonathan is a platform engineer at VictorOps, responsible for system scalability and performance. This is Part 1 in a 3-part series on system visibility, the detection part of incident management.

These days, with infrastructures spanning tens, hundreds, even thousands of running instances, piping a log file into less is no longer an acceptable way to research and debug. Instead, shipping logs to an aggregation service like Sumo, Elastic, or Splunk is commonplace, because searchability is king.

Unfortunately, the pursuit of searchability can lead to undesirable side effects: unreadable, inconsistent, and just plain ugly log statements. It invades our codebases with even more custom formatting (on top of string interpolation, etc.) that's not only distracting but also hard to do well with anything less than super-human string-formatting skills. In short, the side effects on log statements can be detrimental.

Logging is just the starting point

First off, let's bring a little context to this pursuit. At VictorOps, we use logging as a research and debugging tool. However, logging isn't and shouldn't be the primary heartbeat of your systems. That's where metrics come into the picture, which we'll discuss in Part 3 of this series. Before going there, let's talk about where we started with logging at VictorOps. We needed some major improvements in this first line of troubleshooting for when things go bad.

As Dave Hahn, a senior SRE from Netflix, recently shared with us, “Be willing to have a problem before you solve it.” In line with that advice, we recently identified multiple problems to solve relating to the research and debugging done through our logs. To top it all off, when I noticed that our logging interfaces were not unified, it became clear that it was time to make both our logging interface and log output great again.

I hope that our experience at VictorOps will give you ideas on how to improve logging at your organization.

The current state: how we use logs at VictorOps

Sumo Logic is our logging platform and we use it heavily throughout the development lifecycle.

There are four primary ways we use logs:

  1. To get visibility into what's going on during releases. Through logs, we can see if there are errors that persist after a release. If so, there is probably a hole in our alerting: some problem that we aren't yet monitoring.
  2. To create VictorOps incidents for relevant alerts. When we know that a particular log statement indicates a problem that requires someone to get involved, we hook a scheduled Sumo search up to the VictorOps platform to create an incident from it. The goal for most of these alerts is to migrate them to metric-based alerts instead of basing them on a log statement; more on that in the metrics discussion in Part 3 of this series.
  3. To see how something is working in production. We might want to see how some new feature is working in production, so we’ll review the log statements. The production environment is always the most valuable for feedback because that’s where real customers have real accounts, alerts, users, and escalation policies. It may look fine in staging, but if there is a use case we didn’t test for that shows up in production, you can see the details in a log.
  4. To investigate high-dimensionality information. Organization, user, and API key (and, for that matter, any sort of UUID) are all great examples of metadata that typically won't be available in a metric, so logging (or eventing) is where we'll find that data (see the sketch after this list).
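To make that last point concrete, here is a hedged sketch of the kind of statement we mean; the object and field names are hypothetical, not our actual code. Keeping identifiers in a consistent key=value layout makes them easy to extract later in a Sumo query:

    import org.slf4j.LoggerFactory

    object AlertIngest {
      private val log = LoggerFactory.getLogger(getClass)

      // Organization, user, and API-key identifiers are too high-cardinality for a
      // metric, so they ride along in the log line in a consistent key=value format.
      def record(orgId: String, userId: String, apiKeyId: String): Unit =
        log.info(s"Alert accepted orgId=$orgId userId=$userId apiKeyId=$apiKeyId")
    }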

We had three main players in our logging

Our Scala backend used three different logging frameworks. Some code used the SLF4J logging framework directly; SLF4J is widely used and provides a rich feature set. Other code, inside Akka actors, used Akka's actor logging, which has a scaled-down interface and feature set and is configured to use SLF4J. Some of our Play code used Play's own logging, which is extremely simplistic and is also configured to use SLF4J. All of these were backed by Logback, SLF4J's native implementation. Here are some details:


SLF4J

SLF4J is likely the most widely used Java logging facade, with multiple implementations and a massive user base. Performance depends solely on how you configure the appender you're using. By default, Logback uses a synchronous appender, but you can easily configure an asynchronous one. A synchronous appender uses the calling thread to actually write the log statement to file/network, whereas an asynchronous appender lightens the processing load on the calling thread by simply handing the log statement over to the appender to write to file/network at some point in the future.
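To make the synchronous-versus-asynchronous distinction concrete, here is a minimal programmatic sketch of wrapping a console appender in Logback's AsyncAppender. In practice this configuration usually lives in logback.xml; the object name and pattern below are illustrative only:

    import ch.qos.logback.classic.{AsyncAppender, LoggerContext}
    import ch.qos.logback.classic.encoder.PatternLayoutEncoder
    import ch.qos.logback.classic.spi.ILoggingEvent
    import ch.qos.logback.core.ConsoleAppender
    import org.slf4j.LoggerFactory

    object AsyncLoggingSetup extends App {
      val context = LoggerFactory.getILoggerFactory.asInstanceOf[LoggerContext]

      // A plain synchronous appender: the calling thread formats and writes each statement.
      val encoder = new PatternLayoutEncoder()
      encoder.setContext(context)
      encoder.setPattern("%d %-5level %logger{36} - %msg%n")
      encoder.start()

      val console = new ConsoleAppender[ILoggingEvent]()
      console.setContext(context)
      console.setEncoder(encoder)
      console.start()

      // Wrapping it in AsyncAppender hands each event to a bounded in-memory queue;
      // a background thread drains the queue and does the actual write.
      val async = new AsyncAppender()
      async.setContext(context)
      async.setQueueSize(512)
      async.addAppender(console)
      async.start()

      context.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).addAppender(async)
    }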

Akka logging

Akka's actor-based logging is event-driven and is easily configured to use SLF4J. In the actor itself, you write log.info("this message"), and behind the scenes it publishes an event to the system's event stream and is done. Creating the log statement adds almost no overhead at the call site because the actual writing happens somewhere else.
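For illustration, here is a bare-bones actor using that logging; the actor and message are made up for the example:

    import akka.actor.{Actor, ActorLogging, ActorSystem, Props}

    // log.info does not write anything itself; it publishes a log event to the actor
    // system's event stream, and the configured backend (SLF4J in our case) picks it
    // up and writes it on another thread.
    class TeamActor extends Actor with ActorLogging {
      def receive: Receive = {
        case teamName: String =>
          log.info("Handling update for team {}", teamName)
      }
    }

    object LoggingDemo extends App {
      val system = ActorSystem("logging-demo")
      val team   = system.actorOf(Props[TeamActor], "team-actor")
      team ! "platform"
    }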

Play logging

Play has its own simplistic logger that's much more stripped down than the Akka logger and by default uses SLF4J. It accepts up to two arguments: the string that you're logging and an optional exception. The most recent version (2.6.x) added support for SLF4J markers.
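A short, hypothetical example of that interface (the class and messages are illustrative, not our production code):

    import play.api.Logger

    class EscalationPolicySync {
      // Play's Logger wraps an SLF4J logger named after the class.
      private val logger = Logger(this.getClass)

      def sync(): Unit = {
        logger.info("Starting escalation policy sync")
        try {
          // ... do the work ...
        } catch {
          case e: Exception =>
            // The interface takes the (by-name) message plus an optional Throwable.
            logger.error("Escalation policy sync failed", e)
        }
      }
    }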

Why change how we do logging?

Strategy concerns

These concerns have to do with the various strategies taken by these different logging interfaces.

  • Call-site performance: All SLF4J interfaces rely on the caller to provide pre-computed strings and arguments before checking whether that log level (info, debug, trace, etc.) is enabled. There are simple ways around this, like Play's interface, which uses a by-name argument for the string. This essentially creates an anonymous function that is executed only after the log level has been checked (see the sketch after this list). For example, without by-name arguments, the statement below requires the mkString method to run on a potentially large collection before the info method even checks whether info-level log statements are enabled: log.info(s"Team $team has users: ${users.mkString(separator)}")
  • Conflicting interfaces: The largest effect of conflicting interfaces is developer confusion and frustration. The next problem is that it leads to incorrect log statements. If logs are supposed to save you when things go awry, then an incorrect log statement is like a carabiner with a broken gate: it looks like a useful thing but is completely useless for its intended purpose. For example, below are the error methods from the three interfaces. Notice how the location of the Throwable argument changes? Now imagine working in a codebase where all three of these interfaces are in use. A little scary.
    • SLF4J: void error(String msg, Throwable t)
    • Akka: def error(cause: Throwable, message: String): Unit
    • Play: def error(message: ⇒ String, error: ⇒ Throwable)
  • Appender performance: All three of these have configurable backends and appenders, but it is worth noting that any interface you use will need to have its configuration examined. Most default appenders are synchronous and therefore write the log statement to file (or whatever destination) at the call-site. However, this can be changed easily by configuring an asynchronous appender. This clearly improves call-site performance by requiring only the string to be built before asynchronously handing it off to the appender, which will write the statement to file on its own time.
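As a minimal sketch of the by-name trick mentioned in the first bullet above, here is a small wrapper of our own invention (the class and field names are illustrative; this is not any of the three libraries' APIs):

    import org.slf4j.{Logger, LoggerFactory}

    // `msg` is by-name, so the interpolated string is only constructed
    // if the level check passes.
    class LazyLogger(underlying: Logger) {
      def info(msg: => String): Unit =
        if (underlying.isInfoEnabled) underlying.info(msg)
    }

    object Example {
      private val log = new LazyLogger(LoggerFactory.getLogger(getClass))

      def report(team: String, users: Seq[String], separator: String): Unit =
        // With info disabled, mkString over a potentially large collection never runs.
        log.info(s"Team $team has users: ${users.mkString(separator)}")
    }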

Developer concerns

How did using multiple logging libraries affect the developers?

  • Too many decisions: Choosing between three different loggers for any given class.
  • Conflicting interfaces: From a developer perspective, this causes confusion and requires you to pay more attention to your logger than you really should.
  • Inconsistency: Having more than one logger in a class (which is clearly unnecessary) and having logger fields inconsistently named, e.g. log vs. logger.

Functionality needs

What functionality do the developers need for a maintainable codebase and effective log portfolio?

  • Unified interface: A single interface allows you to add new features in one place and enables the power of easily refactoring logging on a large scale.
  • Support for log variables: Extracting specific information from a log statement is easier if it’s been given special formatting. Once standardized, this can be utilized in our Sumo queries.
  • Implicit loggers for utility classes: Utility classes lack their own identity in terms of data flow. Implicitly passing in a logger, which has identifying information from the caller (its class and log variables), provides rich log statements within utility code.
  • Further consistency: The icing on the cake. Things like a very simple Logging trait to standardize the log field name, the logger name (used when writing the log statements), and the logger identity (based on log variables); see the sketch below.
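To make the last two needs concrete, here is a rough sketch of what we mean. It is illustrative only; the actual implementation is what we will walk through in Part 2:

    import org.slf4j.{Logger, LoggerFactory}

    // A simple trait standardizes the field name (`log`) and the logger name (the class).
    trait Logging {
      implicit lazy val log: Logger = LoggerFactory.getLogger(getClass)
    }

    // Utility code has no identity of its own in the data flow, so it borrows the
    // caller's logger implicitly and its statements carry the caller's identity.
    object UserUtils {
      def normalize(name: String)(implicit log: Logger): String = {
        log.debug(s"Normalizing user name: $name")
        name.trim.toLowerCase
      }
    }

    class UserService extends Logging {
      // The implicit `log` from the Logging trait is passed into the utility call.
      def rename(raw: String): String = UserUtils.normalize(raw)
    }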

Up next

Now that we’ve set the stage, in Part 2, we’ll explore how we addressed these concerns in order to make logging great again.


