Focus on Detection: Prometheus, and the case for time series analysis

Detection, in the Incident Lifecycle, is the observation of a metric at certain intervals and the comparison of that observation against an expected value. Monitoring systems then trigger notifications and alerts based on those observations.

For many teams, on-call is primarily about detection. Monitor everything and make sure we don’t miss out! In organizations with legacy monitoring configurations, getting better at Detection is tough. Environments are configured with broadly applied, arbitrarily set thresholds. Sometimes this is due to limitations in the monitoring solution, or the complexity of implementing different thresholds for different measurements. Sometimes it’s a simple reflection that Detection was not an area of focus before. Whatever the reason, the impact on on-call teams is measurable: too many false alerts, too many interruptions, and acute alert fatigue.

Teams that score high on the Incident Management Assessment are focusing on time series analysis in their monitoring and detection systems. Prometheus, as one popular example of a solution built on a time series database, is seeing wide adoption in both new projects and existing environments.

Getting your head around time series can seem daunting. For individuals with years between themselves and their last statistics class, understanding the myriad of options is a barrier. While advanced functionality in these systems requires some thinking, there are plenty of easy use cases to explore in making the case for this type of Detection.

Cleaning up the disk: a staple of system administration for 60 years

I have no data to support this assertion, but I suspect that if I bucketed all the alerts I’ve received in my career by type, disk utilization would win in a landslide. It’s the easiest system metric to understand, but it can have wide-reaching consequences when in an unhappy state. It’s hard to imagine an environment where volume utilization is not monitored by default on every host. While “disk full” seems like a simple discussion, unpacking it reveals the complexity every team faces when considering detection methods.

If we can all agree that full disks are bad, we can still have a lively debate on which precursors to FULL we may want to detect and alert on. At what threshold should a team member become involved? The number of TB free? GB? MB? A percentage of total disk? What if this host is part of a fleet of servers, and losing it is not significant?

A standard approach here is to send a WARNING level alert at 85% used (15% free) and a CRITICAL at 90% used. The thinking being that with only 10% of the volume free, someone should do something! Why? If it took us 3 years to eat up that 90%, is there any reason to believe we’ll chew through the remaining terabyte in the next 10 minutes?
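In Prometheus terms, that standard approach might look something like the following sketch. The metric names assume the node_exporter of this era (node_filesystem_free, node_filesystem_size); the severity labels and 10-minute hold are illustrative, not a recommendation:

```yaml
groups:
  - name: disk-static-thresholds
    rules:
      - alert: DiskSpaceWarning
        # Fire when less than 15% of the filesystem remains free (85% used)
        expr: node_filesystem_free / node_filesystem_size < 0.15
        for: 10m
        labels:
          severity: warning
      - alert: DiskSpaceCritical
        # Fire when less than 10% remains free (90% used)
        expr: node_filesystem_free / node_filesystem_size < 0.10
        for: 10m
        labels:
          severity: critical
```

Note that nothing in these expressions considers how fast the disk is actually filling, which is exactly the problem.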

The reference system

Let’s map this discussion to a basic system we can all imagine: 4 core, 4GB RAM, and two volumes, 2GB and 10GB (operating system and application, respectively). For this example it doesn’t really matter if this is a container, a physical host, or a cloud instance. I’m using Prometheus and the Node Exporter to gather and expose metrics, with Grafana on top for the visualizations.

The trickle

One would expect volume utilization on the OS volume to be relatively steady state. Other than logs, little change is introduced here. Depending on the application, the 10GB volume may also be pretty flat, or it may get a lot of use. Here we’ll consider a steady, if small, increase of utilization on that volume. As you can see below, the volume is just creeping its way up to full.

[Grafana screenshot: volume utilization creeping steadily toward full]

The standard approach would send a WARNING out right here: we’ve hit that generic 85% utilization threshold. What should someone actually do about it, though?

Predictive analytics

Using time series data, we can start to apply something approximating prediction to our detection efforts. Using the same data above, we can compute the time until the disk is full, given the current rate of change, with the deriv function in Prometheus:

(node_filesystem_size - node_filesystem_free) / deriv(node_filesystem_free[3d]) > 0

[Grafana screenshot: predicted time until the volume is full]

At the current rate of consumption we have 24 weeks to action this condition. Probably OK to not fire an alert just yet. This isn’t even informational; no real change is detected in the state of the system.
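Prometheus can also extrapolate directly with predict_linear. Here is a sketch of a rule that fires only when the linear trend over the last three days says free space will hit zero within four hours; the metric name again assumes node_exporter, and the four-hour horizon is an arbitrary illustration:

```yaml
- alert: DiskFillingFast
  # Extrapolate the 3-day trend of free space; alert only if it is
  # predicted to reach zero within the next 4 hours
  expr: predict_linear(node_filesystem_free[3d], 4 * 3600) < 0
  for: 30m
  labels:
    severity: warning
```

In the trickle scenario above, this rule would stay silent for months, because the trend simply doesn’t intersect zero any time soon.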

The flood

Let’s consider a different scenario: the same volume, but with far more available space. Starting at about 25% utilization, we see this volume has a relatively steady rate of consumption.

[Grafana screenshot: volume utilization around 25%, climbing at a steady rate]

Until something changes:

[Grafana screenshot: a sudden spike in volume utilization]

Given the historical data, it is very unexpected for this volume to see that kind of spike in utilization. If we focus on rate of change over time, we see the full story:

[Grafana screenshot: rate of change of free space over time, showing the spike]

The standard threshold of 85% utilization will not be triggered, and so the team remains blind to the fact that the rate of change has just exceeded historical expectations.

Is that actionable? Perhaps, perhaps not, but it is certainly more significant than the trickle scenario, which fires alerts and interrupts teams with no real situation requiring investigation.
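One way to catch the flood without a static threshold is to compare the short-term rate of consumption against the longer-term baseline. A sketch, with the caveat that the 20x ratio and the time windows are arbitrary illustrations to be tuned per environment:

```yaml
- alert: DiskConsumptionSpike
  # Both derivatives are negative while the disk fills; the alert fires
  # when the 1-hour rate of free-space loss is far steeper (here, 20x)
  # than the 3-day baseline rate
  expr: deriv(node_filesystem_free[1h]) < 20 * deriv(node_filesystem_free[3d])
  for: 15m
  labels:
    severity: warning
```

This rule would have stayed quiet through years of trickle, and fired within an hour of the spike above.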

All the data

This is a simple example focusing on a single detectable metric. How does this kind of approach scale to all the actual metrics your team may wish to track? Really, really well, as it turns out. The default behavior of Prometheus exporters is to expose everything – and I really mean everything – for gathering by the Prometheus server. Out of the box, the Prometheus Node Exporter is tracking ~620 discrete measurements on my test Linux instance.
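You can see this for yourself with a quick query: counting every series whose name starts with the node_ prefix shows just how much is gathered by default (the ~620 figure above came from exactly this kind of count):

```
count({__name__=~"node_.*"})
```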

This is where these systems really differentiate themselves from the previous generation of detection systems: they default to gathering all the metrics, and alerting on none of them. This is in stark contrast to the default behavior of, say, Nagios: gather few measurements, store none, and alert on all.

Actionable Intelligence

Prometheus, and other time series database systems, bring a new kind of insight to the Detection phase of the Incident Lifecycle. They empower teams with more observable data than ever before, without hampering a team’s ability to dig in and understand any one of those metrics. With advanced grouping features, teams can understand these metrics as they relate to different classes of system or application. 

Using time series analysis, a team can completely rewrite the way Detection works in their practice, bringing better fidelity to measurements and more reliably actionable alerting when necessary. This can materially change the game for anyone trying to reduce MTTR and get more sleep.

____________

VictorOps integrates with Prometheus. Check out the integration guide to get started.
