
How to Prevent Alerting Overload

In our always-on, IoT-enabled, cloud-connected, big data age, we face a major paradox: it’s now easier than ever to collect large amounts of data — yet the more data we collect, the harder it becomes to monitor situations effectively.

This problem is similar to what psychologists call “information overload” — the phenomenon in which a person fails to make effective decisions because they have too much information to contend with.

In some contexts information overload is unavoidable. If you get hundreds of emails each day, there may not be much you can do about feeling overwhelmed by them, as you don’t have much control over who sends you an email. Yet when it comes to data center infrastructure, information overload is not inevitable. It’s entirely up to you to decide how much and what types of data to collect. If you find you have more data than you can feasibly parse, you need to rethink your monitoring practices and alert filtering.

Of course, as we’ve already noted, many admins may find themselves fighting an uphill battle when it comes to preventing information overload in the data center. That’s because the explosion of the cloud and the advent of IoT — and all of the inexpensive data that comes with those trends — have made it easier than ever to collect all manner of information about your servers and applications.

What’s Critical, What’s Not

That’s why it’s now more important than ever to decide which types of monitoring you actually need, what to set up notifications on, and what you can do without. Just because adding more monitoring to your infrastructure is easy and inexpensive doesn’t mean you should necessarily do it.

If you add monitoring blindly, you’re shooting yourself in the foot by collecting more data than you can ever process or act on effectively. The result is fatigue for your on-call staff, time wasted on low-priority issues, and critical problems lost in the noise.

Successful alert management depends on your particular needs, of course. There’s no one-size-fits-all approach. In general, it’s a good idea to restrict yourself to deploying sensors that center on the following types of information:

  • Security incidents: You’ll want to be alerted to things like repeated failed login attempts or port scans so you can stay ahead of threats.
  • Host failure: If a physical or virtual server fails to start, or crashes suddenly, that’s an important event to know about.
  • Resource exhaustion: You don’t want to wait until you run out of data storage or network bandwidth to find out that you should be adding more. Use sensors to warn you when usage starts to approach the maximum available and stays at that level for more than a short time.
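The “approaches the maximum and stays there” rule for resource exhaustion can be sketched as a simple sustained-threshold check. This is an illustrative example, not the behavior of any particular monitoring product; the function name and default values are our own:

```python
def sustained_breach(samples, threshold=0.9, min_consecutive=5):
    """Return True only if usage meets or exceeds `threshold` for at
    least `min_consecutive` consecutive samples, so momentary spikes
    don't page anyone."""
    streak = 0
    for usage in samples:
        streak = streak + 1 if usage >= threshold else 0
        if streak >= min_consecutive:
            return True
    return False
```

A brief spike to 95% utilization stays silent, while five consecutive samples at that level would trigger an alert — which is exactly the distinction that keeps resource alarms actionable.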

Again, your mileage may well vary. But the above list covers the essential types of events you should be notified about.

Monitoring vs. Alarms

There are other types of data that are good to monitor but may not require an alarm. Those include things like:

  • CPU usage: This can vary widely throughout the day due to a number of factors. You want to know about general trends, but you don’t need an alarm to tell you each time CPU usage has jumped.
  • Network load: This is in the same category as CPU usage. Network load varies naturally. You should know your data center’s trends so you can plan for long-term expansion. But there’s no need to set off alarms just because a lot of devices happen to be on the network in a given moment — unless, of course, the situation is extreme and sustained.
  • Environmental conditions: You should track things like data center temperature. But this is the type of incident that can usually be handled in an automated fashion. Instead of having sensors send you an alert when temperatures climb, have software that turns up the cooling units for you. You only need an alert if temperatures approach a critical level and stay there.
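The temperature example above — automate the common case, page a human only for the sustained extreme — can be sketched like this. The setpoints, sample counts, and callback names are all illustrative assumptions:

```python
COOLING_SETPOINT_C = 24.0   # temperature at which to increase cooling (illustrative)
CRITICAL_TEMP_C = 32.0      # temperature that justifies paging a human (illustrative)

def handle_temperature(readings, increase_cooling, page_oncall, critical_samples=3):
    """Automate remediation for routine warmth; alert only when the
    temperature is critical and has stayed critical.

    readings: most-recent-last list of temperature samples in Celsius.
    increase_cooling / page_oncall: callbacks supplied by the caller.
    """
    if readings[-1] >= COOLING_SETPOINT_C:
        increase_cooling()               # remediate automatically, no alert
    recent = readings[-critical_samples:]
    if len(recent) == critical_samples and all(t >= CRITICAL_TEMP_C for t in recent):
        page_oncall()                    # sustained critical condition only
```

A single warm reading just turns up the cooling; only several consecutive critical readings wake anyone up.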

In many cases, a metric like processor queue length is already covered indirectly by a more relevant data point, such as processor utilization — making a separate alarm on it redundant.

The Right Data for the Right People

Another way to make sure you’re getting optimal results from your sensors is to make sure the right incident notifications are going to the right people.

Platforms like PagerDuty let you specify a chain of command for handling different types of events. Rather than blanketing your whole team with incident notifications, make sure only the people who actually need to handle an issue get woken up. This minimizes unplanned work and alert fatigue in responding to issues.

You can also configure PagerDuty to send notifications to a larger group if the initial recipients don’t respond in a certain amount of time.
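Conceptually, timed escalation works like the toy model below. This is not PagerDuty’s API — the class, level names, and timeout are hypothetical — just a sketch of the notify-then-widen behavior described above:

```python
from dataclasses import dataclass

@dataclass
class EscalationPolicy:
    """Toy model of timed escalation: notify one level, and widen the
    audience only if no one acknowledges within the timeout."""
    levels: list                 # e.g. [["primary"], ["secondary"], ["whole-team"]]
    timeout_minutes: int = 15

    def recipients_at(self, minutes_since_triggered, acknowledged=False):
        if acknowledged:
            return []            # someone owns the incident; stop paging
        level = min(minutes_since_triggered // self.timeout_minutes,
                    len(self.levels) - 1)
        return self.levels[level]
```

The key property is that an acknowledgment anywhere stops the escalation, so the larger group is only disturbed when the initial recipients truly don’t respond.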

Get More Out of Logs

Last but not least, keep in mind that there are lots of different ways to deal with information. One way is to generate alerts. But another is to use log analytics tools to identify trends that stretch across a large amount of data collected by various monitoring tools.

By boiling your log results down to the essentials, you can figure out what you should be paying attention to without having to handle a huge number of events on an individual basis.
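As a rough illustration of boiling logs down to essentials, the sketch below counts the most frequent error signatures across many lines instead of surfacing each one individually. It assumes a simple “ERROR &lt;component&gt;” log format, which is an assumption for the example rather than any standard:

```python
import re
from collections import Counter

def summarize_errors(log_lines, top_n=3):
    """Reduce a pile of log lines to the most frequent error signatures,
    so trends surface without reviewing events one by one."""
    pattern = re.compile(r"ERROR\s+(\S+)")   # assumes "ERROR <component>" lines
    counts = Counter(m.group(1) for line in log_lines
                     if (m := pattern.search(line)))
    return counts.most_common(top_n)
```

Dedicated log analytics tools do this across far richer dimensions, but the principle is the same: aggregate first, then decide what deserves attention.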

That’s why PagerDuty offers features like integrations with Splunk and other analysis tools. These are ideal for providing a way to derive value from monitoring data without suffering information overload. 

The post How to Prevent Alerting Overload appeared first on PagerDuty.
