
OpenStack monitoring beyond the Elastic Stack – Part 3: Monitoring with Dynatrace

This post is the third and final installment in our OpenStack monitoring series. Part 1 explores the state of OpenStack and some of its key terms; Part 2 surveys the OpenStack monitoring space and how open-source tools like the Elastic Stack (ELK Stack) compare to Dynatrace.

In this final part, let's take a closer look at how Dynatrace monitors OpenStack.

Since we started our journey with OpenStack, we've had many interesting conversations with OpenStack cloud users. The consensus is that the most important metrics and capabilities they look for include:

  • OpenStack service performance
  • Service availability
  • Resource utilization metrics
  • Log monitoring

However, we also learned that OpenStack is a different kind of beast: due to its distributed nature, problems with one OpenStack service can manifest themselves as performance issues within other services.

Take this example: an OpenStack admin notices an issue when launching a new VM or attaching a Cinder volume. Their first thought might be to look into the log files of the Nova and Cinder services. After combing through hundreds of megabytes of log data, however, they might discover that the root cause of the issue lies within other OpenStack services, or in supporting technologies like load balancers (HAProxy), message brokers (RabbitMQ), and databases (MySQL).

That's why it's so important to look at your OpenStack environment holistically, as opposed to the isolated use cases that traditional monitoring tools address. You need to cover:

  • OpenStack service performance
  • Service availability
  • Supporting technologies: HAProxy, RabbitMQ, MySQL
  • Resource utilization metrics
  • Log analysis
  • APM
  • Problem alerting with root cause analysis

Get a perfect overview of OpenStack and everything running on it in six easy steps with Dynatrace

1. Install a single agent

To start monitoring your OpenStack components, the only thing you need to do is install the Dynatrace agent on all controller nodes that run OpenStack API services. Once that's done, you can easily add the dedicated OpenStack monitoring tile to your Dynatrace dashboard.

But there is another important thing that happens upon installation: with zero configuration, Dynatrace application mapping auto-detects and creates an interactive visualization of your entire application topology, from your OpenStack cloud components up to the application front end.

This is the perfect starting point for you to drill down into your OpenStack data plane and see what’s going on.

2. Analyze your OpenStack compute nodes

In the Compute view you get a general overview of your controller and compute nodes, your Cinder volumes, Neutron subnets and your Swift objects. But keep scrolling because more valuable insights are coming.

The Environment dynamics section tracks how the number of running virtual machines evolves over time. An increasing trend may indicate the need for capacity adjustments. Crucial details regarding the number of VMs that have been spawned and their average launch times are also included. If you notice launch times going up, you may want to investigate why.
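To make the launch-time metric concrete, here's a minimal sketch of how an average launch time could be derived from spawn events. The event data and function below are hypothetical and only illustrate the underlying calculation; Dynatrace computes this for you automatically.

```python
from datetime import datetime
from statistics import mean

def average_launch_time(events):
    """Average VM launch duration in seconds.

    `events` is a hypothetical list of (requested_at, became_active_at)
    timestamp pairs, one per spawned VM.
    """
    durations = [
        (active - requested).total_seconds()
        for requested, active in events
    ]
    return mean(durations) if durations else 0.0

# Two sample spawn events: one took 42 seconds, the other 63 seconds.
spawns = [
    (datetime(2017, 6, 1, 10, 0, 0), datetime(2017, 6, 1, 10, 0, 42)),
    (datetime(2017, 6, 1, 11, 0, 0), datetime(2017, 6, 1, 11, 1, 3)),
]
print(average_launch_time(spawns))  # (42 + 63) / 2 = 52.5
```

Tracking this number over time is what lets you spot the upward trend mentioned above.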

The Events section shows you on which compute node each VM was launched or stopped.

The Compute section shows you how well your compute nodes are performing, which virtual machines are currently running on those nodes, and how the VMs contribute to overall resource usage.
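A VM's "contribution to overall resource usage" boils down to its share of the node's total consumption. As a rough sketch (the sample data is hypothetical; Dynatrace gathers the real numbers from the hypervisor):

```python
def vm_cpu_shares(vm_cpu_ms):
    """Each VM's fractional contribution to total CPU time on a compute node.

    `vm_cpu_ms` maps VM name to CPU time consumed in milliseconds
    (hypothetical sample data).
    """
    total = sum(vm_cpu_ms.values())
    return {name: cpu / total for name, cpu in vm_cpu_ms.items()}

print(vm_cpu_shares({"web-1": 300, "db-1": 700}))
# db-1 accounts for 70% of this node's CPU consumption
```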

You can slice and dice your OpenStack monitoring data with filters—compute nodes and virtual machines can be filtered based on region, security group name, compute node name, availability zone, and more. Such filtering is particularly useful for tracking down elusive performance issues within large environments.
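Conceptually, that filtering is just an AND of attribute matches across your inventory. Here's a small illustrative sketch (the VM records and attribute names are made up; the real filtering happens in the Dynatrace UI):

```python
def filter_vms(vms, **criteria):
    """Return the VMs matching every given attribute filter.

    `vms` is a hypothetical list of dicts describing virtual machines;
    `criteria` are attribute/value pairs such as region or availability_zone.
    """
    return [
        vm for vm in vms
        if all(vm.get(key) == value for key, value in criteria.items())
    ]

vms = [
    {"name": "web-1", "region": "eu-west", "availability_zone": "az1"},
    {"name": "db-1",  "region": "eu-west", "availability_zone": "az2"},
    {"name": "web-2", "region": "us-east", "availability_zone": "az1"},
]
print(filter_vms(vms, region="eu-west", availability_zone="az1"))
# only web-1 matches both filters
```

Narrowing thousands of VMs down to the handful in one zone and security group is exactly what makes those elusive issues tractable.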

3. Gain insights into your OpenStack controller node

If you switch to the Controller section, you get a complete overview of your OpenStack services (Keystone, Glance, Cinder, Neutron, and others) and their basic performance metrics, such as CPU usage, memory usage, and connectivity.

From here you can select the service you are interested in and drill down into it on a process page to find out more about its performance. Here, Dynatrace provides:

  • OpenStack service availability
  • Service performance
  • Connectivity
  • Process-specific metrics
  • …and direct access to the log files of all OpenStack services
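As a rough sketch of what a service availability check involves: each OpenStack API service listens on a well-known port on the controller, and the HTTP status of its root endpoint tells you a lot. The endpoint map and classification below are illustrative assumptions, not Dynatrace's actual implementation.

```python
# Standard OpenStack API ports; "controller" is a hypothetical hostname.
SERVICE_ENDPOINTS = {
    "keystone": "http://controller:5000/v3",
    "glance":   "http://controller:9292",
    "cinder":   "http://controller:8776",
}

def classify_availability(status_code):
    """Map an HTTP status from a service's root endpoint to a verdict.

    OpenStack APIs may legitimately answer with 2xx, 3xx (version
    discovery), or 401 (auth required) while being perfectly healthy;
    a 5xx or no response at all means trouble.
    """
    if status_code is None:
        return "unreachable"
    if status_code < 500:
        return "available"
    return "failing"
```

A naive poller built on this would miss the deeper signals (process metrics, connectivity, logs) that the list above covers, which is the point of having them in one place.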

4. Keep an eye on supporting technologies

The technologies deployed alongside OpenStack (load balancers, message brokers, and databases) are common problem areas that OpenStack admins need to be aware of. Take this RabbitMQ connectivity problem, for example.

Thanks to the additional RabbitMQ counters provided by Dynatrace, we can easily find the root cause.

In the Further details section of the RabbitMQ process page, we can see that this process was launched with the default file descriptor limit. Once this limit was exceeded, RabbitMQ stopped accepting new connections, which resulted in the connectivity problem.
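This class of incident is easy to catch early by alerting on file descriptor headroom rather than waiting for refused connections. A minimal sketch, assuming the counter names and the 90% threshold (on many Linux distributions the default per-process limit is 1024 descriptors, which is far too low for a busy broker):

```python
def fd_headroom_alert(fd_used, fd_limit, threshold=0.9):
    """Return True when file descriptor usage crosses the alert threshold.

    Exhausting the descriptor limit makes RabbitMQ refuse new
    connections, exactly as in the incident described above.
    """
    return fd_used / fd_limit >= threshold

print(fd_headroom_alert(980, 1024))   # True: time to raise the ulimit
print(fd_headroom_alert(200, 65536))  # False: plenty of headroom
```

Raising the limit (via `ulimit -n` or the service unit's `LimitNOFILE`) is the usual remediation once the alert fires.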

5. See the overall health of your applications running on OpenStack

In the previous steps we've seen how Dynatrace handles infrastructure-level components like compute nodes and OpenStack services. But if that's all a monitoring tool gives you, you're seeing only part of the big picture.

To get the most out of your OpenStack monitoring, you need a way to correlate what’s happening in OpenStack with what’s happening in the rest of your application environment.

Besides providing insights into your OpenStack control plane, Dynatrace also gives you deep visibility into the applications you run in your private cloud. By automatically correlating OpenStack events with real-user and business metrics, you get unparalleled insight into your digital business.

Take the example below: this problem notification lets us know that user action duration has seriously degraded in one of our web applications running on OpenStack.

A-ha, so that’s why there were no conversions in the last two hours.

But why?

6. Understand the causes of failing services

If your daily work involves monitoring, I'm quite sure one of your favorite questions is "but why?" This is where Dynatrace's automated root-cause analysis comes in handy.

While manually hunting down performance problems in highly distributed OpenStack environments can be time-consuming (if not impossible), Dynatrace can automatically pinpoint application and infrastructure issues in seconds using artificial intelligence.

By examining billions of dependencies, Dynatrace problem detection goes beyond correlation and gives you causation. In the example above, it identified CPU saturation on the OpenStack-Business-Backend host as the actual cause of the problem. From here we can start remediating the issue.

So, who will make sure that application performance stays high?

While Gartner called OpenStack a "science project" in 2015, in 2017 451 Research estimated that:

OpenStack’s ecosystem will grow nearly five-fold in revenue, from US$1.27 billion market size in 2015 to US$5.75 billion by 2020.

OpenStack isn't even seven years old, and it's already starting to eat the world: it has begun to turn the e-commerce business upside down. As OpenStack environments mature, they also need app-centric monitoring that is mature enough to handle their complexity.

Open source monitoring tools like the Elastic Stack (ELK Stack) are all strong in their specific areas. Before choosing anything, however, consider what you need to monitor. It could be only a few things, or it could be everything. Then choose the tool that will make your monitoring life easier.

In the first part of this blog series we took a look at the state of OpenStack. Then we took a short tour of the current OpenStack monitoring space to see how tools like the ELK Stack compare to Dynatrace. Finally, I attempted to present the Dynatrace way of monitoring OpenStack by showing its specialties: full-stack power, AI power, and automation power. Even though these might sound like marketing buzzwords to some, at the moment no other monitoring tool is capable of seeing the big picture, understanding data in context, and doing all this without any manual intervention.

If you would like to see Dynatrace at work, take our 15-day free trial or reach out to [email protected]

This series ends here, but stay tuned: soon my colleague Dirk Wallerstorfer will unveil some great insights about Dynatrace's OpenStack monitoring.

The post OpenStack monitoring beyond the Elastic Stack – Part 3: Monitoring with Dynatrace appeared first on Dynatrace blog – monitoring redefined.
