Six mistakes in your OpenStack monitoring process…and how to fix them

Many cloud admins unwittingly sabotage their OpenStack monitoring processes. I've collected a few common mistakes, and what to do instead, to make your OpenStack troubleshooting easier.

Mistake #1: Investing a lot of time and effort into configuration

Let’s get this straight: most commercial and open source monitoring tools avoid talking about how much effort you must invest in their configuration, let alone their maintenance. So first find out whether you’ll have to modify files, set permissions, or run command-line tools for every OpenStack component. Then consider the size of your environment. If you run a large-scale, hyper-dynamic environment and your monitoring tools require a lot of setup effort, you’ll soon find yourself hiring more and more staff just to administer your toolset.
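To make that toil concrete, here's a minimal sketch of what per-component setup can look like. Every path, config key, and CLI command in it is hypothetical; the point is the shape of the work, not any particular product's interface.

```python
# A minimal sketch of per-component setup toil. Every path, config key, and
# CLI command below is hypothetical -- the point is the shape of the work,
# not any particular product's interface.
import subprocess
from pathlib import Path

OPENSTACK_SERVICES = ["nova", "neutron", "cinder", "glance", "keystone", "swift"]

def configure_monitoring(service: str) -> None:
    """Write a config file, set its permissions, and register one service."""
    conf = Path(f"/etc/monitoring/conf.d/{service}.conf")
    conf.write_text(
        f"[{service}]\n"
        f"endpoint = http://controller:8774/{service}\n"
        "interval = 30\n"
    )
    conf.chmod(0o640)                                    # permissions, by hand
    subprocess.run(["monitorctl", "register", service],  # hypothetical CLI
                   check=True)

for svc in OPENSTACK_SERVICES:
    configure_monitoring(svc)  # ...and again after every upgrade or redeploy
```

Multiply that loop by every upgrade, redeploy, and newly provisioned node, and the administration overhead adds up fast.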

Quick fix

Because Dynatrace works with a single agent, out of the box, and requires zero configuration, setting it up to monitor OpenStack is easy. Features like automatic integration with all common deployment-automation mechanisms and auto-discovery of OpenStack cloud components let you see performance metrics within minutes.

Mistake #2: Using too many different tools for different monitoring use cases

As an OpenStack user, you might be interested in resource utilization metrics, how your OpenStack services perform, their availability, and of course you want to see the log files.

But because there is a shortage of real all-in-one OpenStack monitoring tools, most companies implement a separate tool for each of these use cases. While running different monitoring tools for different silos, they quickly realize that they are unable to identify the root cause of a performance issue, or to find the team responsible for fixing it.
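To see why, here's a minimal sketch of the correlation work siloed tools push onto you: one tool exports CPU samples as epoch timestamps, another exports error logs as ISO strings, and you're left joining them by hand. All data and formats below are invented for illustration.

```python
# A minimal sketch of cross-silo correlation done by hand. One tool exports
# CPU samples as epoch seconds, another exports error logs as ISO strings,
# and you join them by timestamp to guess at a cause. All data is invented.
from datetime import datetime

cpu_samples = [          # from the infrastructure tool (epoch seconds, percent)
    (1700000000, 42.0), (1700000060, 97.5), (1700000120, 98.1),
]
error_logs = [           # from the log tool (ISO 8601 strings)
    ("2023-11-14T22:14:05+00:00", "nova-api: 504 gateway timeout"),
]

def near(epoch: int, iso: str, window_s: int = 90) -> bool:
    """True if the log line falls within window_s of the metric sample."""
    return abs(datetime.fromisoformat(iso).timestamp() - epoch) <= window_s

for when, msg in error_logs:
    spikes = [(t, pct) for t, pct in cpu_samples if near(t, when) and pct > 90]
    print(msg, "-> coincides with CPU spikes:", spikes)

# Manual joins like this are fragile (clock skew, formats, retention windows)
# and still only give you correlation, not causation.
```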

Quick fix

In contrast to conventional monitoring tools, which typically cover only a single monitoring domain, Dynatrace provides a unified monitoring solution. It gives insights into resource utilization, OpenStack services, service availability and log files on a single dashboard.

Mistake #3: Overloading yourself with too many problem alerts

Alert overload is one of the biggest time wasters for modern businesses — this is what we see at companies that implement countless monitoring tools to look at data centers, hosts, processes and services. When any of these components fail or slow down, it can trigger a chain reaction of hundreds of other failures, leaving IT teams drowning in a sea of alerts. APM solutions with a traditional alerting approach provide you with countless metrics and charts, but then it’s up to you to correlate those metrics to determine what is really happening.

Quick fix

Go beyond correlation and get causation. Dynatrace gives you the answer to an end-user-impacting issue, not just a bunch of alerts.

How do we do it?

First, we automatically discover, map and monitor all the dependencies from the user click to the back-end services, code, database and infrastructure. Second, we apply artificial intelligence to analyze the data. We examine billions of dependencies to identify the actual cause of the problem. This is key because application environments are quickly reaching a tipping point of complexity, where it is impossible for a human being to effectively analyze the data.
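As a rough illustration of the dependency-walking part (with an invented topology; the real analysis covers far more signal types), a sketch like the following reports only the component whose failure explains all the others, instead of one alert per failing component:

```python
# A toy root-cause walk over a dependency graph. Edges point from a service
# to the services it depends on; the topology and names are invented.
from collections import deque

DEPENDS_ON = {
    "web-frontend": ["nova-api", "keystone"],
    "nova-api":     ["rabbitmq", "mysql"],
    "keystone":     ["mysql"],
    "rabbitmq":     [],
    "mysql":        [],
}

def reachable(start: str) -> set:
    """All services `start` depends on, directly or transitively."""
    seen, queue = {start}, deque([start])
    while queue:
        for dep in DEPENDS_ON.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

def root_causes(failing: set) -> set:
    """Failing components that every other failing component depends on."""
    return {f for f in failing
            if all(f in reachable(other) for other in failing)}

print(root_causes({"web-frontend", "nova-api", "keystone", "mysql"}))
# -> {'mysql'}: one answer instead of four separate alerts
```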

Mistake #4: Relying on averages and transaction samples to determine normal performance

Correctly setting up alert thresholds is crucial to effective application performance monitoring. But that can involve a lot of time-consuming and potentially error-prone manual effort with traditional APM tools—especially because most of them rely on averages and transaction samples to determine normal performance. Averages are ineffective because they are too simplistic and one-dimensional. They mask underlying issues by “flattening” performance spikes and dips. Sampling lets performance issues slip through the cracks—creating false negatives. This is especially problematic in modern hyper-dynamic cloud- and microservice-based environments.
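A quick sketch with invented numbers makes both problems visible: the average describes no actual user, and a transaction sample will usually miss the one request that mattered.

```python
# Invented numbers: nine fast requests and one 4.2-second outlier.
import statistics

response_ms = [110, 120, 115, 105, 130, 125, 118, 112, 121, 4200]

mean = statistics.mean(response_ms)       # 525.6 ms: describes no real user
median = statistics.median(response_ms)   # 119.0 ms: what most users saw
slowest_10pct = sorted(response_ms)[int(len(response_ms) * 0.9):]  # [4200]

print(f"mean={mean}ms  median={median}ms  slowest 10%={slowest_10pct}")

sampled = response_ms[::10]  # a 10% transaction sample keeps one request...
print(f"sample={sampled}")   # ...and will usually miss the 4.2 s outlier: a false negative
```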

Quick fix

The far more accurate and more useful approach is to use percentiles based on 100% gap-free data, like Dynatrace does. Looking at percentiles (median and slowest 10%) tells you what’s really going on: how most users are actually experiencing your application.

Use artificial intelligence to pin down all the baseline metrics related to the performance of your applications, services, and infrastructure — from the back end through to the user experience at the browser level. With AI, outliers don’t skew baseline thresholds — so you don’t get false positives. And 100% gap-free full-stack data means you catch every single degradation, even those that materialize rapidly in ultra-dynamic environments — no false negatives. Such intelligent, automatic baselining allows Dynatrace to detect anomalies at a highly granular level and to notify you of problems in real time.
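As a deliberately simplified sketch of the core idea (a real implementation adds seasonality handling, multidimensional baselines, and AI-driven correlation), a rolling-percentile baseline can flag rapid degradations like this:

```python
# A toy rolling-percentile baseline: learn a reference window's 90th
# percentile, then flag any reading that exceeds it by a tolerance factor.
import statistics

def p90(values):
    return statistics.quantiles(values, n=10)[-1]    # 90th percentile

def detect_degradations(series, window=20, tolerance=1.5):
    """Yield (index, value, baseline) wherever a reading breaches baseline."""
    for i in range(window, len(series)):
        baseline = p90(series[i - window:i])         # rolling reference window
        if series[i] > baseline * tolerance:
            yield i, series[i], baseline

response_ms = [100 + (i % 7) for i in range(40)]     # steady traffic
response_ms[33] = 310                                # one rapid degradation
for i, value, baseline in detect_degradations(response_ms):
    print(f"sample {i}: {value}ms breaches baseline p90={baseline:.0f}ms")
```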

Mistake #5: Picking monitoring tools that are unable to scale with your business

You can keep deploying more and more monitoring tools for each silo to ensure the system limits are not reached, but this quickly becomes very hard to maintain and can add a lot of extra cost in terms of both licensing and hardware. Soon questions like these will come up:

  • How far will this scale?
  • How long until I need a newer, faster, or bigger one?

Modern application environments based on OpenStack run thousands of nodes with multiple hypervisor technologies, distributed across data centers around the globe. Managing a patchwork of monitoring solutions is next to impossible at this scale. That’s why the scalability of IT monitoring is one of the key challenges for modern app-based businesses.

Quick fix

Dynatrace was built with the world’s largest application environments in mind and scales to any size. We defined an approach that ensures performance and scalability across the application lifecycle, from development to production. We work with our customers to make performance management part of their software processes, going beyond performance testing and firefighting when problems surface in production.

Mistake #6: Focusing only on firefighting at the infrastructure level and forgetting about your apps

A solid IT infrastructure is the backbone of any agile, scalable and successful business, so it’s natural to look for infrastructure monitoring first. But to reach the next stage of maturity as an IT organization, you might want to think beyond just infrastructure. IT organizations that are able to proactively improve and optimize performance gain credibility with the business and are looked on as strategic enablers of business value.

Quick fix

Dynatrace tracks every build moving through your delivery pipeline, every operations deployment, all user behavior, and the impact on your supporting infrastructure. It integrates with whichever technology stack you build on and whichever container-based technology you’re using to orchestrate and manage your dynamic application environments on top of OpenStack. It provides a holistic view of your application, the technology stack, and OpenStack. Through analytics and artificial intelligence, you can start building what users want, removing what’s not needed, and optimizing the remaining system to be lean, agile, and innovative.

What’s next?

At the end of the day, you want to focus on providing a great user experience, not on spending time fixing your infrastructure. To do that, you need a monitoring platform that delivers the capabilities today’s complex business applications require.

See how we can help connect the dots from your different OpenStack infrastructure components all the way up to the application front end, and provide great performance and user experience in your business-critical applications.

Have you made, or seen, any OpenStack monitoring mistakes that top these? Share your thoughts in the comments section below; I learn just as much from you as you do from me.
