Applying Dynatrace AI into our Digital Performance Life: Best of December 2017


Share Your PurePath was my personal program to help Dynatrace AppMon users make sense of their captured application performance data. I analyzed their exported PurePaths and sent my findings back in a PowerPoint. Thanks to the several hundred users who sent me PurePaths over the last few years, I’ve written numerous blogs based on the problems we discovered. Many of the detected patterns made it into an out-of-the-box feature in AppMon: PurePath Analysis using Automatic Problem Detection!

In our new Dynatrace world, most of this Analysis Magic happens automatically, behind the scenes, and on a much larger data set and scale. We invested in OneAgent (better-quality full-stack data), Anomaly Detection (multi-dimensional baselining) and the Dynatrace Artificial Intelligence. If you want to read about how the AI works, check out my blog on Dynatrace AI Demystified.

But does it work outside of your demo environments?

Many of our AppMon users, as well as folks who use competitive products and have seen a Dynatrace demo, often wonder: “Looks great in the demo! BUT – what types of problems does Dynatrace detect in non-demo environments? How will it make my life as a Cloud Operator, SRE, DevOps Engineer or Performance Architect easier?”

Educating through “Share your AI-Detected Problem”!

To help shine a light on automatic problem detection in Dynatrace, I decided to start a new program that I call “Share your AI-Detected Problem”!

Any Dynatrace user (paying or trial) can send me a link or screenshots of their Dynatrace AI-detected problem(s). The purpose is not so much to diagnose the captured data and find the root cause (that step has been automated); it is to educate the larger digital performance community on what types of problems our AI detects and to explain how to access the root-cause data for faster problem resolution. I also share my thoughts on building self-healing, auto-remediation scripts for these scenarios. I strongly believe this is going to be the next major task in our self-driven IT industry!

Now, for this blog I picked three simple scenarios:

  • 3rd-party Gemfire service outage resulting in a high end-user service failure rate
  • Broken links (HTTP 404) on newly rolled-out features on the Dynatrace Partner Portal
  • Slow disk on EC2 causing Nginx errors and slowing down dynatrace.com!

Problem Ticket #1: Gemfire Service Outage

This problem was detected during a recent Dynatrace Proof of Concept. Special thanks to my colleagues Lauren, Jeff, Matt and Andrew for sharing this story. They forwarded me email exchanges with the prospect – highlighting the detected impact and the actual root cause. For data privacy reasons, the screenshots have been blurred but I think you can see how helpful the AI was in this particular case:

Step #1: Everything Starts with a Problem Ticket

Every time Dynatrace detects a problem, it opens a problem ticket that stays open until the problem’s impact is resolved. Dynatrace captures all relevant events while the problem is impacting your end users and SLAs. In demos, we most often show the problem details and each automatically correlated event (log messages, infrastructure problems, configuration changes, response time hotspots …) in the Dynatrace UI. In production environments or during Proof of Concepts, our users typically trigger notifications via the Dynatrace Incident Notification Integration (e.g. sending the details to ServiceNow, PagerDuty, VictorOps, OpsGenie, a Lambda function, our mobile app …).
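To make the integration side a bit more concrete, here is a minimal sketch of a custom webhook receiver for such problem notifications. The payload field names (ProblemID, ProblemTitle, State, ProblemURL) are assumptions modelled on the custom-integration placeholders, so verify them against your own notification configuration before relying on them.

```python
# Minimal sketch of a custom webhook receiver for Dynatrace problem
# notifications. Field names (ProblemID, ProblemTitle, State, ProblemURL)
# are assumptions -- verify them against your own notification setup.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProblemNotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        problem = json.loads(self.rfile.read(length) or b"{}")

        # Forward the essentials to your incident tool of choice
        # (ServiceNow, PagerDuty, a chat channel, ...).
        print(f"[{problem.get('State')}] Problem {problem.get('ProblemID')}: "
              f"{problem.get('ProblemTitle')} -> {problem.get('ProblemURL')}")

        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProblemNotificationHandler).serve_forever()
```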

Now, let’s get to the first shared problem. The following screenshot is what Dynatrace shows in the problem overview screen for each detected problem. Dynatrace automatically detected that multiple services were impacted over a period of 1h 53mins. It lists all impacted services by name and gives us information about how many service requests were actually impacted.

Problem ticket overview: 1h 53m impact on a canary service in the cloud, impacting ALL 1.55k dynamic requests per minute!

Step #2: Exploring Problem Details

The Full Problem Details – also accessible via the Problem REST API – show us just how much data and how many dependencies Dynatrace analyzed for us, along with the actual problem, the impact and the root cause:

Dynatrace analyzed 285 million dependencies and data points and tells us which service endpoints are suffering from performance and failure-rate spikes!
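If you prefer to pull the same information programmatically rather than through the UI, a small script along the following lines could fetch currently open problems. The endpoint path (/api/v1/problem/feed), query parameters and response fields are assumptions based on the first-generation Problems API; adjust them to your environment and API version.

```python
# Sketch: pull currently open problems via the Dynatrace Problems REST API.
# Endpoint path, query parameters and response fields are assumptions --
# adjust to your environment and API version.
import requests

DT_ENVIRONMENT = "https://YOUR-ENVIRONMENT.live.dynatrace.com"  # placeholder
API_TOKEN = "YOUR_API_TOKEN"                                    # placeholder

def open_problems():
    resp = requests.get(
        f"{DT_ENVIRONMENT}/api/v1/problem/feed",
        headers={"Authorization": f"Api-Token {API_TOKEN}"},
        params={"status": "OPEN"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("result", {}).get("problems", [])

for p in open_problems():
    print(p.get("displayName"), "-", p.get("impactLevel"), "-", p.get("status"))
```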

Step #3: Clicking on the Impacted Service to Find Root Cause

On the problem ticket, we can either click into the Impacted Service or into the detected Root Cause section. In our case, the next click is on the Impacted Service – Failure Rate, which has increased to 93%. This brings us to the automated baseline graph, showing how Dynatrace detected this anomaly. All of this happens fully automatically, without configuring any thresholds and without telling Dynatrace which endpoints each service offers. Just install the OneAgent on your hosts; the rest is auto-detected. That’s true zero-configuration monitoring.

In the baseline graph, which is available for all service endpoints across multiple dimensions, the problematic time range gets automatically marked by Dynatrace due to its abnormal behavior:

Failure Rate spike and drop in throughput automatically detected by Dynatrace on that particular service – impacting ALL dynamic requests
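Dynatrace’s actual baselining is multi-dimensional and far more sophisticated than anything shown here, but a toy sketch helps illustrate the underlying idea of flagging values that leave a learned reference corridor instead of comparing them against a fixed threshold. All numbers and the three-sigma rule below are purely illustrative assumptions.

```python
# Toy illustration of baseline-based anomaly flagging on a failure-rate
# series. This is NOT how Dynatrace's baselining works internally -- it only
# illustrates the concept of a learned corridor vs. a fixed threshold.
from statistics import mean, stdev

def anomalous_minutes(failure_rates, learn_window=60, sigmas=3.0):
    """Return indices whose failure rate leaves the learned corridor."""
    baseline = failure_rates[:learn_window]
    upper = mean(baseline) + sigmas * stdev(baseline)
    return [i for i, rate in enumerate(failure_rates[learn_window:], learn_window)
            if rate > upper]

# Example: a stable ~2% failure rate that suddenly jumps to ~93%.
series = [0.02, 0.03, 0.01] * 20 + [0.03, 0.93, 0.91, 0.92]
print(anomalous_minutes(series))   # -> [61, 62, 63]
```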

Tip: Notice the different diagnostics options in the screen above, such as switching between Failure rate and HTTP errors, analyzing Response Time, CPU or Throughput issues (the top tabs) or clicking on the next diagnostics options such as View details of failures or Analyze backtraces. If you want to learn more about these diagnostics options, I suggest you watch my recent Performance Clinic on Basic Diagnostics with Dynatrace.

In our case, we want to see the actual root cause of the increased failure rate. Clicking on View details of failures brings us to that answer:

All failures are HTTP 500s caused by an unavailable Gemfire cache service.

Clicking on the Details button in the bottom left even reveals the actual code that tries to call Gemfire but fails with a ServerRefusedConnectionException.

Summary: The external cache service Gemfire became unavailable. This caused requests on our monitored service to receive HTTP 500s from Gemfire, which ultimately led to a higher failure rate for the end user. If the host running Gemfire had also been instrumented with a OneAgent, the AI would have automatically pointed us to the crash of that process, which ultimately turned out to be the issue.

Self-Healing thoughts: In a recent blog, I started to write about Self-Healing and listed a couple of auto-remediation examples. In this scenario, we could write self-healing scripts that validate why Gemfire is refusing network connections: it could be a crashed service, a network issue or a configuration issue in the connection pools on either end (caller and callee). Using the Dynatrace REST API allows us to write better mitigation actions, because all of this root-cause data is exposed in the context of the actual end-user impacting problem.
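As a sketch of what such an auto-remediation hook could look like, the script below pulls a problem’s root-cause events from the REST API and triggers a restart if the failure looks like refused connections to the cache service. The endpoint path, response fields and restart command are illustrative assumptions only; real remediation logic would of course need far more safeguards.

```python
# Sketch of an auto-remediation hook: inspect a problem's root-cause data
# and restart the suspect service if the failure looks like refused
# connections. Endpoint path, response fields and the restart command are
# illustrative assumptions -- wire in your own environment specifics.
import subprocess
import requests

DT_ENVIRONMENT = "https://YOUR-ENVIRONMENT.live.dynatrace.com"  # placeholder
API_TOKEN = "YOUR_API_TOKEN"                                    # placeholder

def remediate(problem_id):
    details = requests.get(
        f"{DT_ENVIRONMENT}/api/v1/problem/details/{problem_id}",
        headers={"Authorization": f"Api-Token {API_TOKEN}"},
        timeout=10,
    ).json().get("result", {})

    for event in details.get("rankedEvents", []):
        # Hypothetical check: does a root-cause event mention refused
        # connections to the Gemfire cache service?
        if event.get("isRootCause") and "ServerRefusedConnection" in str(event):
            # Placeholder remediation: restart the Gemfire cache service.
            subprocess.run(["systemctl", "restart", "gemfire"], check=False)
            return "restart triggered"
    return "no automatic action taken"
```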

Problem Ticket #2: Functional Issues on new Feature Rollout

The next problem ticket is from our own Dynatrace production environment, which we use to monitor our key web properties such as our website, blog, community and support portal – let’s take a peek!

Step #1: Problem Ticket Details

Problem 668 is one I looked at while it was still ongoing – hence the problem is still shown in red, indicating it had been open for the last 22 minutes. This problem shows that Dynatrace not only detects anomalies for the whole service, but also on individual service or REST endpoints (that’s the automated multi-dimensional baselining capability). In the case of Problem 668, Dynatrace detected a failure-rate increase to 13% on a specific endpoint we expose on www.dynatrace.com:

Dynatrace highlights that the impact comes from the backend nginx caching layer, causing a 13% failure rate on a single endpoint we expose via www.dynatrace.com. Fortunately, just a single user was impacted!

Step #2: Root Cause Analysis

At first, it almost seems odd that Dynatrace alerts just because one user is having an issue. But once we dig deeper, we understand why!

Clicking on the Impacted Service details brings us to the Failure Rate graph for www.dynatrace.com. The view is automatically filtered to the problematic endpoint, /data/rfopartner.json. The sudden jump in failure rate triggered an anomaly event, which in turn created the problem ticket:

Clicking on impacted services or root cause brings us to a diagnostics details view that is automatically filtered to the right endpoint and timeframe.

Root cause details for that failure rate spike are just one click away: Analyze failure rate degradation!

17 Broken Link Requests all coming from the same internal dynalabs.io domain.

Knowing that these 404s are “only” coming from an internal site is good news, as no real end user has yet seen the problem. But why is that? It turns out this internal domain is a test site used to validate a new feature on our partner portal that was soon to be released. Automatically detecting this behavior allows our partner portal website team to fix these incorrect links before deploying this version to the live system. You should check out a YouTube video I did with Stefan Gusenbauer, who showed how we use Dynatrace internally in combination with automated functional regression tests: instead of relying on the functional test results alone, we can combine them with the data Dynatrace captured.
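One simple way to combine the two data sources is to tag every request a functional test sends with a custom header and capture that header on the Dynatrace side (for example as a request attribute), so failing test steps can be matched to their server-side traces. The header name and URL below are illustrative assumptions, not our exact internal setup.

```python
# Sketch: tag functional-test traffic with a custom header so server-side
# traces can be correlated with individual test steps. The header name
# "x-test-context" is an illustrative assumption; in Dynatrace, such a
# header could be captured via a request attribute.
import requests

def run_test_step(step_name, url):
    resp = requests.get(url, headers={"x-test-context": step_name}, timeout=10)
    # A broken link surfaces here as a 404 *and* is attributable to the
    # exact test step on the monitoring side.
    return step_name, resp.status_code

print(run_test_step("partner-portal-new-feature",
                    "https://www.dynatrace.com/data/rfopartner.json"))
```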

Back to this problem ticket: The actual root cause of the 404s was a service hosted on nginx that connects some of the new capabilities of the partner portal with legacy data. The PurePaths captured for these errors show exactly where the 404s originate – in the legacy connector service – and how they propagate back to www.dynatrace.com.

Our beloved PurePath
