Applying Dynatrace AI into our Digital Performance Life: Best of December 2017

Share Your PurePath was my personal program to help Dynatrace AppMon users make sense of their captured application performance data. I analyzed their exported PurePaths and sent my findings back in a PowerPoint. Thanks to the several hundred users who sent me PurePaths over the last years, I've written numerous blogs based on the problems we discovered. Many of the detected patterns made it into an out-of-the-box feature in AppMon: PurePath Analysis using Automatic Problem Detection!

In our new Dynatrace world, most of this Analysis Magic happens automatically, behind the scenes, and on a much larger data set and scale. We invested in OneAgent (better quality full-stack data), Anomaly Detection (multi-dimensional baselining) and the Dynatrace Artificial Intelligence. If you want to read how the AI works, check out my blog on Dynatrace AI Demystified.

But does it work outside of your demo environments?

Many of our AppMon users, as well as folks who use competitive products and have seen a Dynatrace demo, often wonder: “Looks great in the demo! BUT – what type of problems does Dynatrace detect in non-demo environments? How will it make my life as a Cloud Operator, SRE, DevOps Engineer or Performance Architect easier?”

Educate through Share your AI-Detected Problem!

To help shine a light on automatic problem detection in Dynatrace, I decided to start a new program that I call “Share your AI-Detected Problem”.

Any Dynatrace user (paying or trial) can send me a link or screenshots of their Dynatrace AI-detected Problem(s). The purpose is not so much to diagnose the captured data and find the root cause (that step has been automated); it is more about educating the larger digital performance community on what types of problems our AI detects, and explaining how to access the root-cause data for faster problem resolution. I also share my thoughts on building self-healing, auto-remediation scripts for these scenarios. I strongly believe that this is going to be our next major task in our self-driven IT industry!

Now, for this blog I picked three simple scenarios:

  • 3rd party Gemfire service outage resulting in a high end-user service failure rate
  • Broken links (HTTP 404) on newly rolled-out features on the Dynatrace Partner Portal
  • Slow disk on EC2 causing nginx errors and slowing down dynatrace.com!

Problem Ticket #1: Gemfire Service Outage

This problem was detected during a recent Dynatrace Proof of Concept. Special thanks to my colleagues Lauren, Jeff, Matt and Andrew for sharing this story. They forwarded me email exchanges with the prospect – highlighting the detected impact and the actual root cause. For data privacy reasons, the screenshots have been blurred but I think you can see how helpful the AI was in this particular case:

Step #1: Everything Starts with a Problem Ticket

Every time Dynatrace detects a problem, it opens a problem ticket which stays open until the problem's impact is resolved. Dynatrace captures all relevant events while the problem is impacting your end users and SLAs. In demos, we most often show the problem details and each automatically correlated event (log messages, infrastructure problems, configuration changes, response time hotspots, …) in the Dynatrace UI. In production environments or during Proofs of Concept, our users typically trigger notifications via the Dynatrace Incident Notification Integration (e.g. sending the details to ServiceNow, PagerDuty, VictorOps, OpsGenie, a Lambda function, our mobile app, …).
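To make this a bit more concrete, here is a minimal sketch of what a custom notification receiver could look like: a small webhook that accepts the problem-notification payload and forwards a condensed incident record to a ticketing system. The payload field names (ProblemID, State, ProblemTitle, ImpactedEntity) and the ticketing URL are assumptions for illustration – check the payload your own custom integration actually sends.

```python
# Minimal sketch of a custom problem-notification webhook. Assumption: the
# payload fields mirror the placeholders used in a custom notification
# integration; verify against what your own tenant sends.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
TICKET_SYSTEM_URL = "https://ticketing.example.com/api/incidents"  # hypothetical

@app.route("/dynatrace/problem", methods=["POST"])
def handle_problem_notification():
    payload = request.get_json(force=True)

    # Pull the fields we care about; defaults keep the handler robust
    # if a field is missing from the notification payload.
    problem_id = payload.get("ProblemID", "unknown")
    state = payload.get("State", "OPEN")        # e.g. OPEN or RESOLVED
    title = payload.get("ProblemTitle", "")
    impact = payload.get("ImpactedEntity", "")

    # Forward a condensed incident record to the (hypothetical) ticket system.
    requests.post(TICKET_SYSTEM_URL, json={
        "source": "Dynatrace",
        "problem_id": problem_id,
        "state": state,
        "summary": title,
        "impacted_entity": impact,
    }, timeout=10)

    return jsonify({"status": "accepted", "problem_id": problem_id}), 202

if __name__ == "__main__":
    app.run(port=8080)
```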

Now, let’s get to the first shared problem. The following screenshot is what Dynatrace shows in the problem overview screen for each detected problem. Dynatrace automatically detected that multiple services were impacted over a period of 1h 53mins. It lists all impacted services by name and gives us information about how many service requests were actually impacted.

Problem ticket overview: 1h 53m impact. A canary service in the cloud. Impacting ALL 1.55k dynamic requests per minute!

Step #2: Exploring Problem Details

The full Problem Details view – also accessible via the Problem REST API – shows us just how many data points and dependencies Dynatrace analyzed for us, as well as the actual problem, the impact and the root cause:

Dynatrace analyzed 285 million dependencies and data points and tells us which service endpoints are suffering from performance and failure rate spikes!
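If you prefer scripting over clicking, the same details can be pulled programmatically. The following sketch assumes the v1 Problems REST API endpoint (/api/v1/problem/details/{id}) and an API token; verify the endpoint and field names against your tenant's API documentation before relying on them.

```python
# Sketch: fetch the details of a detected problem via the Dynatrace REST API.
# Endpoint path and response fields are assumptions based on the v1 Problems
# API; adjust to your tenant and API version.
import requests

TENANT = "https://mytenant.live.dynatrace.com"   # hypothetical tenant URL
API_TOKEN = "YOUR_API_TOKEN"                      # placeholder
PROBLEM_ID = "668"                                # e.g. the problem discussed below

resp = requests.get(
    f"{TENANT}/api/v1/problem/details/{PROBLEM_ID}",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
details = resp.json().get("result", {})

print("Status:        ", details.get("status"))
print("Impact level:  ", details.get("impactLevel"))
print("Ranked events: ", len(details.get("rankedEvents", [])))
```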

Step #3: Clicking on the Impacted Service to Find Root Cause

On the problem ticket, we can either click into the Impacted Service or into the detected Root Cause section. In our case, the next click is on the Impacted Service – Failure Rate, which has increased to 93%. This brings us to the automated baseline graph, showing how Dynatrace detected this anomaly. All of this happens fully automatically, without having to configure any thresholds and without having to tell Dynatrace which services exist or which endpoints they offer. Just install the OneAgent on your hosts; the rest is auto-detected. That's true zero-configuration monitoring.

In the baseline graph, which is available for all service endpoints across multiple dimensions, the problematic time range gets automatically marked by Dynatrace due to its abnormal behavior:

Failure Rate spike and drop in throughput automatically detected by Dynatrace on that particular service – impacting ALL dynamic requests
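To make the idea behind these automated baselines a bit more tangible, here is a deliberately simplified sketch of baseline-driven anomaly detection. This is not Dynatrace's actual algorithm (which is multi-dimensional and adapts automatically); it only illustrates the principle of flagging a failure-rate spike against a learned reference value:

```python
# Toy illustration of baseline-driven anomaly detection: flag a time slot
# whose failure rate deviates strongly from the median of a reference window.
# This is NOT the Dynatrace baselining algorithm, just the underlying idea.
from statistics import median

def detect_failure_rate_anomalies(failure_rates, window=30, factor=3.0, floor=0.02):
    """failure_rates: per-minute failure rates (0.0 - 1.0), oldest first."""
    anomalies = []
    for i in range(window, len(failure_rates)):
        baseline = median(failure_rates[i - window:i])
        current = failure_rates[i]
        # Require both a relative jump and a minimal absolute failure rate
        # so that noise around 0% does not trigger alerts.
        if current > max(baseline * factor, floor):
            anomalies.append((i, baseline, current))
    return anomalies

# Example: a flat ~1% failure rate with a spike to 93% in the last minutes.
series = [0.01] * 60 + [0.93] * 5
for minute, baseline, current in detect_failure_rate_anomalies(series):
    print(f"minute {minute}: baseline {baseline:.1%}, observed {current:.1%}")
```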

Tip: Notice the different diagnostics options in the screen above, such as switching between Failure rate and HTTP errors, analyzing Response Time, CPU or Throughput issues (the top tabs) or clicking on the next diagnostics options such as View details of failures or Analyze backtraces. If you want to learn more about these diagnostics options, I suggest you watch my recent Performance Clinic on Basic Diagnostics with Dynatrace.

In our case, we want to see the actual root cause of the increased failure rate. Clicking on View details of failures brings us to that answer:

All failures are HTTP 500s caused by an unavailable Gemfire cache service.

Clicking on the Details button in the bottom left even reveals the actual code that tries to call Gemfire but fails with a ServerRefusedConnectionException.

Summary: The external cache service Gemfire became unavailable. This caused requests on our monitored service to fail with HTTP 500s, which ultimately led to a higher failure rate for the end user. Had the host running Gemfire been instrumented with OneAgent as well, the AI would have automatically pointed us to the crash of that process, which ultimately turned out to be the issue.

Self-Healing thoughts: In a recent blog, I started writing about self-healing and listed a couple of auto-remediation examples. In this scenario, we could write self-healing scripts that validate why Gemfire is refusing network connections: it could be a crashed service, a network issue, or a configuration issue in the connection pools on either end (caller and callee). Using the Dynatrace REST API allows us to write better mitigation actions, because all of this root-cause data is exposed in the context of the actual end-user-impacting problem.
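As a starting point, here is a sketch of what such an auto-remediation step could look like: read the problem's root-cause events from the REST API, and if the suspected cause is the refused connection to the cache service, check whether that service is still running on its host and restart it if not. The API path, field names, host name and service name are all assumptions for illustration:

```python
# Sketch of an auto-remediation step for the "Gemfire refuses connections"
# scenario: inspect the problem's root-cause events, then verify and restart
# the cache service. API paths, field names, host and service names are
# assumptions; adapt them to your environment.
import subprocess
import requests

TENANT = "https://mytenant.live.dynatrace.com"   # hypothetical
API_TOKEN = "YOUR_API_TOKEN"                      # placeholder
GEMFIRE_HOST = "cache-host-01"                    # hypothetical host
GEMFIRE_SERVICE = "gemfire-server"                # hypothetical systemd unit

def root_cause_mentions_refused_connection(problem_id: str) -> bool:
    resp = requests.get(
        f"{TENANT}/api/v1/problem/details/{problem_id}",
        headers={"Authorization": f"Api-Token {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    events = resp.json().get("result", {}).get("rankedEvents", [])
    return any("ServerRefusedConnectionException" in str(event) for event in events)

def remediate(problem_id: str) -> None:
    if not root_cause_mentions_refused_connection(problem_id):
        return  # different root cause -- leave it to a human or another runbook
    # Check whether the cache service is still running on its host and
    # restart it if not (assumes SSH access and a systemd-managed service).
    status = subprocess.run(
        ["ssh", GEMFIRE_HOST, "systemctl", "is-active", GEMFIRE_SERVICE],
        capture_output=True, text=True,
    )
    if status.stdout.strip() != "active":
        subprocess.run(["ssh", GEMFIRE_HOST, "sudo", "systemctl", "restart", GEMFIRE_SERVICE])

if __name__ == "__main__":
    remediate("YOUR_PROBLEM_ID")
```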

Problem Ticket #2: Functional Issues on new Feature Rollout

The next problem ticket is from our own Dynatrace production environment, which we use to monitor our key web properties such as our website, blog, community, support portal, … – let's take a peek!

Step #1: Problem Ticket Details

Problem 668 was a problem I looked at while it was still ongoing – hence the problem still being colored red, indicating that it has been open for the last 22 minutes. This problem shows that Dynatrace not only detects anomalies for the whole service, but also on individual service or REST endpoints (that's the automated multi-dimensional baselining capability). In the case of Problem 668, Dynatrace detected a failure-rate increase to 13% on a specific endpoint we expose on www.dynatrace.com:

Dynatrace highlights the impact to come from the backend nginx caching layer causing a 13% failure rate on a single endpoint that we expose via www.dynatrace.com. Fortunately, just a single user impacted!

Step #2: Root Cause Analysis

At first, it almost seems odd that Dynatrace alerts just because one user is having an issue. But once we dig deeper, we understand why!

Clicking on the Impacted Service details brings us to the Failure Rate graph for www.dynatrace.com. The view gets automatically filtered to the problematic endpoint, which is /data/rfopartner.json. The sudden jump in failure rate triggered the creation of an anomaly event, which in turn resulted in this problem ticket:

Clicking on impacted services or root cause brings us to the diagnostics details view which is automatically filtered to the right endpoint and timeframe.

Root cause details for that failure rate spike are just one click away: Analyze failure rate degradation!

17 Broken Link Requests all coming from the same internal dynalabs.io domain.

Knowing that these 404s are “only” coming from an internal site is good news, as no real end user has seen this problem yet. But why is that? It turns out that this internal domain is a test site used to validate a new feature on our partner portal that was soon to be released. Automatically detecting this behavior allows our partner portal website team to fix these incorrect links before deploying this version to the live system. You should also check out the YouTube video I did with Stefan Gusenbauer, who showed how we use Dynatrace internally in combination with automated functional regression tests. Instead of just relying on the functional test results, we can combine them with the data Dynatrace captured.
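To illustrate what such a functional check could look like outside the UI, here is a minimal pytest-style sketch that hits the endpoint discussed in this problem ticket and fails on a 404, so broken links are caught on the test domain before a release. The test-site base URL is a placeholder:

```python
# Sketch of a functional regression check for the partner-portal endpoint.
# The test-site base URL is a placeholder; /data/rfopartner.json is the
# endpoint discussed in this problem ticket.
import requests

TEST_SITE = "https://partnerportal-test.example.com"  # hypothetical test domain

def test_partner_data_endpoint_is_reachable():
    resp = requests.get(f"{TEST_SITE}/data/rfopartner.json", timeout=10)
    # A 404 here means the new feature would ship with broken links.
    assert resp.status_code != 404, "Broken link: /data/rfopartner.json returned 404"
    resp.raise_for_status()
```

A failing check like this, combined with the PurePaths Dynatrace captures for the same requests, immediately points to the service where the 404 actually originates.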

Back to this problem ticket: the actual root cause of the 404s was in a service hosted on nginx that connects some of the new capabilities of the partner portal with some legacy data. The PurePaths captured for these errors show exactly where the 404s originate – in the legacy connector service – and how they propagate back to www.dynatrace.com.

Our beloved PurePath
