
Correlating JavaScript Errors with Slow CDN Performance

JavaScript errors can happen for many different reasons: quirky behavior in browsers that weren’t tested, a real coding mistake that slipped through the delivery pipeline, poorly handled timeouts, and the list goes on.

In this blog we discuss yet another reason, which was brought to my attention by Gerald Madlsperger, one of our long-term Dynatrace AppMon super users: a CDN server issue that prevented static resource files (CSS) from being delivered, leading to spikes in JavaScript errors.

Let’s review the steps that Gerald took to identify and analyze the issue, and learn which metrics he looked at on both the end-user and server sides. And, because not everyone is fortunate enough to have an expert like Gerald on their team, we show you how Dynatrace automates these steps through our Problem Pattern Detection and with Artificial Intelligence.

The Impact: JavaScript Exception Spikes

The problem that Gerald dealt with was visible in a daily spike of JavaScript errors for a particular web application. The spike always occurred at the same time – between 10:30 and 10:35 a.m.:

Charting the number of JavaScript errors captured through real user monitoring in Dynatrace AppMon

The impact was seen across every browser and every geo location. So it was not something that they simply missed in testing, nor was it related to connection or timeout issues in a particular geo location.

The Problem: “Object not found” errors

To learn more about these errors Gerald compared the type of errors occurring prior to the spike and those that occurred during the spike. He wanted to see whether there was a certain pattern, or type of JavaScript error, that occurred more often within that time frame, hoping that this would take him one step closer to the root cause.

It turned out the JavaScript errors that occurred more frequently during these five minutes were all around HTML objects that couldn’t be found on the page by some of the JavaScript code:

Comparing JavaScript errors from two different time frames makes it easy to see which errors caused the spike.
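The comparison step itself is simple to reason about: tally error messages per time window, then diff the tallies to see which error types grew during the spike. Here is a minimal sketch of that idea with made-up error messages (not Gerald’s real captured data):

```javascript
// Tally captured JavaScript error messages per time window.
function tally(errors) {
  const counts = {};
  for (const msg of errors) counts[msg] = (counts[msg] || 0) + 1;
  return counts;
}

// Diff two tallies: positive values mark error types that grew
// during the spike window.
function diffCounts(before, during) {
  const out = {};
  for (const msg of Object.keys(during)) {
    out[msg] = during[msg] - (before[msg] || 0);
  }
  return out;
}

// Illustrative error messages only:
const before = tally(["TypeError: x is null", "SyntaxError"]);
const during = tally([
  "TypeError: x is null",
  "TypeError: x is null",
  "TypeError: x is null",
  "SyntaxError",
]);
const delta = diffCounts(before, during);
// delta highlights "TypeError: x is null" as the growing error type
```

In practice the real-user-monitoring tool does this aggregation for you; the sketch just shows why the diff surfaces the spike’s dominant error type.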

Therefore, the problem was not necessarily bad JavaScript code, but most likely related to components that were missing or couldn’t be loaded on the page.
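When markup depends on assets that may fail to load, defensive lookups keep a missing element from turning into a cascade of “object not found” errors. A minimal sketch of such a guard, written so the lookup function is injected (in a browser you would pass `(id) => document.getElementById(id)`; the stub below stands in for the DOM so the snippet is self-contained):

```javascript
// Guarded element access: run `action` only if the element exists,
// otherwise report it (e.g. to your monitoring tool) and return null.
function withElement(byId, id, action, onMissing) {
  const el = byId(id);
  if (el === null || el === undefined) {
    if (onMissing) onMissing(id); // hook for error reporting
    return null;
  }
  return action(el);
}

// Stub lookup standing in for document.getElementById:
const fakeDom = { banner: { text: "hello" } };
const lookup = (id) => (id in fakeDom ? fakeDom[id] : null);

const found = withElement(lookup, "banner", (el) => el.text);
const missing = withElement(lookup, "nav", (el) => el.text,
  (id) => console.warn("element missing:", id));
```

This doesn’t fix a slow CDN, but it turns a hard JavaScript error into an observable, handled condition.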

Root Cause: Slow CDNs caused by a bad CRON job

Gerald’s next step was to drill down into some of these real end-user browser sessions. He wanted to see whether there was anything else abnormal about them. It turned out that most of these users had one thing in common: a very slowly responding CDN server:

User Action PurePaths show that content from their CDN servers downloaded extremely slowly.

As a final step, Gerald created the following chart, which correlates the number of JavaScript errors with the download time from that CDN server. Now it was clear that every day from 10:30 to 10:35 a.m. there was a download-time spike on the CDN server that correlated with the spike in JavaScript errors:

Clear correlation between slow CDN download times and spikes in the number of JavaScript errors.
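The correlation Gerald eyeballed on the chart can also be quantified. A minimal sketch computing the Pearson correlation coefficient between two per-minute series (the numbers are hypothetical, not the real monitoring data):

```javascript
// Pearson correlation coefficient between two equal-length series.
function pearson(xs, ys) {
  const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs), my = mean(ys);
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < xs.length; i++) {
    const dx = xs[i] - mx, dy = ys[i] - my;
    num += dx * dy;
    dx2 += dx * dx;
    dy2 += dy * dy;
  }
  return num / Math.sqrt(dx2 * dy2);
}

// Illustrative per-minute samples around the 10:30–10:35 window:
const jsErrors    = [12, 15, 14, 250, 310, 280, 16, 13];
const cdnSeconds  = [0.4, 0.5, 0.4, 6.2, 7.8, 7.1, 0.5, 0.4];
const r = pearson(jsErrors, cdnSeconds);
// r near 1 means the two spikes move together
```

A high coefficient supports, but does not prove, causation; the drill-down into individual sessions is what actually tied the errors to the slow CDN.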

CRON jobs to blame

After discussing this data with the systems engineers, it turned out that two of their CDN servers ran the same CRON job for log file rotation at the exact same time every day. This resulted in a brief outage of the CDN. That outage caused static CSS files to load slowly or fail to load entirely, which in turn led the JavaScript code to generate “object not found” errors.
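The usual fix for this class of problem is to stagger the jobs so both nodes never pause at the same moment. A hypothetical crontab sketch (paths and schedule are illustrative, not the team’s actual configuration):

```
# Node A: rotate logs at 10:30
30 10 * * * /usr/sbin/logrotate /etc/logrotate.d/cdn

# Node B: same job, offset by 15 minutes so both CDN nodes
# are never rotating (and briefly unavailable) simultaneously
45 10 * * * /usr/sbin/logrotate /etc/logrotate.d/cdn
```

Adding a small per-host random sleep before the job is another common way to break up lock-step maintenance windows.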

Rely on Dynatrace when Gerald isn’t there for you!

First: Hats off to Gerald for doing a great job digging through the Dynatrace AppMon data. Also, thanks for sharing the dashboards, which are useful when dealing with CDNs or 3rd parties.

While Dynatrace AppMon collects all this data to make troubleshooting such problems easier, it requires you to know how to navigate the data. Because of scenarios like those shared by Gerald and others over the years, we have invested significantly in automating error, problem, and root-cause detection.

In the latest versions of Dynatrace AppMon (sign up for your lifetime AppMon Personal License) we automate problem pattern detection, and highlight the “Top Findings” for both End-User and Server-Side Performance Hotspots in the Dynatrace AppMon Web Interface:

Dynatrace AppMon automatically shows you the top findings on why end user or server side performance is impacted

In the Dynatrace SaaS/Managed platform (sign up for our Dynatrace SaaS trial) we went a step further by running all this data through our Artificial Intelligence Engine. A problem like the one Gerald detected would pop up as a Problem Ticket, including information on the impact and root cause. This allows you to analyze and fix these problems even if you don’t have an expert like Gerald on your team, which means you can spend more time innovating rather than bug hunting.

Dynatrace Artificial Intelligence automatically shows you Impact and Root Cause of any type of End User, Server Side or Infrastructure Issue.

If you have stories like this one that you want to share with your peers, please let us know. Send me an email to share your PurePath or your best artificially detected problem pattern.

The post Correlating JavaScript Errors with Slow CDN Performance appeared first on Dynatrace blog – monitoring redefined.
