Automating Shared Infrastructure Impact Analysis: Why Monitoring Backend Jobs is as important as monitoring applications

This post illustrates how to effectively automate shared infrastructure impact analysis so that it covers both your back-end jobs and your applications.

Last week Gerald, one of our Dynatrace AppMon super users, sent me a PurePath as part of my Share Your PurePath program. He wanted my opinion on the high I/O time they sporadically see in some of the key transactions of their Adobe-based document management system. The hotspot he was trying to understand was easy to spot in the PurePath of one of the slow transactions: creating a local file took very long!

createFileExclusively takes up to 18s in one of their key document management system transactions

To find out which file took that long to create, I asked Gerald to instrument File.createNewFile so that the actual file name was captured for all transactions. It turned out we were talking about files written to a temporary directory on the local D: drive.

Instrumenting createNewFile makes it easy to see which files were created across the many transactions that were executed
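Outside of Dynatrace, you could approximate this kind of ad-hoc instrumentation by wrapping the file-creation call yourself. The following sketch is purely illustrative (the path and the one-second threshold are made up, and this is not the AppMon sensor configuration); it just shows the two data points the sensor gave us per call: the file path and the elapsed time.

```java
import java.io.File;
import java.io.IOException;

// Illustrative sketch only: Dynatrace AppMon captures this via a method sensor on
// File.createNewFile. This wrapper shows the equivalent data you would want per call
// (file path and elapsed time) if you had to log it by hand at the call sites.
public final class TimedFileCreation {

    public static boolean createNewFileTimed(File file) throws IOException {
        long start = System.nanoTime();
        try {
            // createNewFile delegates to the native createFileExclusively,
            // which is where the 18s hotspot showed up in the PurePath
            return file.createNewFile();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMs > 1000) {
                System.err.printf("Slow file creation (%d ms): %s%n",
                        elapsedMs, file.getAbsolutePath());
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical temp directory, standing in for the D: drive folder from the post
        createNewFileTimed(new File("D:/temp/adobe/example.tmp"));
    }
}
```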

Now – this by itself didn’t explain why creating these files in that directory was slow. As a next step, we looked at the process and host metrics of that machine delivered by the Dynatrace Agent. I wanted to see whether there was anything suspicious.

The Process Health dashboard indicated that there was some very aggressive memory allocation going on within that JBoss. Multiple times a minute the Young Generation heap spiked to almost 5GB before Garbage Collection (GC) kicked in. These spikes also coincided with high CPU utilization of that process – which makes sense, because the GC needs to clean up memory and that can be very CPU intensive. Another thing I noticed was a constantly high number of active threads on that JBoss instance, which correlated with the high volume of transactions being actively processed:

Process Health metrics give you a good indication of whether anything suspicious is going on, such as strange memory allocation patterns, throughput spikes or threading problems.
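If you want to sanity-check the same process health signals without an agent, the standard JMX platform beans expose them from inside the JVM. The following is only a rough sketch and assumes the collector exposes an "Eden" pool name; the Dynatrace Agent charts these values for you automatically.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.ThreadMXBean;

// Sketch: poll the same categories of process metrics discussed above
// (young-gen usage, GC activity, live thread count) via standard JMX beans.
// Pool names vary by collector ("PS Eden Space", "G1 Eden Space", ...), so we match loosely.
public class ProcessHealthProbe {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        while (true) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getName().contains("Eden")) {
                    long usedMb = pool.getUsage().getUsed() / (1024 * 1024);
                    System.out.printf("Young gen (%s): %d MB used%n", pool.getName(), usedMb);
                }
            }
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("GC %s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            System.out.printf("Live threads: %d%n", threads.getThreadCount());
            Thread.sleep(10_000); // sample every 10 seconds
        }
    }
}
```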

Looking at the Host Health metrics just confirmed what I had seen in the process metrics: CPU spikes caused by heavy GC due to these memory allocations. I blamed the high number of page faults on the same memory behavior. Since we were dealing with high disk I/O, I looked at the disk metrics – especially for drive D:. It seemed, though, that there was no real problem there. Consulting with the sys admin brought no new insights either.

Host Health metrics are a good way to see whether there is anything wrong on that host that potentially impacts the running applications, e.g. constraints on CPU, memory, disk, or network.

As there was no clear indication of a general I/O issue with the disks, I asked Gerald to look at file handles for that process and on that machine. Something had to be blocking or slowing down file I/O when creating files in that directory. Maybe other processes on that same machine that were not yet monitored by Dynatrace, or some background threads within that JBoss, were holding too many open file handles, leading to the strange I/O wait times in the core business transactions?
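For the file-handle question itself, a quick self-check from inside the JVM is possible on Unix-like hosts via com.sun.management.UnixOperatingSystemMXBean; on a Windows host like the one in this story you would reach for Sysinternals tools (handle.exe or Process Explorer) instead. The snippet below is just a sketch of that check, not something Gerald actually ran.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Sketch: one quick self-check for file-handle pressure from inside the JVM.
// The descriptor counters are only exposed on Unix-like platforms via this bean;
// on Windows, count open handles with Sysinternals handle.exe / Process Explorer.
public class FileHandleCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unixOs =
                    (com.sun.management.UnixOperatingSystemMXBean) os;
            System.out.printf("Open file descriptors: %d / %d max%n",
                    unixOs.getOpenFileDescriptorCount(),
                    unixOs.getMaxFileDescriptorCount());
        } else {
            System.out.println("Descriptor count not exposed on this platform via JMX.");
        }
    }
}
```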

Background Scheduler to blame!

It turned out that looking into background activity was the right path to follow! Gerald created a CPU Sample of that JBoss instance using Dynatrace. Besides capturing PurePaths for active business transactions, it is really handy that we can also create CPU Samples, Thread Dumps or Memory Dumps for any process that has a Dynatrace AppMon Agent injected.

The CPU Sample showed a very suspicious background thread. The following screenshot of the Dynatrace CPU Sample highlights that one background thread is not only causing high file I/O through the File.delete operation; it is also causing high CPU through Adobe’s WatchFolderUtils.deleteMarkedFiles method, which ultimately deletes all these files. Some Googling helped us learn that this method is part of a background job that iterates through that temporary directory on D:. The job tries to find files that match a certain criterion, marks them, and eventually deletes them.

The CPU Sample helped identify the background job that causes high CPU as well as blocked I/O access to that directory on drive D:
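If you don’t have a profiler at hand, a very rough equivalent of such a CPU sample can be built from ThreadMXBean: take two snapshots of per-thread CPU time and dump the stacks of the busiest threads. The thresholds and the five-second window below are arbitrary; this is a sketch of the idea, not the Dynatrace sampling mechanism.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Sketch: a poor man's CPU sample. Compare per-thread CPU time between two
// snapshots and print the stacks of the threads that burned the most CPU.
public class BusyThreadFinder {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
        long[] ids = tmx.getAllThreadIds();

        // first snapshot of per-thread CPU time (-1 if unavailable for a thread)
        long[] before = new long[ids.length];
        for (int i = 0; i < ids.length; i++) {
            before[i] = tmx.getThreadCpuTime(ids[i]);
        }

        Thread.sleep(5_000); // sampling window

        for (int i = 0; i < ids.length; i++) {
            long delta = tmx.getThreadCpuTime(ids[i]) - before[i];
            if (delta > 1_000_000_000L) { // more than 1s of CPU within the 5s window
                ThreadInfo info = tmx.getThreadInfo(ids[i], 10);
                if (info != null) {
                    System.out.printf("Busy thread: %s (%.1fs CPU)%n",
                            info.getThreadName(), delta / 1e9);
                    for (StackTraceElement frame : info.getStackTrace()) {
                        System.out.println("    at " + frame);
                    }
                }
            }
        }
    }
}
```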

A quick chat with the Adobe administrator resulted in the following conclusions:

  1. The File Clean Task is scheduled to run every minute – probably too frequently!
  2. Very often this task can’t complete within one minute, which causes it to back up and clash with the next cleanup run.
  3. Due to a misconfiguration, the cleanup job didn’t clean up all the files it was supposed to. That led to many “leftovers” which the Watch Utility had to iterate over every minute, leading to even longer task completion times.

The longer the system was up and running, the more files were left over in that folder. This led to even more impact on the application itself, as file access to that folder was constantly blocked by that background job.
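The first two findings describe a classic scheduling pitfall: a job triggered at a fixed rate starts overlapping with its previous run once the run time exceeds the interval. The sketch below shows the usual remedy in plain Java, scheduleWithFixedDelay, which only starts counting the next interval after the previous sweep has finished. The folder path and file filter are hypothetical; Adobe’s actual cleanup job works differently.

```java
import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the scheduling pitfall described above, not Adobe's actual job:
// with scheduleWithFixedDelay, the one-minute pause begins only after the
// previous sweep completes, so runs can never pile up and clash.
public class WatchFolderCleanup {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        Runnable sweep = () -> {
            File watchDir = new File("D:/temp/watchfolder"); // hypothetical folder
            File[] candidates = watchDir.listFiles((dir, name) -> name.endsWith(".tmp"));
            if (candidates == null) {
                return; // directory missing or not readable
            }
            for (File f : candidates) {
                if (!f.delete()) {
                    System.err.println("Leftover file could not be deleted: " + f);
                }
            }
        };

        // fixed delay instead of fixed rate: a long-running sweep simply pushes
        // the next one back rather than overlapping with it
        scheduler.scheduleWithFixedDelay(sweep, 1, 1, TimeUnit.MINUTES);
    }
}
```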

Dynatrace: Monitoring redefined

While this is a great success story, it shows how important it is to monitor all components that share infrastructure with your applications. Gerald and his team were lucky that this background job was actually part of the same JBoss instance that ran the application and that it was monitored with a Dynatrace AppMon Agent. Had the job been running in a separate process, or even on a separate machine, root cause analysis would have been much harder.

Exactly for that reason we extended the Dynatrace monitoring capabilities. Our Dynatrace OneAgent not only monitors individual applications but also automatically monitors all services, processes, network connections and their dependencies on the underlying infrastructure.

Dynatrace automatically captures full host and process metrics through its OneAgent technology

Applying artificial intelligence on top of that rich data allows us to find such problems automatically, without requiring experts like Gerald and his team to perform these forensic steps.

The following screenshot shows how Dynatrace packages such an issue in what we call a “Problem Description”, including the full “Problem Evolution”. The Problem Evolution is like a film strip where you can see which data points Dynatrace automatically analyzed to identify them as part of the root cause. The architectural diagram also shows you how your components relate to and impact each other until they affect the bottom line: the end user, performance, or your SLAs.

Dynatrace automates problem and impact analysis and allows you to “replay” the Problem Evolution to better understand how to address the issue.

If you are interested in learning more about Dynatrace, the new OneAgent and its artificial intelligence, simply try it yourself. Sign up for the SaaS-based Dynatrace Free Trial.

If you want to explore Dynatrace AppMon, go ahead and get your own Dynatrace AppMon Personal License for your on-premises installation.
