Choosing the APM System that Is Right for You

A lot of the arguing in the APM space is about the fundamental approach to monitoring application transactions

In my role as a technology evangelist I spend a lot of time helping organizations, big and small, make their IT systems better, faster and more resilient to faults in order to support their business operations and objectives. I always find it frustrating to "argue" with our competitors about which solution is best. I honestly think that many APM tools on the market do a good job - each with advantages and disadvantages in certain use cases. There is no "one size fits all" - there is just a "this tool fits best for your APM maturity level" (not saying the others wouldn't do a good job).

A lot of the arguing in the APM space is about the fundamental approach to monitoring application transactions: monitor and capture ALL details vs. monitor and capture only "relevant" details. Along with that come topics like "overhead impact", "scalability" and "data hoarding vs. smart analytics".

Ultimately, you want to pick the right tool to solve your problems. As you have multiple tools to choose from, let me highlight some of the use cases that our customers solve. As a technologist and a blogger, what I really care about is that the right technology is applied to the right problem. As such, I feel compelled to share what I have learned working with customers in the trenches. Hopefully, this will help you understand the technology and the problems it can solve in real life, and cut through the propaganda. Let me start with a few use cases today and cover more in follow-up blog posts.

Use Cases from Steven - A Performance Engineer
The first use cases come from Steven, a performance engineer whom I reached out to after I read his question on our APM Community Forum. His company decided to move from a competitor to our APM solution, and I wondered why. In an email, he highlighted that he had some initial success with that tool and had been able to solve a couple of low-hanging problems. When they decided to take a strategic Continuous Delivery approach to software delivery, they realized that the tool had certain shortcomings that slowed their attempts to practice DevOps.

They identified the following key problems they needed to solve, and what they really required from an APM solution in order to get where they were heading:

See how a user got to a problem, not just the problem itself

  • Every transaction, with all details they need, out-of-the-box
  • Web request/response bytes, SQL bind values, exception details for every transaction

Number of transactions executed per user and tenant, used for business and cost reporting

  • Capture custom business context data for every transaction
  • Business transactions based on "buried" context data as not every detail is in the URL

Eliminate homegrown tools which are costly to maintain

  • Provide application as well as system and infrastructure monitoring
  • Integrate with other tools such as JMeter, LoadRunner, Jenkins or HP OpenView

Eliminate the need to make people look at other tools and data

  • Foster collaboration across Architect, Dev, Test & Ops by using the same data set
  • Data must be shareable with a single click

Ability to extend to custom frameworks, systems and protocols

  • Bring in custom metrics from external tools via Java Plugin infrastructure
  • Follow transactions across any custom protocol or technologies outside Java & .NET

Full Automation to support Continuous Delivery

  • Use metrics provided by APM for every build artifact along the deployment pipeline as a quality gate
  • Inform APM about new deployments to prevent false alerting (see the sketch after this list)

Replace traditional application logging

  • Eliminate log files, which saves I/O and storage
  • Capture log messages in the context of the transaction and of the user that triggered them

One solution for everything

  • Not just performance monitoring but also business reporting as well as deep dive diagnostics

Active community forum

  • Get answers right away
  • Leverage extensions already provided by the community such as plugins for Jenkins, PagerDuty, ...
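
To make the "Full Automation" requirement more tangible, here is a minimal sketch of the deployment-notification bullet from that list: a deployment script tells the monitoring system about a new release so that the restart and warm-up phase does not trigger false alerts. The endpoint URL, payload shape and credentials are made-up placeholders for illustration, not the actual dynaTrace API:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Sketch: tell the APM system about a new deployment so that the
    // restart/warm-up phase does not trigger false alerts.
    // The URL and payload shape are hypothetical placeholders.
    public class NotifyDeployment {
        public static void main(String[] args) throws Exception {
            String version = args.length > 0 ? args[0] : "1.0.0";
            String payload = "{\"application\": \"shop-frontend\", "
                    + "\"version\": \"" + version + "\", "
                    + "\"event\": \"deployment\"}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://apm.example.com/api/events")) // placeholder
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("APM responded: " + response.statusCode());
        }
    }

Wiring such a call into the last step of the deployment pipeline is what turns "inform APM about new deployments" from a manual chore into part of the delivery automation.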

Let me give you some examples of Steven's use cases so that you can better decide whether they are relevant for you as well:

Every Transaction with All Details
dynaTrace was built from the ground up to support the full software lifecycle. We at Compuware APM/dynaTrace understood that we needed a technology that captures every transaction with all details, for root-cause diagnostics as well as proper business monitoring, without falling into a sampling mode where you lose critical information for both. Most of our customers report little to acceptable overhead in production while capturing 100% of transactions, including method arguments, SQL statements, log messages or exceptions. The magic word in our case is our PurePath (see the YouTube video) & PureStack Technology, which allows dynaTrace to do exactly that. One of the several visualizations of the PurePath is the Transaction Flow, which is a great way to understand how your transactions flow through the system - where your hotspots are (third-party impact, custom code issues or impact of garbage collection) and where your architectural issues are (e.g., too many web service calls, too many SQL executions):

Transaction Flow: One View that tells it all to Devs, Architects and Operations Teams
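
As a concrete example of the "too many SQL executions" architectural issue that such a view exposes, here is a hedged sketch of the classic N+1 query anti-pattern (the connection URL, credentials, table and column names are illustrative, not from the article): one query for the orders plus one query per order turns a single logical request into hundreds of database round trips.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Illustrates the N+1 query anti-pattern that transaction tracing exposes.
    // The connection URL, credentials and schema are illustrative placeholders.
    public class NPlusOneQueries {
        static final String URL = "jdbc:postgresql://localhost/shop"; // placeholder

        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(URL, "app", "secret")) {
                // 1 query for the orders ...
                try (Statement st = con.createStatement();
                     ResultSet orders = st.executeQuery("SELECT id FROM orders")) {
                    while (orders.next()) {
                        // ... plus N queries, one per order: the N+1 pattern
                        try (PreparedStatement ps = con.prepareStatement(
                                "SELECT * FROM order_lines WHERE order_id = ?")) {
                            ps.setInt(1, orders.getInt("id"));
                            try (ResultSet lines = ps.executeQuery()) { /* read lines ... */ }
                        }
                    }
                }
                // The fix is a single joined query instead of N+1 round trips:
                // SELECT o.id, l.* FROM orders o JOIN order_lines l ON l.order_id = o.id
            }
        }
    }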

What if you don't capture all transactions but are "smart" and focus on capturing only the problematic ones? This approach lets you find and fix the easy problems - those that show up in transactions that fail or violate an average response-time-based baseline - but it falls short when a problem is caused by transactions that are not "outside the norm". One example is a database deadlock we recently analyzed for a customer. The "smart" approach only highlighted the transaction that hit the deadlock; no information was captured about the transactions that actually caused the deadlock with their data manipulations. Being able to see which transactions executed which UPDATE statements in the time leading up to the deadlock is required to solve this problem, as the sketch below illustrates.
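
To make the deadlock scenario concrete, here is a minimal sketch, assuming a relational database and two placeholder account rows (the JDBC URL, credentials and table are illustrative): two transactions update the same two rows in opposite order, and the database eventually kills one of them. A tool that only captures the failing transaction shows you the victim; you also need the UPDATE statements of the other transaction to understand the lock ordering.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Classic lock-ordering deadlock: two transactions update the same two
    // rows in opposite order; the database eventually kills one of them.
    // The JDBC URL, credentials and table are illustrative placeholders.
    public class DeadlockDemo {
        static final String URL = "jdbc:postgresql://localhost/shop"; // placeholder

        static void transfer(int first, int second) throws Exception {
            try (Connection con = DriverManager.getConnection(URL, "app", "secret")) {
                con.setAutoCommit(false); // one explicit transaction
                try (Statement st = con.createStatement()) {
                    st.executeUpdate("UPDATE accounts SET balance = balance - 10 WHERE id = " + first);
                    Thread.sleep(100);    // widen the race window
                    st.executeUpdate("UPDATE accounts SET balance = balance + 10 WHERE id = " + second);
                }
                con.commit();
            }
        }

        public static void main(String[] args) throws Exception {
            Thread t1 = new Thread(() -> { try { transfer(1, 2); } catch (Exception e) { e.printStackTrace(); } });
            Thread t2 = new Thread(() -> { try { transfer(2, 1); } catch (Exception e) { e.printStackTrace(); } });
            t1.start(); t2.start();
            t1.join(); t2.join();
        }
    }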

As companies such as Steven's reach a maturity level where they grow out of "smart", average-response-time-based analysis, it is important to be able to look at everything and not just the average problem. As a follow-up, read the blog post Why Averages Suck and Percentiles are Great!
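
To see why averages hide exactly the transactions you care about, consider this small, self-contained example (the latency numbers are made up for illustration): ten slow outliers push the mean only moderately, while the 95th percentile makes the tail impossible to miss.

    import java.util.Arrays;

    // Shows how an average hides tail latency that a percentile exposes.
    public class AverageVsPercentile {
        public static void main(String[] args) {
            // 90 fast requests and 10 very slow outliers (milliseconds, made up)
            long[] latencies = new long[100];
            Arrays.fill(latencies, 0, 90, 100);
            Arrays.fill(latencies, 90, 100, 5000);

            double mean = Arrays.stream(latencies).average().orElse(0);

            long[] sorted = latencies.clone();
            Arrays.sort(sorted);
            // Nearest-rank percentile: the value below which 95% of samples fall
            long p95 = sorted[(int) Math.ceil(0.95 * sorted.length) - 1];

            // Prints: mean = 590 ms, p95 = 5000 ms - the mean looks tolerable,
            // yet one in ten users waits five seconds
            System.out.printf("mean = %.0f ms, p95 = %d ms%n", mean, p95);
        }
    }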

Capture Custom Business Context
What is custom business context? The actual business function executed, such as "Create Claim" or "Transfer Money", or the name of the user or tenant of your system. Why is this not as easy as it sounds? Because many applications just don't show the business function as part of the URL or provide the user name in a cookie. A great example was given in a webinar by NJM Insurance (New Jersey Manufacturers Insurance). They were using third-party claim management software that was designed to "hide" everything behind a claimCenter.do URL. In their case they needed dynaTrace to analyze every single transaction and pick up a method argument in the business layer of their app to figure out which function in their system was actually executed. On top of that, they also needed to know the user who executed that function, because they had to report which insurance office and group of employees created how many claims in their quarterly business reports. The following shows business reporting based on the user role, where the user role is captured from a method argument within the business logic of the application:

Business Reporting requires Business Context data for every Transaction

This was only possible because dynaTrace allows you to selectively capture business context within every single executed transaction. Along the PurePath you will then see things like method arguments, return values, bind values, session variables, HTTP parameters or cookie values, all of which can later be used for business reporting or targeted root-cause diagnostics. Here is a follow-up blog post that explains business transactions in more technical detail.
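
As a hypothetical illustration (the class, method and argument names are mine, not NJM's actual code), this is the kind of business-layer entry point an APM sensor would be configured to capture so that every recorded transaction carries the business function and user role even though the URL only ever shows claimCenter.do:

    // Hypothetical business-layer entry point behind a generic claimCenter.do
    // URL. An APM sensor configured on this method can capture the 'action'
    // argument and the user's role, attaching both to the recorded transaction
    // so business reports can group by function and office instead of by URL.
    public class ClaimCenterService {

        public void execute(String action, String userRole) {
            // 'action' is e.g. "CreateClaim" or "TransferMoney" - invisible in the URL
            if ("CreateClaim".equals(action)) {
                createClaim(userRole);
            } else if ("TransferMoney".equals(action)) {
                transferMoney(userRole);
            } else {
                throw new IllegalArgumentException("Unknown action: " + action);
            }
        }

        private void createClaim(String userRole) { /* create the claim ... */ }
        private void transferMoney(String userRole) { /* transfer the money ... */ }
    }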

For more APM buyer's tips and further insight, read the full article.

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within the web performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.
