
Dynatrace on Dynatrace: Detecting Regressions in Continuous Performance Environments

In my previous blog I addressed how we use Dynatrace on Dynatrace in our Continuous Functional Testing Environment. During that same visit to our engineering lab in Linz, Austria, I also spoke with Thomas Steinmaurer, Performance Architect for Dynatrace, who oversees our Continuous Performance Environment. Dynatrace builds are deployed there daily, and different load patterns run constantly, simulating the traffic of thousands of agents. For this purpose we wrote our own performance testing tool, because simulating that type of load comes with some special requirements.

Like Stefan, who told me his story about how Dynatrace helps him get faster feedback on continuous functional tests, Thomas told me how Dynatrace helps him detect performance regressions introduced through code changes from build to build.

The following is a screenshot of a JIRA ticket he used to track a recent performance regression:

We use JIRA tickets to track detected performance regressions on key quality metrics. This one shows the issue Thomas explained to me.

Comparing Performance Signatures across Builds

Via our automated provisioning layer “Cloud Control”, Thomas deploys a new build into his Continuous Performance Environment every day. He runs different load patterns throughout the day, e.g., one that simulates a high volume of PurePath-related data vs. one that simulates a very high volume of infrastructure metrics. The two have different performance characteristics on the Dynatrace Server and the database back end due to the way the data must be processed.

I asked Thomas: “So, how do you know a build has a performance regression?”. His answer was perfectly “phrased” when he showed me the following graph he pulled from Dynatrace:

Key Performance Metrics for a Service, Process or Host make up a “Performance Signature” which we can compare across builds.

The dashboard above shows basic performance metrics for a “custom service” – the Correlation Engine Periodic Worker. You can see the marked timeframes on Nov 16 and Nov 17. These are the results of the load test that had less throughput but required more processing, thus resulting in a higher response time. However, the change introduced in the Nov 16 build, deployed on the evening of Nov 16 and running overnight, clearly shows a totally different “Performance Signature” for that build.

What is a “Performance Signature”? I must admit it is a term that our partners from T-Systems coined when they created the Dynatrace AppMon Performance Signature Plugin for Jenkins. I hope they don’t mind my “borrowing” this term, but I think it PERFECTLY captures what we have to do in a Continuous Performance Environment. We need to “quantify” the performance of the service or application under test so we can compare it from build to build – these might be different metrics, or a combination of metrics, depending on what you are testing. From my perspective, the basic metrics should always be Throughput, Response Time, and Failure Rate. Additionally, we can add resource consumption metrics such as CPU, Memory, Network, and Disk. The combination of these provides a “Performance Signature” that shows you how efficient your service/application is when processing a certain amount of work load!
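To make the idea concrete, here is a minimal sketch (not the actual Dynatrace implementation) of how a performance signature could be modeled and compared across two builds. The metric names and the tolerance value are assumptions for illustration, and for simplicity it only handles “lower is better” metrics such as response time, failure rate, and CPU.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a "Performance Signature": a set of key metrics
// captured for one build, compared against a baseline build. All metrics
// here are treated as "lower is better" (response time, failure rate, CPU);
// a throughput metric would need to be inverted before this comparison.
class PerformanceSignature {
    private final Map<String, Double> metrics;

    PerformanceSignature(Map<String, Double> metrics) {
        this.metrics = new HashMap<>(metrics);
    }

    /** True if any metric degraded by more than the relative tolerance
     *  (e.g. 0.10 = 10%) compared to the baseline build. */
    boolean regressesAgainst(PerformanceSignature baseline, double tolerance) {
        for (Map.Entry<String, Double> e : metrics.entrySet()) {
            Double base = baseline.metrics.get(e.getKey());
            if (base == null || base == 0.0) continue; // no comparable baseline value
            double relativeChange = (e.getValue() - base) / base;
            if (relativeChange > tolerance) return true;
        }
        return false;
    }
}
```

For example, a build whose response time jumps from 120 ms to 290 ms (numbers made up) would be flagged by `nov16.regressesAgainst(nov15, 0.10)` returning `true` – exactly the kind of build-to-build check a pipeline can automate.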

Quick Validation through Code Revert

When Thomas saw that performance wasn’t good in that build, he and the component owner had to investigate code changes between Nov 15 and 16. They did a quick sanity check by simply reverting the changes and running another test to validate that a recent architectural refactoring in that component was responsible for it. Thanks to the deployment pipeline and Cloud Control that was an easy thing to do. This is where automation pays off. And, as you can see from the following graph, performance was back to normal, proving that the code change that came in through one revision caused that issue:

The Performance Signature of the reverted code change was back to normal behavior.

Root Cause Detection

Just reverting is obviously not the solution to the problem. As Dynatrace captured detailed code-level performance data, Thomas simply compared code execution before and after the code change, showing him – as an architect – where the introduced performance regression was:

Comparing the Performance Hotspots on Code Level makes it easy to spot the difference between code changes and highlights the regression

Now if you know the architecture and the code of your services and applications well enough, you immediately understand the root cause when this type of data is presented side by side. The problematic version was simply bypassing a cache layer and always requesting data from the back-end data store. That bypass caused the performance regression.

The importance of JMX Metrics

Beyond the data Dynatrace captures by instrumenting your application (in this situation, actually instrumenting another instance of Dynatrace), it is very often important to look at additional metrics that the application itself exposes. In the case of Dynatrace, the architects expose a lot of key performance and throughput metrics as custom JMX metrics. Thomas and his colleagues have all assured me that they wouldn’t want to live without these custom metrics, which is why they keep an eye on them. Both Dynatrace AppMon and Dynatrace have native support for capturing and charting these, as shown in the following screenshot of a custom chart that Thomas reviews while his tests are executing:

Key Performance and Throughput Metrics are exposed via JMX and charted with Dynatrace while system is under continuous load
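For readers who haven’t exposed custom JMX metrics themselves, here is a minimal, self-contained sketch of how an application can publish its own throughput counter as a standard MBean. The MBean name and attribute are invented for this example; they are not the actual metrics the Dynatrace server exposes.

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Invented example: expose a throughput counter as a standard JMX MBean.
// By JMX convention, the interface name is the class name plus "MBean".
interface CorrelationWorkerStatsMBean {
    long getProcessedRecords();
}

class CorrelationWorkerStats implements CorrelationWorkerStatsMBean {
    private final AtomicLong processed = new AtomicLong();

    void recordProcessed(long n) { processed.addAndGet(n); }

    @Override
    public long getProcessedRecords() { return processed.get(); }
}

class JmxMetricExample {
    public static void main(String[] args) throws Exception {
        CorrelationWorkerStats stats = new CorrelationWorkerStats();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("example.metrics:type=CorrelationWorkerStats");
        server.registerMBean(stats, name); // now visible to jconsole or a monitoring agent

        stats.recordProcessed(42);
        System.out.println(server.getAttribute(name, "ProcessedRecords")); // prints 42
    }
}
```

Once registered, any JMX-capable monitoring tool can poll the attribute and chart it over time, which is all the charting shown above needs.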

Keeping track of Deployments and Test Executions

The last thing Thomas showed me is how he keeps track of the status of his continuous performance environment. He uses custom events, which are supported by Dynatrace AppMon. Every time he starts or stops a load test, when a new version gets deployed, or when configuration/tuning settings are changed, he makes a REST call to let Dynatrace know about that event. That makes analysis easier because you know the actual environment configuration and load when analyzing the tests:

Custom Events can be sent to Dynatrace via a REST API. This makes it easier to analyze test results, as you implicitly know the configuration of the environment.
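As a rough illustration of this pattern, the sketch below builds a small deployment-event payload and posts it over HTTP. The endpoint path, JSON field names, and token header are placeholders, not the exact Dynatrace event API schema; consult the REST API documentation of your Dynatrace version for the real endpoint and payload.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative only: URL path, JSON field names, and the token header are
// placeholders for whatever event API your monitoring server provides.
class DeploymentEventSender {

    /** Builds a minimal JSON payload describing a deployment or test event. */
    static String buildPayload(String eventType, String version, String description) {
        return String.format(
            "{\"eventType\":\"%s\",\"version\":\"%s\",\"description\":\"%s\"}",
            eventType, version, description);
    }

    /** Posts the event to the monitoring server (placeholder endpoint). */
    static HttpResponse<String> send(String serverUrl, String apiToken, String payload)
            throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(serverUrl + "/api/events")) // placeholder path
                .header("Authorization", "Api-Token " + apiToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        return HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    }

    public static void main(String[] args) {
        String payload = buildPayload("DEPLOYMENT", "build-2016-11-16",
                "Nightly build deployed via Cloud Control");
        System.out.println(payload);
        // send("https://my-monitoring-server", "MY_TOKEN", payload); // fire before/after a test run
    }
}
```

Firing one such call at every deploy, test start, and test stop is what lets the markers in the charts above line up with what actually happened in the environment.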

The power of Continuous Testing and Monitoring

Continuous Performance Testing makes it possible to detect performance regressions much faster than in traditional load testing environments, where you run large-scale load tests at the end of a sprint or release. In combination with monitoring, you can easily compare the “Performance Signature” across builds and provide feedback to your engineers minutes or hours after they check in code.

Thanks again to Thomas for sharing this story. Keep using our own products. Keep innovating by automating these feedback loops we so desperately need!

If you want to test Dynatrace on your own, get your SaaS trial by signing up here.

The post Dynatrace on Dynatrace: Detecting Regressions in Continuous Performance Environments appeared first on Dynatrace blog – monitoring redefined.
