
Dynatrace on Dynatrace: Detecting Regressions in Continuous Performance Environments

In my previous blog I explained how we use Dynatrace on Dynatrace in our Continuous Functional Testing Environment. During that same visit to our engineering lab in Linz, Austria, I also spoke with Thomas Steinmaurer, Performance Architect for Dynatrace, who oversees our Continuous Performance Environment. Dynatrace builds are deployed there daily, and different load patterns run constantly, simulating the traffic of thousands of agents. We wrote our own performance testing tool for this purpose because we have special requirements for simulating that type of load.

Like Stefan, who told me his story about how Dynatrace gives him faster feedback on continuous functional tests, Thomas told me how Dynatrace helps him detect performance regressions introduced by code changes from build to build.

The following is a screenshot of a JIRA ticket he used to track a recent performance regression:

We use JIRA tickets to track detected performance regressions on key quality metrics. This one tracks the issue Thomas explained to me.

Comparing Performance Signatures across Builds

Via our automated provisioning layer “Cloud Control”, Thomas deploys a new build into his Continuous Performance Environment every day. He runs different load patterns throughout the day, e.g., one that simulates a high volume of PurePath-related data versus one that simulates a very high volume of infrastructure metrics. The two have different performance characteristics on the Dynatrace Server and the database back end due to the way the data must be processed.

I asked Thomas: “So, how do you know a build has a performance regression?” He answered by showing me the following graph he pulled from Dynatrace:

Key Performance Metrics for a Service, Process or Host make up a “Performance Signature” which we can compare across builds.

The dashboard above shows basic performance metrics for a “custom service” – the Correlation Engine Periodic Worker. Note the marked timeframes on Nov 16 and Nov 17: these are the results of the load test that had less throughput but required more processing, which naturally results in a higher response time. However, the change introduced in the Nov 16 build – deployed on the evening of Nov 16 and running overnight – clearly shows a totally different “Performance Signature” for that build.

What is a “Performance Signature”? I must admit it is a term our partners at T-Systems coined when they created the Dynatrace AppMon Performance Signature Plugin for Jenkins. I hope they don’t mind my “borrowing” it, because I think it perfectly describes what we have to do in a Continuous Performance Environment: quantify the performance of the service or application under test so that we can compare it from build to build. The exact metrics, or combination of metrics, may differ depending on what you are testing. From my perspective, the basic metrics should always be Throughput, Response Time, and Failure Rate, complemented by resource consumption metrics such as CPU, Memory, Network, and Disk. Together these form a “Performance Signature” that shows how efficient your service or application is when processing a certain quantity of workload!
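To make the idea concrete, here is a minimal sketch of what comparing such a “Performance Signature” across builds could look like in code. The class, metric names, and the 15% threshold are all hypothetical illustrations, not part of any Dynatrace API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a "Performance Signature": a set of key metrics for one
// build, compared against a baseline build with a relative tolerance.
// All names and the threshold are illustrative, not Dynatrace code.
public class PerformanceSignature {
    private final Map<String, Double> metrics = new LinkedHashMap<>();

    public PerformanceSignature put(String metric, double value) {
        metrics.put(metric, value);
        return this;
    }

    // Returns true if any metric grew by more than `tolerance`
    // (e.g. 0.15 = 15%) relative to the baseline. This simple version
    // only handles metrics where "higher is worse" (response time, CPU...).
    public boolean regressesAgainst(PerformanceSignature baseline, double tolerance) {
        for (Map.Entry<String, Double> e : metrics.entrySet()) {
            Double base = baseline.metrics.get(e.getKey());
            if (base == null || base == 0.0) continue;
            double relativeChange = (e.getValue() - base) / base;
            if (relativeChange > tolerance) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        PerformanceSignature nov15 = new PerformanceSignature()
                .put("responseTimeMs", 120).put("failureRatePct", 0.1).put("cpuPct", 35);
        PerformanceSignature nov16 = new PerformanceSignature()
                .put("responseTimeMs", 310).put("failureRatePct", 0.1).put("cpuPct", 52);
        System.out.println("regression=" + nov16.regressesAgainst(nov15, 0.15));
        // prints "regression=true" - response time more than doubled
    }
}
```

A real pipeline would feed these values from the monitoring API and treat throughput (where lower is worse) with an inverted check, but the principle – one comparable signature per build – is the same.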

Quick Validation through Code Revert

When Thomas saw that performance wasn’t good in that build, he and the component owner had to investigate code changes between Nov 15 and 16. They did a quick sanity check by simply reverting the changes and running another test to validate that a recent architectural refactoring in that component was responsible for it. Thanks to the deployment pipeline and Cloud Control that was an easy thing to do. This is where automation pays off. And, as you can see from the following graph, performance was back to normal, proving that the code change that came in through one revision caused that issue:

The Performance Signature of the reverted code change was back to normal behavior.

Root Cause Detection

Just reverting is obviously not the solution to the problem. Because Dynatrace captures detailed code-level performance data, Thomas simply compared code execution before and after the code change, which showed him – as an architect – where the performance regression had been introduced:

Comparing the Performance Hotspots on Code Level makes it easy to spot the difference between code changes and highlights the regression

Now, if you know the architecture and the code of your services and applications well enough, you immediately understand the root cause when this type of data is presented side by side. The problematic version was simply bypassing a cache layer and always requesting data from the back-end data store. That change caused the performance regression.
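The pattern Thomas found can be illustrated with a toy example (all names are hypothetical; this is not Dynatrace code): a lookup path that consults a cache first versus one that goes straight to the backing store on every call. Under identical load, the two produce very different signatures:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Toy illustration of the regression pattern: the original path checks a
// cache first; the regressed path bypasses it and hits the backing store
// every time. Names are hypothetical, not Dynatrace code.
public class CacheBypassDemo {
    static final AtomicInteger backendCalls = new AtomicInteger();
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    static String fetchFromBackend(String key) {
        backendCalls.incrementAndGet(); // stands in for an expensive DB round trip
        return "value-for-" + key;
    }

    // Original path: a cache hit avoids the backend round trip entirely.
    static String lookupCached(String key) {
        return cache.computeIfAbsent(key, CacheBypassDemo::fetchFromBackend);
    }

    // Regressed path: always goes to the backend, ignoring the cache.
    static String lookupBypassingCache(String key) {
        return fetchFromBackend(key);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) lookupCached("correlation-rule");
        int cachedCalls = backendCalls.getAndSet(0);
        for (int i = 0; i < 1000; i++) lookupBypassingCache("correlation-rule");
        // Same workload, radically different cost:
        System.out.println("backend calls: cached=" + cachedCalls
                + " bypassed=" + backendCalls.get());
        // prints "backend calls: cached=1 bypassed=1000"
    }
}
```

Side-by-side hotspot comparison makes exactly this kind of difference jump out: the regressed build shows backend call counts that the baseline build simply doesn't have.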

The importance of JMX Metrics

Beyond the data Dynatrace captures by instrumenting your application – in this situation, actually instrumenting another instance of Dynatrace – it is very often important to look at additional metrics exposed by the application itself. In the case of Dynatrace, the architects expose many key performance and throughput metrics as custom JMX metrics. Thomas and his colleagues have all assured me that they wouldn’t want to live without these custom metrics, which is why they keep an eye on them. Both Dynatrace AppMon and Dynatrace have native support for capturing and charting them, as shown in the following screenshot of a custom chart that Thomas reviews while his tests are executing:
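For readers who haven't exposed custom JMX metrics before, here is a minimal sketch using only the standard `javax.management` API. The MBean, attribute, and domain names are illustrative, not the ones Dynatrace uses internally:

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Minimal sketch: exposing a custom throughput counter via JMX so that a
// monitoring tool can capture and chart it. All names are illustrative.
public class JmxMetricsDemo {

    // Standard MBean convention: interface name = implementation name + "MBean".
    public interface ThroughputStatsMBean {
        long getProcessedRecords();
    }

    public static class ThroughputStats implements ThroughputStatsMBean {
        private final AtomicLong processed = new AtomicLong();
        public void recordProcessed(long n) { processed.addAndGet(n); }
        @Override public long getProcessedRecords() { return processed.get(); }
    }

    public static void main(String[] args) throws Exception {
        ThroughputStats stats = new ThroughputStats();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=ThroughputStats");
        server.registerMBean(stats, name);

        stats.recordProcessed(42); // simulate some work being counted

        // Any JMX client (JConsole, or a monitoring agent) can now read this:
        Object value = server.getAttribute(name, "ProcessedRecords");
        System.out.println("ProcessedRecords=" + value);
    }
}
```

Once a metric is registered like this, a monitoring tool can poll it continuously and chart it alongside the instrumentation-based data.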

Key Performance and Throughput Metrics are exposed via JMX and charted with Dynatrace while system is under continuous load

Keeping track of Deployments and Test Executions

The last thing Thomas showed me is how he keeps track of the status of his continuous performance environment. He uses the custom events supported by Dynatrace AppMon: every time he starts or stops a load test, deploys a new version, or changes configuration or tuning settings, he makes a REST call to let Dynatrace know about that event. This makes analysis easier because you know the actual environment configuration and load that applied to each test:
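In spirit, such a notification is just an HTTP POST with a small JSON payload fired from the deployment pipeline. The sketch below shows the general shape; the endpoint path and JSON field names are placeholders, not the actual Dynatrace AppMon API (consult the product documentation for the real event interface):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch of notifying a monitoring server about a deployment or load test
// start via REST. The endpoint and JSON fields are placeholders, not the
// real Dynatrace AppMon API.
public class DeploymentEventNotifier {

    static String buildPayload(String eventType, String build, String description) {
        return String.format(
            "{\"eventType\":\"%s\",\"build\":\"%s\",\"description\":\"%s\"}",
            eventType, build, description);
    }

    static int post(String endpoint, String json) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();
    }

    public static void main(String[] args) throws Exception {
        String payload = buildPayload("DEPLOYMENT", "build-42", "nightly build deployed");
        System.out.println(payload);
        if (args.length > 0) {            // only send when an endpoint is supplied
            System.out.println("HTTP " + post(args[0], payload));
        }
    }
}
```

The pipeline would fire the same kind of call on "load test started", "load test stopped", and "configuration changed", so every data point on the charts can be matched to the state of the environment at that moment.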

Custom events can be sent to Dynatrace via a REST API. This makes it easier to analyze test results because you implicitly know the configuration of the environment.

The power of Continuous Testing and Monitoring

Continuous performance testing makes it possible to detect performance regressions much faster than in traditional load testing environments, where you run large-scale load tests only at the end of a sprint or release. Combined with monitoring, you can easily compare the “Performance Signature” across builds and give your engineers feedback minutes or hours after they check in code.

Thanks again to Thomas for sharing this story. Keep using our own products. Keep innovating by automating these feedback loops we so desperately need!

If you want to test Dynatrace on your own get your SaaS Trial by signing up here.

The post Dynatrace on Dynatrace: Detecting Regressions in Continuous Performance Environments appeared first on Dynatrace blog – monitoring redefined.

Read the original blog entry...

More Stories By Dynatrace Blog

Building a revolutionary approach to software performance monitoring takes an extraordinary team. With decades of combined experience and an impressive history of disruptive innovation, that’s exactly what we ruxit has.

Get to know ruxit, and get to know the future of data analytics.

Latest Stories
DX World EXPO, LLC, a Lighthouse Point, Florida-based startup trade show producer and the creator of "DXWorldEXPO® - Digital Transformation Conference & Expo" has announced its executive management team. The team is headed by Levent Selamoglu, who has been named CEO. "Now is the time for a truly global DX event, to bring together the leading minds from the technology world in a conversation about Digital Transformation," he said in making the announcement.
"Space Monkey by Vivent Smart Home is a product that is a distributed cloud-based edge storage network. Vivent Smart Home, our parent company, is a smart home provider that places a lot of hard drives across homes in North America," explained JT Olds, Director of Engineering, and Brandon Crowfeather, Product Manager, at Vivint Smart Home, in this SYS-CON.tv interview at @ThingsExpo, held Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
SYS-CON Events announced today that Conference Guru has been named “Media Sponsor” of the 22nd International Cloud Expo, which will take place on June 5-7, 2018, at the Javits Center in New York, NY. A valuable conference experience generates new contacts, sales leads, potential strategic partners and potential investors; helps gather competitive intelligence and even provides inspiration for new products and services. Conference Guru works with conference organizers to pass great deals to gre...
DevOps is under attack because developers don’t want to mess with infrastructure. They will happily own their code into production, but want to use platforms instead of raw automation. That’s changing the landscape that we understand as DevOps with both architecture concepts (CloudNative) and process redefinition (SRE). Rob Hirschfeld’s recent work in Kubernetes operations has led to the conclusion that containers and related platforms have changed the way we should be thinking about DevOps and...
The Internet of Things will challenge the status quo of how IT and development organizations operate. Or will it? Certainly the fog layer of IoT requires special insights about data ontology, security and transactional integrity. But the developmental challenges are the same: People, Process and Platform. In his session at @ThingsExpo, Craig Sproule, CEO of Metavine, demonstrated how to move beyond today's coding paradigm and shared the must-have mindsets for removing complexity from the develop...
In his Opening Keynote at 21st Cloud Expo, John Considine, General Manager of IBM Cloud Infrastructure, led attendees through the exciting evolution of the cloud. He looked at this major disruption from the perspective of technology, business models, and what this means for enterprises of all sizes. John Considine is General Manager of Cloud Infrastructure Services at IBM. In that role he is responsible for leading IBM’s public cloud infrastructure including strategy, development, and offering m...
The next XaaS is CICDaaS. Why? Because CICD saves developers a huge amount of time. CD is an especially great option for projects that require multiple and frequent contributions to be integrated. But… securing CICD best practices is an emerging, essential, yet little understood practice for DevOps teams and their Cloud Service Providers. The only way to get CICD to work in a highly secure environment takes collaboration, patience and persistence. Building CICD in the cloud requires rigorous ar...
Companies are harnessing data in ways we once associated with science fiction. Analysts have access to a plethora of visualization and reporting tools, but considering the vast amount of data businesses collect and limitations of CPUs, end users are forced to design their structures and systems with limitations. Until now. As the cloud toolkit to analyze data has evolved, GPUs have stepped in to massively parallel SQL, visualization and machine learning.
"Evatronix provides design services to companies that need to integrate the IoT technology in their products but they don't necessarily have the expertise, knowledge and design team to do so," explained Adam Morawiec, VP of Business Development at Evatronix, in this SYS-CON.tv interview at @ThingsExpo, held Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
To get the most out of their data, successful companies are not focusing on queries and data lakes, they are actively integrating analytics into their operations with a data-first application development approach. Real-time adjustments to improve revenues, reduce costs, or mitigate risk rely on applications that minimize latency on a variety of data sources. In his session at @BigDataExpo, Jack Norris, Senior Vice President, Data and Applications at MapR Technologies, reviewed best practices to ...
Widespread fragmentation is stalling the growth of the IIoT and making it difficult for partners to work together. The number of software platforms, apps, hardware and connectivity standards is creating paralysis among businesses that are afraid of being locked into a solution. EdgeX Foundry is unifying the community around a common IoT edge framework and an ecosystem of interoperable components.
"ZeroStack is a startup in Silicon Valley. We're solving a very interesting problem around bringing public cloud convenience with private cloud control for enterprises and mid-size companies," explained Kamesh Pemmaraju, VP of Product Management at ZeroStack, in this SYS-CON.tv interview at 21st Cloud Expo, held Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
Large industrial manufacturing organizations are adopting the agile principles of cloud software companies. The industrial manufacturing development process has not scaled over time. Now that design CAD teams are geographically distributed, centralizing their work is key. With large multi-gigabyte projects, outdated tools have stifled industrial team agility, time-to-market milestones, and impacted P&L stakeholders.
"Akvelon is a software development company and we also provide consultancy services to folks who are looking to scale or accelerate their engineering roadmaps," explained Jeremiah Mothersell, Marketing Manager at Akvelon, in this SYS-CON.tv interview at 21st Cloud Expo, held Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
Enterprises are adopting Kubernetes to accelerate the development and the delivery of cloud-native applications. However, sharing a Kubernetes cluster between members of the same team can be challenging. And, sharing clusters across multiple teams is even harder. Kubernetes offers several constructs to help implement segmentation and isolation. However, these primitives can be complex to understand and apply. As a result, it’s becoming common for enterprises to end up with several clusters. Thi...