By Andreas Grabner
October 5, 2014 05:30 PM EDT
Software Quality Metrics for Your Continuous Delivery Pipeline | Part I
How often do you deploy new software? Once a month, once a week, or every hour? The more often you deploy, the smaller your changes will be. That's good! Why? Because smaller changes tend to be less risky, since it's easier to keep track of what has really changed. For developers, it's certainly easier to fix something you worked on three days ago than something you wrote last summer. An analogy from a recent conference talk by AutoScout24 is to think of your release as a container ship, with every one of your changes being a container on that ship:
Your next software release en route to meet its iceberg
If all you know is that you have a problem in one of your containers, you'd have to unpack and check all of them. That doesn't make sense for a ship, and neither does it for a release. But that's still what happens quite frequently when a deployment fails and all you get is "it didn't work." In contrast, if you were shipping just a couple of containers, you could replace your giant, slow-maneuvering vessel with something faster and more agile - and if you're looking for a problem, you'd only have to inspect a handful of containers. While adopting this practice in the shipping industry would be rather costly, it is exactly what continuous delivery allows us to do: deploy more often, get faster feedback, and fix problems faster.
A great example is Amazon, who shared their success metrics at Velocity:
Some impressive stats from Amazon showing the success of rapid continuous delivery
However - even small changes can have severe impacts. Examples?
- Memory leaks in production: introduced by a poorly tested remote logging framework downloaded from GitHub
- Performance impact of exceptions in operations: Ops and Dev did not follow the same deployment steps (due to a lack of automation scripts), resulting in thousands of exceptions that maxed out CPU on all app servers
Extending Your Delivery Pipeline
Even small changes need to be tracked, and their impact on overall software quality must be measured along the delivery pipeline so that your quality gates can stop even the smallest change from causing a huge issue. The examples above could have been avoided by automatically monitoring the following measures across the delivery pipeline and stopping the delivery when "architectural" regressions are detected:
- Number of DOM manipulations
- Memory usage or object churn rate per transaction
- Number of exceptions, database queries, or log entries
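As a rough illustration of such a quality gate, the sketch below compares per-transaction metrics from the current build against a known-good baseline and reports a violation when a metric regresses beyond a tolerance. The metric names, baseline values, and 20% tolerance are illustrative assumptions, not taken from any specific monitoring tool:

```python
# Hypothetical quality-gate sketch: fail the pipeline when a build's
# per-transaction metrics regress beyond a tolerance over the baseline.
# All names and numbers here are illustrative.

BASELINE = {
    "dom_manipulations": 120,
    "memory_per_txn_kb": 2048,
    "exceptions": 0,
    "db_queries": 12,
    "log_entries": 40,
}

TOLERANCE = 0.20  # allow up to 20% growth over the baseline


def check_quality_gate(current: dict) -> list:
    """Return a list of violation messages; an empty list means the gate passes."""
    violations = []
    for metric, baseline in BASELINE.items():
        value = current.get(metric, 0)
        # Metrics with a zero baseline (e.g. exceptions) allow no growth at all.
        limit = baseline * (1 + TOLERANCE)
        if value > limit:
            violations.append(
                f"{metric}: {value} exceeds limit {limit:.0f} (baseline {baseline})"
            )
    return violations


build_metrics = {
    "dom_manipulations": 118,
    "memory_per_txn_kb": 4096,  # regression: memory per transaction doubled
    "exceptions": 3,            # regression: new exceptions introduced
    "db_queries": 13,
    "log_entries": 41,
}

for violation in check_quality_gate(build_metrics):
    print("GATE VIOLATION:", violation)
```

A check like this can run at every pipeline stage - commit, integration test, load test - with a baseline appropriate to that stage, so a doubling of memory usage or a burst of new exceptions stops the release before it reaches production.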
In this series of blog posts I will introduce metrics that you should measure along your pipeline to act as an additional quality gate and prevent the problems listed above. It is important that:
- Developers get these measurements in the commit stage
- Automation engineers measure them in automated unit and integration tests
- Performance engineers add them to the load-testing reports produced in staging
- Operations verifies how the real application behaves after a new deployment in production
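For the commit stage in particular, these measurements can take the shape of an ordinary unit test that enforces a "budget" on a metric, so a regression fails the build immediately. The sketch below shows one way this might look; `QueryCounter` and `fetch_dashboard` are hypothetical stand-ins for your own instrumentation and application code:

```python
# Hypothetical commit-stage check: a unit test asserting a query budget so
# that an N+1 query regression fails the build before the code leaves the
# commit stage. QueryCounter and fetch_dashboard are illustrative stand-ins.

class QueryCounter:
    """Counts database calls issued while it is in scope."""

    def __init__(self):
        self.count = 0

    def record(self):
        self.count += 1


def fetch_dashboard(db):
    # Stand-in for real application code: issues three queries.
    for _ in range(3):
        db.record()


def test_dashboard_query_budget():
    db = QueryCounter()
    fetch_dashboard(db)
    # Fail the build if the number of queries per transaction regresses
    # beyond the agreed budget.
    assert db.count <= 5, f"query budget exceeded: {db.count} queries"


test_dashboard_query_budget()
print("query budget check passed")
```

The same pattern extends to the other metrics above: count exceptions, log entries, or allocated objects during a test and assert an upper bound, so the budget travels with the code rather than living only in a dashboard.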
For each metric I introduce, I'll explain why it is important to monitor it, which types of problems it can detect, and how Developers, Testers, and Operations can monitor it. To read more, click here for the full article.