
Software Quality Metrics for Your Continuous Delivery Pipeline | Part I

How often do you deploy new software? Once a month, once a week or every hour? The more often you deploy, the smaller your changes will be. That's good! Why? Because smaller changes tend to be less risky, since it's easier to keep track of what has really changed. For developers, it's certainly easier to fix something you worked on three days ago than something you wrote last summer. An analogy from a recent conference talk by AutoScout24 is to think of your release as a container ship, with every one of your changes being a container on that ship:

[Image: Your next software release en route to meet its iceberg]

If all you know is that you have a problem in one of your containers, you'd have to unpack and check all of them. That doesn't make sense for a ship, and it doesn't make sense for a release either. But that's still what happens quite frequently when a deployment fails and all you get is "it didn't work." In contrast, if you were shipping just a couple of containers, you could replace your giant, slow-maneuvering vessel with something faster and more agile, and if you're looking for a problem, you'd only have to inspect a handful of containers. While adopting this practice in the shipping industry would be rather costly, it is exactly what continuous delivery allows us to do: deploy more often, get faster feedback, and fix problems faster.

A great example is Amazon, which shared its success metrics at the Velocity conference:

[Image: Some impressive stats from Amazon showing the success of rapid continuous delivery]

However, even small changes can have severe impacts. Some examples:

  1. Heavy DOM Manipulations through JavaScript: introduced by a "harmless" new JavaScript library for tracking link clicks
  2. Memory Leaks in Production: introduced by a poorly tested remote logging framework downloaded from GitHub (a minimal sketch of this kind of leak follows the list)
  3. Performance Impact of Exceptions in Ops: Ops and Dev did not follow the same deployment steps (due to a lack of automation scripts), resulting in thousands of exceptions and maxed-out CPUs on all app servers
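
To make the second example more concrete, here is a minimal, purely hypothetical Java sketch of the kind of leak such a logging framework can introduce (it is not the actual library from that incident). It buffers every message for a batch upload and only clears the buffer when the upload succeeds, so when the upload keeps failing in production, the buffer, and with it the heap, grows with every transaction:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical "remote logging" helper of the kind described in example 2:
    // it buffers every message for a later batch upload, but the buffer is only
    // cleared when the upload succeeds.
    public class RemoteLogger {

        // Static buffer lives as long as the JVM - a classic leak source
        private static final List<String> PENDING = new ArrayList<>();

        public static void log(String message) {
            PENDING.add(message);              // grows on every call
            if (PENDING.size() >= 1000) {
                if (tryUpload(PENDING)) {
                    PENDING.clear();           // only cleared on success...
                }
                // ...on failure the entries stay and keep accumulating
            }
        }

        private static boolean tryUpload(List<String> batch) {
            // Placeholder for the network call; assume it fails in production
            // because the logging endpoint is not reachable from the app servers.
            return false;
        }
    }

Measured per transaction, this shows up as steadily growing memory usage or object churn long before the app servers actually run out of memory.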

Extending Your Delivery Pipeline
Even small changes need to be tracked, and their impact on overall software quality must be measured along the delivery pipeline so that your quality gates can stop even the smallest change from causing a huge issue. The three examples above could have been avoided by automatically looking at the following measures across the delivery pipeline and stopping the delivery when "architectural" regressions are detected (a simple gate of this kind is sketched after the list):

  • The number of DOM manipulations
  • Memory usage or object churn rate per transaction
  • Number of exceptions, number of database queries, or number of log entries
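
As a rough illustration of such a gate, here is a minimal Java sketch, not taken from the article, that compares the metrics collected for the current build against the last known-good build and stops the pipeline as soon as any of them regresses beyond a tolerance. The metric names and the 20% threshold are placeholders; in a real pipeline the numbers would come from your testing and monitoring tools.

    import java.util.Map;

    // Minimal sketch of an automated "architectural" quality gate: compare the
    // metrics measured for the current build against the last known-good build
    // and fail the pipeline if any metric regresses beyond a tolerance.
    public class QualityGate {

        private static final double MAX_REGRESSION = 0.20; // allow 20% growth

        public static void check(Map<String, Double> baseline,
                                 Map<String, Double> current) {
            for (Map.Entry<String, Double> entry : baseline.entrySet()) {
                String metric = entry.getKey();
                double before = entry.getValue();
                double now = current.getOrDefault(metric, before);
                if (now > before * (1 + MAX_REGRESSION)) {
                    throw new IllegalStateException("Quality gate failed: "
                            + metric + " went from " + before + " to " + now);
                }
            }
        }

        public static void main(String[] args) {
            Map<String, Double> baseline = Map.of(
                    "domManipulationsPerPage", 150.0,
                    "sqlQueriesPerTransaction", 12.0,
                    "memoryPerTransactionKb", 512.0);
            Map<String, Double> current = Map.of(
                    "domManipulationsPerPage", 450.0,  // regression from a "harmless" change
                    "sqlQueriesPerTransaction", 12.0,
                    "memoryPerTransactionKb", 512.0);
            check(baseline, current);                  // throws -> delivery stops
        }
    }

In practice such a check would run as one more build step right after the automated tests, with the baseline taken from the previous successful release.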

In a series of blog posts I will introduce you to metrics that you should measure along your pipeline so they act as an additional quality gate and prevent the problems listed above. It is important that:

  • Developers get these measurements in the commit stage (see the test sketch after this list)
  • Automation Engineers measure them in the automated unit and integration tests
  • Performance Engineers add them to the load testing reports produced in staging
  • Operations verify how the real application behaves after a new deployment in production
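
As a sketch of what the commit-stage measurement from the first bullet could look like, here is an illustrative JUnit 5 test that asserts an architectural budget, in this case the number of database queries a single transaction may issue, in addition to functional behavior. CountingRepository and the budget of ten queries are hypothetical; a real project would count queries through its JDBC or ORM instrumentation instead.

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Illustrative commit-stage test: besides functional correctness it checks an
    // "architectural" budget - the number of database queries one transaction issues.
    class SearchQueryBudgetTest {

        // Hypothetical repository that records every query it executes.
        static class CountingRepository {
            final List<String> executedQueries = new ArrayList<>();

            List<String> findProductIds(String term) {
                executedQueries.add("SELECT id FROM products WHERE name LIKE ?");
                return List.of("p1", "p2", "p3");
            }

            String loadProduct(String id) {
                // One extra query per product: the classic N+1 access pattern
                executedQueries.add("SELECT * FROM products WHERE id = ?");
                return "product-" + id;
            }
        }

        @Test
        void searchStaysWithinQueryBudget() {
            CountingRepository repo = new CountingRepository();

            // Simulated transaction: search, then load each hit individually.
            for (String id : repo.findProductIds("docker")) {
                repo.loadProduct(id);
            }

            // Budget chosen for illustration only; a change that introduces a new
            // N+1 pattern would push the count over the limit and fail the build.
            assertTrue(repo.executedQueries.size() <= 10,
                    "transaction issued " + repo.executedQueries.size() + " queries");
        }
    }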

For each metric I introduce, I'll explain why it is important to monitor it, which types of problems can be detected, and how Developers, Testers and Operations can monitor it. To read more on this, click here for the full article.

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
