Measuring Technical Debt with Incident Management Data
By Christopher Tozzi

If technical debt were like monetary debt, it would be hard to keep track of it unless you checked in manually. The only way many people find out their checking account is running out of funds is by logging in and checking the balance - or, worse, having a check bounce or a debit card declined.

But measuring technical debt can be more automatic. That's because, unlike your bank account, your IT infrastructure can be monitored on an ongoing basis with specialized tools that notify you when critical health metrics change. In turn, you can use that monitoring data to gain insight into technical debt. In other words, you don't have to do a manual audit to know when something is going awry in your data center, and you don't have to wait for a server to go down before learning about a problem. Incident management tools provide that information for you. By extension, they also offer a way to take stock of your technical debt without tediously measuring things by hand.

Here's how incident management can help you keep track of technical debt and correct it, with no additional investment on your part.

Defining Technical Debt

First, let me explain what I mean by technical debt. Technical debt refers to imperfections in software code or architecture that, over the long term, create inefficiencies or other problems. Even if the imperfection itself is small, it can accrue a lot of "interest" over time as its effects repeat themselves on a continual basis.

For example, a program whose code contains multiple copies of the same functions, rather than taking a modular approach, could take a few milliseconds longer to run than a better-written program. That's not a big deal if you execute it once. But if it's a server-side web application that runs thousands of times a day, the debt adds up quickly in the form of poor performance and wasted CPU time.
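
To put rough numbers on that, here is a back-of-the-envelope sketch in Python; the per-request overhead and traffic figures are hypothetical, chosen only to show how a small cost compounds:

# Hypothetical estimate of how a tiny per-request inefficiency compounds.
# The numbers are illustrative, not measured.
extra_ms_per_request = 5        # assumed overhead from duplicated code paths
requests_per_day = 50_000       # assumed traffic for the web application

wasted_seconds_per_day = extra_ms_per_request * requests_per_day / 1000
wasted_cpu_hours_per_month = wasted_seconds_per_day * 30 / 3600

print(f"~{wasted_seconds_per_day:.0f} CPU-seconds wasted per day")
print(f"~{wasted_cpu_hours_per_month:.1f} CPU-hours wasted per month")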

Technical debt has lots of potential causes. Sometimes, you might knowingly acquire technical debt because you need to implement something quickly, you don't have time to follow best practices, and you decide that the debt is worth the cost (at that time, at least). Other times, even the nit-pickiest of admins is hard-pressed to avoid technical debt without being able to see into the future. For instance, you probably didn't know that the decade-old switch you are still using today, because you can't afford to upgrade, would not work well with modern firewall tools. In that case, technical debt is just par for the course of living in an imperfect world.

Tracking Technical Debt

While technical debt has many sources, the nice thing about using incident management to measure it is that this approach makes it easy to track the problems no matter what caused them. Again, instead of doing a time-consuming manual audit of your systems to search for inefficiencies, you can leverage your incident management data as a proxy for assessing the extent of technical debt and homing in on it.

To understand how, let's take a look at some examples of different types of incident management data that PagerDuty tracks, and what it can reveal about your technical debt.

For starters, take the raw number of alerts that your tools generate. This is a very basic metric, and it can be affected by a number of factors. But assuming that your incident management reporting systems are properly configured and that you make no major change to your infrastructure, there is likely to be a relationship between the size of your technical debt and the number of incidents that your tools report. That's because more debt means poorer performance, which in turn triggers alerts when response times or resource levels hit certain thresholds. So a steady month-over-month decrease in the occurrence of alerts could mean that your technical debt is declining because your code has become more efficient.
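
As a minimal sketch of that kind of trend analysis, the snippet below counts alerts per month from exported incident records. The record format and field names are assumptions for illustration, not a specific PagerDuty export schema:

from collections import Counter
from datetime import datetime

# Hypothetical exported incident records; in practice these would come from
# your incident management tool's reports or API.
incidents = [
    {"created_at": "2017-05-03T14:22:00Z"},
    {"created_at": "2017-06-11T09:05:00Z"},
    {"created_at": "2017-06-27T23:41:00Z"},
]

def month_of(timestamp):
    # Bucket each alert by the calendar month in which it was created.
    return datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%SZ").strftime("%Y-%m")

alerts_per_month = Counter(month_of(i["created_at"]) for i in incidents)

for month in sorted(alerts_per_month):
    print(month, alerts_per_month[month])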

Mean time to resolution (MTTR) is another incident management metric that offers a view into your technical debt. One common cause of poor MTTR is code that is overly complex. For instance, to reuse the example from above, code that was hastily written and contains redundant functions will be hard for an admin to understand quickly. That means a longer resolution time whenever someone has to read and change that code in order to respond to an incident.
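
Here is a minimal sketch of the calculation, assuming each resolved incident record carries created-at and resolved-at timestamps (the field names are illustrative):

from datetime import datetime

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

# Hypothetical resolved incidents with creation and resolution timestamps.
incidents = [
    {"created_at": "2017-06-01T10:00:00Z", "resolved_at": "2017-06-01T10:42:00Z"},
    {"created_at": "2017-06-02T03:15:00Z", "resolved_at": "2017-06-02T05:00:00Z"},
]

# MTTR is the average time from incident creation to resolution.
resolution_seconds = [
    (parse(i["resolved_at"]) - parse(i["created_at"])).total_seconds()
    for i in incidents
]
mttr_minutes = sum(resolution_seconds) / len(resolution_seconds) / 60
print(f"MTTR: {mttr_minutes:.1f} minutes")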

The rate of escalations in your incident management data is also a useful measure of technical debt. Escalations occur when the first responder to an incident is not able to solve the problem and has to call in extra help. Frequent escalations likely mean one of two things. First, your admins may not be good at their jobs, but if that's the case, you would already know about it well before you review your incident management data. The second main cause of escalations is code that is too complex to be handled easily by whoever responds to an incident. If that's the kind of code your admins are dealing with when they answer alerts, there's a good chance the code was poorly written and is a source of technical debt.
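
A rough way to track that rate, again assuming a simple exported record format with an escalation count per incident (illustrative fields, not a specific tool's schema):

# Hypothetical incident records noting how many times each was escalated.
incidents = [
    {"id": "A1", "escalation_count": 0},
    {"id": "A2", "escalation_count": 2},
    {"id": "A3", "escalation_count": 1},
]

escalated = sum(1 for i in incidents if i["escalation_count"] > 0)
escalation_rate = escalated / len(incidents)
print(f"Escalation rate: {escalation_rate:.0%}")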

Finding the Source of Technical Debt

Beyond helping you trace general trends regarding your technical debt, incident management data is also handy for zeroing in on the source of a problem.

For example, if your MTTR for incidents related to a certain program is higher than your average MTTR, there's a good chance the program in question is generating technical debt. Similarly, if servers running one type of operating system account for a disproportionate number of alerts, there's probably a code or configuration flaw at play. That's a technical debt you can address.
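
As a sketch of that kind of breakdown, the snippet below groups resolution times by service and flags services whose MTTR sits well above the overall average. The service names and the 1.5x threshold are assumptions for illustration:

from collections import defaultdict

# Hypothetical per-incident resolution times (in minutes), tagged by service.
incidents = [
    {"service": "billing-api", "resolution_minutes": 95},
    {"service": "billing-api", "resolution_minutes": 120},
    {"service": "web-frontend", "resolution_minutes": 20},
    {"service": "web-frontend", "resolution_minutes": 35},
]

by_service = defaultdict(list)
for i in incidents:
    by_service[i["service"]].append(i["resolution_minutes"])

overall_mttr = sum(i["resolution_minutes"] for i in incidents) / len(incidents)

for service, times in by_service.items():
    service_mttr = sum(times) / len(times)
    if service_mttr > 1.5 * overall_mttr:  # arbitrary threshold for illustration
        print(f"{service}: MTTR {service_mttr:.0f} min vs. overall {overall_mttr:.0f} min"
              " - a likely technical debt hotspot")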

The cool thing about using incident management data to locate and address technical debt is that it doesn't require any significant amount of additional work. You already have monitoring systems in place, along with (hopefully) a central operations and reporting hub like PagerDuty. Taking advantage of these resources to find and fix technical debt doesn't require additional tools or investment. It helps you proactively make your code and operations more efficient, using the software you already have in place.


