By Lori MacVittie
October 25, 2014 02:00 PM EDT
Kirk Byers at SDN Central writes frequently on DevOps as it relates (and applies) to the network, and recently introduced a list of seven applicable DevOps principles in an article entitled "DevOps and the Chaos Monkey." On this list is the notion of reducing variation.
This caught my eye because reducing variation is a key goal of Six Sigma; in fact, its entire formula is based on measuring the impact of variation on results. The thought is that by measuring deviation from a desired outcome, you can immediately recognize whether changes to a process improve the consistency of the outcome. Quality is achieved by reducing variation, or so the methodology goes.
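That "measure deviation from a desired outcome" idea can be made concrete. Here's a minimal sketch (the numbers, target, and tolerance are invented for illustration) that summarizes how much a set of observed outcomes, say, deployment times, varies around a target, so you can compare consistency before and after a process change:

```python
import statistics

def variation_report(samples, target, tolerance):
    """Summarize deviation of observed outcomes from a desired target.

    Returns the mean, the standard deviation, and the number of samples
    falling outside target +/- tolerance (treated as "defects").
    """
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    defects = sum(1 for s in samples if abs(s - target) > tolerance)
    return mean, stdev, defects

# Hypothetical deployment times (minutes) before and after a process change:
before = [30, 45, 28, 60, 33, 52, 29, 48]
after = [31, 33, 30, 34, 32, 33, 31, 32]

_, sd_before, defects_before = variation_report(before, target=32, tolerance=5)
_, sd_after, defects_after = variation_report(after, target=32, tolerance=5)
# A lower standard deviation (and fewer out-of-tolerance results) after the
# change is evidence the process now produces a more consistent outcome.
```

The point isn't the arithmetic; it's that consistency only becomes visible once you define the desired outcome and measure deviation from it.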
This stems from Six Sigma's origins in lean manufacturing, where automation and standardization are commonly used to improve the quality of products produced, usually by reducing the number of defective products produced.
This is highly applicable to DevOps and the network, where errors are commonly cited as a significant source of lag in application deployment timelines, thanks to the time spent troubleshooting them. The relationship is easy to see: defective products are not all that different from defective services, regardless of the cause of the defect.
Number four on Kirk's list addresses this point directly:
#4: Reduce variation.
Variation can be good in some contexts, but in the network, variation introduces unexpected errors and unexpected behaviors.
Whether you manage dozens, hundreds, or thousands of network devices, how much of your configuration can be standardized? Can you standardize the OS version? Can you minimize the number of models that you use? Can you minimize the number of vendors?
Variation increases network complexity, testing complexity, and the complexity of automation tools. It also increases the knowledge that engineers must possess.
Obviously, there are cost and functional trade-offs here, but reducing variation should at least be considered.
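Kirk's questions, how many OS versions, models, and vendors are in play, can be answered directly from a device inventory. A minimal sketch (the inventory, field names, and values are purely illustrative; in practice this would come from your network source of truth):

```python
from collections import Counter

# Hypothetical device inventory -- hostnames, vendors, models, and OS
# versions are invented for illustration.
inventory = [
    {"hostname": "sw1",  "vendor": "acme",  "model": "X100", "os": "9.2.1"},
    {"hostname": "sw2",  "vendor": "acme",  "model": "X100", "os": "9.2.1"},
    {"hostname": "sw3",  "vendor": "acme",  "model": "X200", "os": "9.1.4"},
    {"hostname": "rtr1", "vendor": "other", "model": "R7",   "os": "15.6"},
]

def variation_summary(devices):
    """Count distinct values per attribute -- a rough measure of variation.

    Fewer distinct OS versions, models, and vendors means less to test,
    less automation complexity, and less knowledge engineers must carry.
    """
    return {
        attr: Counter(d[attr] for d in devices)
        for attr in ("vendor", "model", "os")
    }

summary = variation_summary(inventory)
```

Running a report like this periodically turns "reduce variation" from an aspiration into a tracked metric: you can watch the distinct-value counts shrink as standardization efforts take hold.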
What Kirk is saying without saying is that standardization improves consistency in the network. That's no surprise, as standardization is a key method of reducing operational overhead. Standardization (or "reducing variation," if you prefer) achieves this by addressing the network complexity that contributes heavily to operational overhead and to variation in outcome (aka errors).
That's because a key contributor to network complexity is the sheer number of boxes that make up the network and complicate its topology. These boxes are provisioned and managed according to their own unique paradigms, and thus increase the burden on operations and network teams by requiring familiarity with a large number of CLIs, GUIs and APIs. Standardization on a common platform relieves this burden by providing a common CLI, GUI and set of APIs that can be used to provision, manage and control critical services. The shift to a modularized architecture based on a standardized platform increases flexibility and the ability to rapidly introduce new services without incurring the additional operational overhead associated with new, single-service solutions. It reduces variation in provisioning, configuration and management (aka architectural debt).
SDN, on the other hand, tries to standardize network components through the use of common APIs, protocols, and policies. It seeks to reduce variation in interfaces and policy definitions so the components comprising the data plane can be managed as if they were standardized. That's an important distinction, though one best left for another day. Suffice it to say that standardization at the API or model layer can leave organizations with significantly reduced capabilities, as standardization almost always commoditizes functions down to the lowest common set of capabilities.
That is not to say that standardization at the API or protocol layer isn't beneficial. It certainly can and does reduce variation and introduce consistency. The key is to standardize on APIs or protocols that are supportive of the network services you need.
What's important is that standardization on a common service platform can also reduce variation and introduce consistency. Applying one or more standardization efforts should then, ostensibly, net higher benefits.