DevOps & SDN | @DevOpsSummit [#DevOps]

Whether it's DevOps or SDN, a key goal is the reduction of variation (complexity) in the network

Kirk Byers at SDN Central writes frequently on the topic of DevOps as it relates (and applies) to the network, and recently introduced a list of seven applicable DevOps principles in an article entitled "DevOps and the Chaos Monkey." On this list is the notion of reducing variation.

This caught my eye because reducing variation is a key goal of Six Sigma; in fact, its entire formula is based on measuring the impact of variation on results. The thought is that by measuring deviation from a desired outcome, you can immediately recognize whether changes to a process improve the consistency of the outcome. Quality is achieved by reducing variation, or so the methodology goes.
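To make that concrete, the standard Six Sigma arithmetic converts an observed defect rate into a "sigma level" via the inverse normal CDF. A minimal sketch (the 1.5-sigma shift is the conventional long-term adjustment used in Six Sigma practice; the example defect counts are made up):

```python
from statistics import NormalDist

def sigma_level(defects: int, opportunities: int) -> float:
    """Convert an observed defect rate into a Six Sigma process level.

    Uses the inverse normal CDF plus the conventional 1.5-sigma
    long-term shift applied in Six Sigma practice.
    """
    dpmo = defects / opportunities * 1_000_000  # defects per million opportunities
    yield_rate = 1 - dpmo / 1_000_000           # fraction of defect-free outcomes
    return NormalDist().inv_cdf(yield_rate) + 1.5

# A process producing 3.4 defects per million is, by definition, "six sigma".
print(round(sigma_level(34, 10_000_000), 1))   # → 6.0

# A change process that fails 5% of the time is far less consistent.
print(round(sigma_level(5, 100), 1))           # → 3.1
```

The same arithmetic applies to network operations: treat each change or deployment as an "opportunity" and each troubleshooting incident as a "defect," and you can track whether standardization is actually tightening the distribution.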

This stems from Six Sigma's origins in lean manufacturing, where automation and standardization are commonly used to improve product quality, usually by reducing the number of defective products produced.

This is highly applicable to DevOps and the network, where errors are commonly cited as a significant contributor to lag in application deployment timelines, thanks to the time spent troubleshooting them. It is easy enough to see the relationship: defective services are not all that different from defective products, regardless of the cause of the defect.

Number four on Kirk's list addresses this point directly:

#4: Reduce variation.

Variation can be good in some contexts, but in the network, variation introduces unexpected errors and unexpected behaviors.

Whether you manage dozens, hundreds, or thousands of network devices, how much of your configuration can be standardized? Can you standardize the OS version? Can you minimize the number of models that you use? Can you minimize the number of vendors?

Variation increases network complexity, testing complexity, and the complexity of automation tools. It also increases the knowledge that engineers must possess.

Obviously, there are cost and functional trade-offs here, but reducing variation should at least be considered.
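Kirk's questions can be answered empirically: count the distinct values along each dimension of your device inventory. A minimal sketch, assuming a hypothetical inventory of (vendor, model, os_version) tuples (the vendor and model names are invented for illustration):

```python
from collections import Counter

# Hypothetical inventory: one (vendor, model, os_version) tuple per device.
inventory = [
    ("vendorA", "m100", "9.1"),
    ("vendorA", "m100", "9.1"),
    ("vendorA", "m100", "9.3"),
    ("vendorB", "x7",   "4.2"),
    ("vendorB", "x7",   "4.2"),
    ("vendorB", "x9",   "4.2"),
]

def variation_report(devices):
    """Count distinct values per dimension; fewer distinct values = less variation."""
    vendors, models, versions = map(Counter, zip(*devices))
    return {
        "vendors": len(vendors),
        "models": len(models),
        "os_versions": len(versions),
        "distinct_profiles": len(set(devices)),  # unique full combinations to test and automate
    }

print(variation_report(inventory))
# → {'vendors': 2, 'models': 3, 'os_versions': 3, 'distinct_profiles': 4}
```

Each distinct profile is a configuration that must be separately tested, automated, and understood, which is exactly the complexity Kirk is urging teams to shrink.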

What Kirk is saying, without saying it, is that standardization improves consistency in the network. That's no surprise, as standardization is a key method of reducing operational overhead. Standardization (or "reducing variation," if you prefer) achieves this by addressing the network complexity that contributes heavily to operational overhead and to variation in outcomes (aka errors).

That's because a key contributor to network complexity is the sheer number of boxes that make up the network and complicate its topology. These boxes are provisioned and managed according to their own unique paradigms, increasing the burden on operations and network teams by requiring familiarity with a large number of CLIs, GUIs, and APIs. Standardization on a common platform relieves this burden by providing a common CLI, GUI, and set of APIs that can be used to provision, manage, and control critical services. The shift to a modularized architecture based on a standardized platform increases flexibility and the ability to rapidly introduce new services without incurring the additional operational overhead associated with new, single-service solutions. It reduces variation in provisioning, configuration, and management (aka architectural debt).
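The adapter pattern is one way to picture how a common platform absorbs per-vendor variation. A minimal sketch, where the `Device` interface and both vendor adapters are hypothetical, not any real product's API:

```python
from abc import ABC, abstractmethod

class Device(ABC):
    """Common provisioning interface: one API, regardless of vendor."""

    @abstractmethod
    def provision_vip(self, name: str, address: str) -> str: ...

class VendorACLIDevice(Device):
    def provision_vip(self, name, address):
        # Translate the common call into vendor A's CLI dialect.
        return f"vendorA# create virtual {name} ip {address}"

class VendorBAPIDevice(Device):
    def provision_vip(self, name, address):
        # Translate the same call into vendor B's REST-style payload.
        return f"POST /api/vips {{'name': '{name}', 'addr': '{address}'}}"

# Operations teams learn one interface; the adapters absorb the variation.
for device in (VendorACLIDevice(), VendorBAPIDevice()):
    print(device.provision_vip("web-vip", "10.0.0.10"))
```

The operator-facing call never changes; only the translation layer does, which is the essence of reducing variation in provisioning and management.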

On the other hand, SDN tries to standardize network components through the use of common APIs, protocols, and policies. It seeks to reduce variation in interfaces and policy definitions so that the components comprising the data plane can be managed as if they were standardized. That's an important distinction, though one best left for another day. Suffice it to say that standardization at the API or model layer can leave organizations with significantly reduced capabilities, as standardization almost always commoditizes functions down to the lowest common set of capabilities.

That is not to say that standardization at the API or protocol layer isn't beneficial. It certainly can and does reduce variation and introduce consistency. The key is to standardize on APIs or protocols that are supportive of the network services you need.

What's important is that standardization on a common service platform can also reduce variation and introduce consistency. Applying one or more standardization efforts should, then, ostensibly net even higher benefits.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
