By Lori MacVittie
September 29, 2015 06:00 AM EDT
What DevOps Can Do About Cloud's Predictable Provisioning Problem
Go ahead. Name a cloud environment that doesn't include load balancing as the key enabler of elastic scalability. I've got coffee, so it's all good... take your time...
Exactly. Load balancing - whether implemented as traditional high availability pairs or clustering - provides the means by which applications (and infrastructure, in many cases) scale horizontally. It is load balancing that is at the heart of elastic scalability models, and that provides a means to ensure availability and even improve performance of applications.
But simple load balancing alone isn't enough. Too many environments and architectures are wont to toss a simple, network-based solution at the problem and call it a day. Rudimentary load balancing techniques that rely on a raw metric, without context, are doomed to fail eventually. A simple number like "connection count" does not provide enough context to make an intelligent load balancing decision. An application instance may currently have only 100 connections while another has 500, but if the capacity of the former is only 200 while the capacity of the latter is 5000, a decision based on "least connections" is the wrong one.
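To make the arithmetic plain, here's a minimal sketch in Python using the numbers above (the instance names and selection logic are illustrative, not any real load balancer's API):

    # Two instances, one context-free metric - the example above in code.
    instances = [
        {"name": "app-1", "connections": 100, "capacity": 200},   # 50% utilized
        {"name": "app-2", "connections": 500, "capacity": 5000},  # 10% utilized
    ]

    # Naive least-connections picks app-1, the *more* saturated instance.
    least_conns = min(instances, key=lambda i: i["connections"])

    # Context-aware selection compares connections relative to capacity.
    least_utilized = min(instances, key=lambda i: i["connections"] / i["capacity"])

    print(least_conns["name"])     # app-1 (the wrong choice)
    print(least_utilized["name"])  # app-2 (the right choice)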
Application-aware networking tells us that load balancing decisions - even rudimentary ones - should be made based on a variety of variables such as application load, response time, and capacity. That requires a modern load balancing service capable of not just tracking these metrics but gathering them directly from the application instances under management.
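What might gathering those metrics look like? A hypothetical sketch, in which the load balancing service polls each instance for its own view of load and blends several variables into one score; the /metrics endpoint, field names, and weights are all assumptions for illustration only:

    import json
    import urllib.request

    # Relative importance of each variable; these weights are assumptions.
    WEIGHTS = {"cpu": 0.4, "latency": 0.3, "saturation": 0.3}

    def fetch_metrics(host):
        # Assumes each instance exposes its own metrics as JSON, e.g.
        # {"cpu": 0.5, "response_time_ms": 120, "connections": 100, "capacity": 200}
        with urllib.request.urlopen(f"http://{host}/metrics", timeout=2) as resp:
            return json.load(resp)

    def score(m):
        # Lower is better: blend load, latency, and connection saturation.
        return (WEIGHTS["cpu"] * m["cpu"]
                + WEIGHTS["latency"] * m["response_time_ms"] / 1000.0
                + WEIGHTS["saturation"] * m["connections"] / m["capacity"])

    def pick_instance(hosts):
        return min(hosts, key=lambda h: score(fetch_metrics(h)))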
In data centers, it is best practice to deploy application instances on similarly capable hardware. This is because doing so provides predictable capacity and performance that can be used to better scale an application and ensure compliance with service level expectations.
When moving to a cloud environment - whether public or private - this practice can be lost. In the public cloud, that's because you have no control over the underlying hardware capabilities - you can only specify the compute capabilities of an instance. In a private cloud, you have more control over this, but you may not have provisioning systems intelligent enough to provide the visibility you need to make a provisioning decision in real time.
That can lead to problems. Consider this nugget from a recent blog post:
One thing that I’ve learned is that you can end up on a variety of different hardware but they don’t always act the same. Stackdriver has been a great help with this. For example, if we’re firing up 6 web servers, Stackdriver can help us see that 5 are cruising along at 20% CPU, while one is at 50% CPU. It allows us to see and address that anomaly.
Let's assume, for a moment, this is true. Because it can be. Anyone who's ever dealt with hardware servers knows it's true - hardware, though matched in terms of basic capacity, can wind up performing differently. That's due to a number of factors, including the natural degradation of capacity over time from "wear and tear" as well as the possibility of misconfiguration or the presence of some other artifact or code that may be eating up cycles.
In any case, the reason is not as important as the fact that this happens. It's important because we know operational axiom #2: as load increases, performance decreases. It also follows that as load increases, available capacity decreases because, well, capacity and load go hand in hand.
Thus, in a cloud environment the aforementioned situation presents a problem: one of the "servers" is at a disadvantage and is not going to perform as well as the other five. Not only that, but its capacity as understood (and likely configured manually) by the load balancing service is now inaccurate. The load balancing service believes all six servers have a capacity of X connections, but the reality is that a higher CPU utilization rate can reduce that.
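One purely illustrative way to model that erosion, using the CPU numbers from the quote above (the linear discount is an assumption; a real service would calibrate against observed behavior):

    # Illustrative only: discount an instance's configured connection
    # capacity by its observed CPU headroom. The linear model and the
    # numbers are assumptions, not any product's actual behavior.

    def effective_capacity(configured_capacity, cpu_utilization):
        headroom = max(0.0, 1.0 - cpu_utilization)
        return int(configured_capacity * headroom)

    # Five servers cruising at 20% CPU vs. the one anomaly at 50%:
    print(effective_capacity(1000, 0.20))  # 800 - close to configured
    print(effective_capacity(1000, 0.50))  # 500 - half the assumed headroom

A static configuration of "capacity: 1000" for all six instances overstates what the hot one can actually handle.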
A simple load balancing service is not going to adjust, because it doesn't have the visibility or intelligence to make that connection. Whether the service is configured to use round robin (almost never a good idea) or least connections (an acceptable choice when all other factors are predictable), service levels are going to degrade unless the service is aware enough to recognize the discordance as it occurs.
Thus, we end up with a situation in which predictable performance and availability are, well, not necessarily predictable. That introduces operational risk that must, somehow, be countered.
Correcting for Unpredictable Provisioning
In enterprise-class data centers, application-aware networking services are able to factor in not just connection counts and response times but also server load and a variety of other variables that can offset the unpredictability of provisioning processes. As noted earlier, application-aware load balancing services have the visibility and programmability necessary to monitor and measure the status of application instances and servers across a variety of metrics, including CPU utilization (load).
What's perhaps even more interesting is that programmability makes the gathering and monitoring of those statistics extensible. If an application instance can present a variable you deem critical for making load balancing decisions, the programmability of the load balancing service makes it possible to incorporate that variable into its algorithm (or to create a completely new one, if that's what it takes).
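As a sketch of what that extensibility might look like - not any vendor's actual API - imagine registering an application-supplied variable as a weighted factor in the scoring algorithm:

    # Hypothetical extensibility hook. All names here are invented for
    # illustration; the point is the pattern, not a real interface.

    custom_factors = {}

    def register_factor(name, weight, extract):
        """extract pulls the custom variable out of an instance's metrics."""
        custom_factors[name] = (weight, extract)

    def score(metrics):
        # Lower is better: start from connection saturation...
        base = metrics["connections"] / metrics["capacity"]
        # ...then fold in whatever factors operators have registered.
        extra = sum(w * extract(metrics) for w, extract in custom_factors.values())
        return base + extra

    # Example: the app reports its work-queue depth, which we deem critical.
    register_factor("queue_depth", 0.5, lambda m: m["queue_depth"] / 100)

    print(score({"connections": 100, "capacity": 200, "queue_depth": 40}))  # 0.7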
All these factors combine to answer the question, "Why does the network need to be dynamic?" or "Why do we need SD<insert preferred "N" or "DC" here>?"
Dynamic implies an ability to react in the face of unanticipated (unpredictable) situations. Unpredictable provisioning that can result in inconsistent capacity and performance has to be countered somewhere, and that somewhere is going to be upstream of the application instances exhibiting erratic behavior. Upstream is usually (and almost always in any of today's scalable architectures) an ADC or load balancing service.
That load balancing service must be application-aware and programmable if it's going to execute on its mission of maintaining performance and availability of applications in the face of the potentially unpredictable provisioning processes of cloud computing environments.
DevOps: More than just deployment
DevOps practitioners must become adept not only at understanding the complex relationships among performance, availability, capacity, and load, but also at turning business and operational expectations into reality by taking advantage of both application and network infrastructure capabilities.
DevOps isn't, after all, just about scripting and automation. Those are tools that enable DevOps practitioners to do something, and that something is more than just deploying apps - it's delivering them, too.