By Lori MacVittie
August 27, 2014 10:00 AM EDT
Elasticity is hailed as one of the biggest benefits of cloud and software-defined architectures. It's more efficient than traditional scalability models that only went one direction: up. It's based on the premise that wasting money and resources all the time just to ensure capacity on a seasonal or periodic basis is not only unappealing, but unnecessary in the age of software-defined everything.
The problem is that scaling down is much, much harder than scaling up. Oh, not from the perspective of automation and orchestration. That is, as the kids say these days, easy peasy lemon squeezy. APIs have made the ability to add and remove resources simplicity itself. There isn't a load balancing service available today without this capability - at least not one that's worth having.
But if you peek behind the kimono, you're going to quickly find out that perhaps it isn't as easy peasy as it first appears. Scaling down requires a bit more finesse than scaling up, at least if you care about the end user experience.
Consider for a moment the process of scaling up (or out, as is more often the case). A certain threshold is reached that indicates a need for more capacity. More often than not this metric is based on connections, with each application instance able to handle an approximate number of connections. When that is reached, a new application instance (node) is launched, it's added to the load balanced pool, and voila! More capacity, more connections, more consumers.
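As a rough illustration, that scale-out decision can be sketched as a simple connection-count threshold check. The names and numbers here are hypothetical; a real autoscaler would pull these metrics from the load balancer's API and launch instances through the provider's orchestration layer:

```python
# Hypothetical sketch of a connection-based scale-out check.
MAX_CONNECTIONS_PER_INSTANCE = 1000  # assumed per-instance capacity

def should_scale_out(connection_counts):
    """Return True when total connections approach the pool's capacity.

    connection_counts: current connection count for each instance in the pool.
    """
    total = sum(connection_counts)
    capacity = len(connection_counts) * MAX_CONNECTIONS_PER_INSTANCE
    # Trigger early (at 80%) so the new instance is up before the hard limit.
    return total >= capacity * 0.8

# Three instances near capacity: 2,720 of 3,000 connections -> scale out.
print(should_scale_out([900, 950, 870]))  # True
```

The 80% trigger reflects a common pattern: launching an instance takes time, so the threshold sits below true capacity to absorb growth while the new node boots.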
At some point that same threshold is breached on the way back down. As capacity demand wanes, the total connection count (as measured by all connections across all instances) decreases. When that count crosses a pre-determined threshold, one instance is shut down, it's removed from the pool, and voila!
You've just disconnected a whole bunch of consumers - many of whom might have been in the middle of a transaction (you know, trying to give you money).
Needless to say, they aren't happy.
That's because many of the systems implementing elastic scale these days haven't been load balancing applications for nearly twenty years, and so they don't recognize the importance of graceful degradation, or quiescence.
Graceful degradation was used in the olden days (and still is today, to be fair) when initiating a maintenance cycle. The requirement was that a server (today an instance) needed to be shut down for maintenance but it could not disrupt current user sessions. The load balancer would be "told" to begin quiescing (or bleeding off) connections in preparation for downtime. The service would immediately stop sending new connections to that server (instance) while allowing existing connections to complete. In this way, consumers could complete their tasks and when no connections were left, the server would be shut down and maintenance could begin.
This is not a simple thing to achieve. The load balancing service must be intelligent enough to stop sending new connections to an instance but not interrupt existing connections. It must be able to manage both active and semi-active instances for the same application, which requires stateful management across all application instances.
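A minimal sketch of that behavior might look like the following. The class and method names are hypothetical, and a production load balancer would track connection state per-flow rather than as a simple counter, but the core logic is the same: a draining instance receives no new connections, and it is only removed once its existing connections have completed:

```python
import time

class Instance:
    """One application instance (node) in the load-balanced pool."""
    def __init__(self, name):
        self.name = name
        self.draining = False          # True once quiescence begins
        self.active_connections = 0    # existing sessions still in flight

class Pool:
    """Hypothetical load-balancer pool with connection draining."""
    def __init__(self, instances):
        self.instances = instances

    def pick(self):
        # New connections never go to a draining instance; among the
        # active instances, pick the least loaded.
        candidates = [i for i in self.instances if not i.draining]
        return min(candidates, key=lambda i: i.active_connections)

    def drain(self, instance, poll_interval=0.01):
        # Quiesce: stop sending new work, then wait for existing
        # connections to bleed off before removing the instance.
        instance.draining = True
        while instance.active_connections > 0:
            time.sleep(poll_interval)
        self.instances.remove(instance)  # now safe to shut down
```

In practice a drain is usually bounded by a timeout as well, since a long-lived or hung connection could otherwise block the scale-down indefinitely.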
In today's models, this means that elasticity must embrace a more graceful, elegant means of scaling down than just suddenly killing an instance and all its associated connections.
With more and more apps - both those aimed at employees and those designed for consumers - residing in cloud environments and managed service environments, it is critical to evaluate how such providers support elasticity. Methods that disrupt productivity or interrupt consumer transactions are hardly worth the few pennies saved by immediately shutting down an instance when a threshold is crossed.
It behooves emerging software-defined models and devops that plan on managing scale automatically to recognize that elasticity isn't just about responding to thresholds; it's about responding seamlessly - in both directions.