
Not All Virtual Servers are Created Equal

How to optimize compute resources in a heterogeneous environment using weight/ratio-based load balancing

Unless you’re starting from scratch, your data center is full of physical servers of various and sundry sizes, colors, shapes, and compute resources. And even if you are starting from scratch and have beautiful racks of identical hardware, it’s not likely to stay that way, if for no other reason than that hardware moves on at an astonishing rate these days. So you’ve almost certainly got (or will have) a physically heterogeneous environment in terms of hardware compute resources.

When you’re scaling up servers – whether solely to assure availability or for capacity – you will end up with instances running on servers of differing capacity. Or at least you’d better, if availability is part of the equation. Now, in a traditional environment that would cause potential issues, as one of the hallmarks of a highly available architecture is that if the primary server fails the secondary must be able to handle the load. All of it. In a virtualized environment that’s not necessarily the case, as you may be able to simply bring up two or three instances with less capacity to meet demand if you have the physical resources available.

Here’s the catch: your infrastructure needs to understand the capacity of each server (physical or virtual) in order to maximize resources available. Specifically, the load balancing solution – whether a traditional “load balancer” or part of an application delivery controller – must be able to distribute requests based on what resources are available on any given instance. That means if an instance is running on a physical server with fewer total resources available than another instance, the instance with fewer resources should be used less frequently.

It is the fact that data centers are heterogeneous and comprised of myriad physical servers of varying capacity that makes it important for the folks architecting cloud environments to understand what’s going on “under the hood.” 


WEIGHTS and MEASURES

Not all servers are created equal, but if you’re consolidating and trying to eke out every last drop of CPU and RAM from your physical hardware to reduce capital expenditures, you might need to get more creative with how you’re distributing your application load.

Using weighted or ratio-based load balancing in a heterogeneous environment offers the convenience and simplicity of traditional, simple load balancing algorithms with an eye toward balancing the differences inherent in physical hardware. Or, potentially in emerging data center models, the differences in virtual instances. Because the limitations of physical hardware necessitate limitations on virtual instances, particularly if more than one instance of a virtual container is running on the same hardware, it’s important for the solution that distributes requests among application instances to understand that some have more headroom than others, as it were.

In load balancing there are a few “industry standard” old standby algorithms. These algorithms are often also implemented by application server clustering solutions, and should be fairly familiar to network and application folks alike: round robin, least connections, and fastest response time.
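The first two of those standbys can be sketched in a few lines. This is a minimal Python illustration, not any particular product's implementation; the member names and connection counts are invented for the example.

```python
import itertools

# Hypothetical three-member pool; names are illustrative only.
servers = ["app1", "app2", "app3"]

# Round robin: hand each new request to the next member in order,
# wrapping back to the first after the last.
rr = itertools.cycle(servers)

# Least connections: track open connections per member and pick
# whichever currently has the fewest.
open_connections = {"app1": 12, "app2": 4, "app3": 9}

def least_connections(conns):
    """Return the member with the fewest open connections."""
    return min(conns, key=conns.get)
```

Note that neither algorithm knows anything about the capacity of the hardware behind each member; that blindness is exactly what ratio-based balancing addresses.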

In most load balancers there is also the addition of a ratio-based algorithm in which “weights” are assigned to each member of a (pool|farm|cluster). Requests are distributed to each member based on that weight, which is really used more like a percentage ratio than anything else.
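The ratio behavior described above can be sketched naively by expanding each pool member according to its weight and cycling through the result. The pool members and weights here are invented for illustration; real load balancers use more sophisticated interleaving, but the distribution works out the same.

```python
import itertools
from collections import Counter

# Hypothetical pool of (member, weight) pairs. The weights act like a
# ratio: "big" should see three requests for every one "small" sees.
pool = [("big", 3), ("medium", 2), ("small", 1)]

def weighted_cycle(pool):
    """Naive ratio-based distribution: repeat each member by its
    weight, then cycle through the expanded list forever."""
    expanded = [name for name, weight in pool for _ in range(weight)]
    return itertools.cycle(expanded)

picks = weighted_cycle(pool)
one_round = Counter(next(picks) for _ in range(6))
# Over one full round of six requests: big gets 3, medium 2, small 1
```

Because the weights sum to six, every six requests reproduce the exact ratio, which is why weights behave more like percentages of traffic than absolute values.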

Using a ratio-based load balancing algorithm in a virtual or cloud environment in which the hardware and/or virtual containers may have different resource ceilings affords architects the ability to better distribute requests according to the capacity and health of each instance. Without taking these physical limitations into consideration it is easy to overwhelm one system while leaving another idle, which runs contrary to the concept behind cloud computing and on-demand data centers, in which every ounce of compute resources is put to use to meet demand.
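One simple way to tie weights back to those resource ceilings is to derive them directly from capacity. This is a sketch under an invented scheme (host names and vCPU counts are hypothetical): normalize each member's capacity against the smallest member, so the weight ratio mirrors the capacity ratio.

```python
# Hypothetical capacity figures (say, vCPUs) per physical host.
capacity = {"host-a": 8, "host-b": 4, "host-c": 2}

def weights_from_capacity(capacity):
    """Turn raw capacity numbers into ratio weights by normalizing
    against the smallest member, so an 8-vCPU host draws four times
    the traffic of a 2-vCPU host."""
    smallest = min(capacity.values())
    return {name: round(cap / smallest) for name, cap in capacity.items()}

# weights_from_capacity(capacity) -> {"host-a": 4, "host-b": 2, "host-c": 1}
```

In practice you'd want the weights to reflect whatever resource actually constrains the application (CPU, RAM, I/O), but the principle is the same: let the ratio encode the headroom.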

Obviously there are other (and perhaps better) ways to make decisions on distributing requests when the environment comprises instances of varying capacities and resources. A ratio-based load balancing algorithm is one of the simplest and easiest to implement, yet it affords far better use of resources than an algorithm that ignores the physical constraints of disparate systems.

 


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
