


Bare Metal Cloud: A Non-Virtualized Cloud Option for Performance-Sensitive Workloads





NEW YORK, Jan. 23, 2014 /PRNewswire/ -- Reportlinker.com announces that a new market research report is available in its catalogue:

Bare Metal Cloud: A Non-Virtualized Cloud Option for Performance-Sensitive Workloads

In this SPIE, Stratecast examines the concept of the bare metal cloud from the provider and the customer perspective. We compare benefits and challenges of bare metal cloud configurations with the more common virtualized cloud configurations. Finally, we look at the bare metal cloud offers from Internap and SoftLayer, an IBM company.


Does a cloud configuration require virtualization? It turns out, the answer is "no."

In fact, the National Institute of Standards and Technology (NIST), whose cloud definition is widely accepted in the industry, omits virtualization as a criterion for cloud. NIST's "essential characteristics" include on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service—but not virtualization.

This may surprise many in the IT community who have always assumed that a virtualized server infrastructure was necessary to provide the flexibility and scalability associated with cloud. However, the emergence of "bare metal" clouds—that is, clouds that do not utilize virtualization—is forcing a re-examination of what it takes to offer a cloud service. The bare metal options provide the flexibility and scalability associated with virtualized offers, while promising higher levels of performance and consistency. Currently, two cloud leaders—SoftLayer (an IBM company) and Internap—have developed a bare metal option as part of their cloud portfolios. Both tout their bare metal services as a way to differentiate themselves in the crowded cloud service market. Both have also had success in attracting cloud-skeptical businesses and performance-sensitive workloads that previously may not have been considered ideal for cloud deployment.

Virtualization – The Value and the Cost

Server virtualization is well established in enterprise data centers and in hosting and cloud centers. More than half of businesses utilize server virtualization, according to the 2013 Stratecast | Frost & Sullivan Cloud User Survey.

Virtualization separates the logical from the physical components of the workload. Application code and associated operating system are packaged neatly into a virtual machine (VM). Multiple VMs, regardless of operating system, can share a physical server; a hypervisor installed on the server allocates resources and acts as a translator, making each VM believe it has full access to the server resources.

The virtualized workload is self-contained and highly portable. Like a turtle or a motor home, it carries all it needs on its back—operating system and application code—and isn't fussy about where it sets up housekeeping. Thus, IT technicians do not have to custom-configure a server exoskeleton for a virtualized workload.

As such, virtualization is associated with infrastructure conservation and flexibility. Top benefits of virtualization include:
• Deferral of capital expenses: By accommodating multiple virtualized workloads per physical server, virtualization optimizes server utilization, and reduces the need for additional servers or expanded floorspace.
• Faster time to deploy workloads: In a virtualized environment, VMs can be tested, deployed, spun down, and moved via a management console, without requiring on-site technicians to perform labor-intensive tasks to configure the servers. This rapid deployment reduces operating costs and decreases time to provision servers.
• Support for high availability environments: In a virtualized server environment, routine hardware maintenance or unexpected interruptions do not need to shut down applications. Because VMs are portable, they can be moved to another server, in house or outside, that has spare capacity.
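The consolidation benefit described above can be sketched with a simple packing exercise. The following is an illustrative example, not drawn from the report: the VM sizes, server capacity, and the first-fit heuristic are all assumptions chosen to show how virtualization lets many workloads share fewer physical servers.

```python
# Hypothetical sketch: first-fit consolidation of VM resource demands onto
# physical servers. All numbers here are illustrative assumptions.

def first_fit(vm_sizes, server_capacity):
    """Pack VM demands onto servers using first-fit; returns per-server loads."""
    servers = []
    for size in vm_sizes:
        for i, load in enumerate(servers):
            if load + size <= server_capacity:
                servers[i] += size  # place VM on the first server with room
                break
        else:
            servers.append(size)  # no room anywhere: provision a new server
    return servers

vms = [2, 4, 1, 3, 2, 2, 4, 1, 3, 2]  # vCPU demands of ten hypothetical VMs
loads = first_fit(vms, server_capacity=8)
print(f"{len(vms)} VMs fit on {len(loads)} servers: {loads}")
```

In this toy scenario, ten workloads that would otherwise occupy ten dedicated machines are consolidated onto four shared servers, which is the capital-deferral effect in miniature.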

The conclusion to draw from these benefits is that virtualization technologies offer the greatest value to the infrastructure owner. By optimizing hardware utilization, deferring costs, and allowing for flexibility, virtualization allows infrastructure to be managed more efficiently, easily, and cost-effectively.

But in a cloud environment, infrastructure responsibility falls to the cloud service provider, so those benefits of virtualization do not automatically accrue to the customer. The enterprise customer can benefit indirectly from virtualization if the provider chooses to pass on cost savings in the form of lower rates, for example. Nonetheless, in terms of end-user experience or application performance, a virtualized workload offers no advantages over a non-virtualized workload.

In fact, virtualization comes at a cost to the user. For some workloads, virtualization can offer infrastructure efficiency for the cloud service provider, at the cost of diminished performance for the customer. Primary sources of concern are "noisy neighbor syndrome" and the "hypervisor tax."

Noisy Neighbor

As noted, virtualization is an excellent way to optimize use of server capacity. By loading multiple virtualized workloads onto a shared physical server, overall resource utilization improves. However, the different applications all contend for the same processor and memory resources, which inevitably brings the risk that computing resources will not be available at the capacity level, and at the instant, they are needed. For many apps, the risk may be minimal; if an internal intranet page occasionally loads slowly, for example, employees will not go elsewhere. Moreover, the performance impact is likely to be sporadic and unpredictable, occurring only when multiple apps attempt to access the shared resources simultaneously. However, for latency-sensitive applications such as e-commerce, gaming, and streaming media, any delay can be intolerable.

In a private data center, the enterprise can control the risks of resource contention by making decisions regarding assignment of VMs across available physical servers, monitoring and balancing loads as needed. However, that level of control is not possible for customers of a shared cloud, as only the provider has visibility across the entire, multi-tenant environment. In a shared cloud environment, customers have little control over where their VMs are loaded and which other customers' workloads are sharing the processor. Furthermore, like an airline overbooking flights to ensure full planes, the cloud service provider has an incentive to "oversubscribe" each physical server: the greater the resource utilization, the more customers can be served at a lower cost per customer.
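The oversubscription risk can be made concrete with a small simulation. The scenario below is a hypothetical sketch, not data from the report: the tenant count, burst probabilities, and capacity figures are assumptions chosen to show how contention emerges when aggregate demand occasionally exceeds what the shared server can supply.

```python
# Hypothetical sketch of "noisy neighbor" risk under oversubscription:
# N tenants share one server, each independently bursting to high CPU demand.
# All figures (burst probability, demands, capacity) are illustrative assumptions.
import random

def contention_rate(tenants, burst_prob, burst_demand, idle_demand,
                    capacity, trials=100_000, seed=1):
    """Fraction of sampled instants where aggregate demand exceeds capacity."""
    rng = random.Random(seed)
    over = 0
    for _ in range(trials):
        demand = sum(burst_demand if rng.random() < burst_prob else idle_demand
                     for _ in range(tenants))
        if demand > capacity:
            over += 1
    return over / trials

# Eight tenants on a server sized for about four simultaneous bursts
# (roughly 2:1 oversubscription).
rate = contention_rate(tenants=8, burst_prob=0.3,
                       burst_demand=1.0, idle_demand=0.1, capacity=4.0)
print(f"Contention in {rate:.1%} of sampled instants")
```

Even though each tenant bursts only 30 percent of the time, the shared server is overcommitted often enough that contention is sporadic yet recurring, which is exactly the unpredictable slowdown latency-sensitive workloads cannot tolerate.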

For customers eager to avoid the "noisy neighbor" risk, many providers offer a hosted private cloud or virtualized private cloud option. In these services, the server hardware and, perhaps, other infrastructure components are dedicated to a single enterprise. Thus, the virtualized workloads that share physical server resources all belong to the same enterprise, giving the enterprise some control over capacity utilization.

Hypervisor Tax

Even if there are no strangers sharing the facility—for example, in a dedicated or private cloud environment—virtualization extracts a toll on available capacity. The "hypervisor tax" is the amount of processing capacity that is consumed by the hypervisor layer. While virtualization providers have enhanced their hypervisor software to be as thin as possible, a hypervisor can still consume a meaningful percentage of the available capacity of a server. For high-performance workloads that require large amounts of capacity, the tax can be significant, even impacting performance of the application.
In addition, as with every additional software layer, the hypervisor layer subjects data to delay; minuscule amounts, to be sure, but noticeable for latency-sensitive workloads.
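A back-of-envelope calculation shows how the hypervisor tax compounds at scale. The overhead fractions below are illustrative assumptions for the sketch, not measured figures from the report or from any specific hypervisor.

```python
# Back-of-envelope sketch of the "hypervisor tax." The overhead fractions
# are illustrative assumptions, not measured figures.

def effective_capacity(raw_cores, overhead_fraction):
    """Capacity left for guest workloads after hypervisor overhead."""
    return raw_cores * (1 - overhead_fraction)

raw = 32  # physical cores on a hypothetical server
for overhead in (0.05, 0.10, 0.15):
    usable = effective_capacity(raw, overhead)
    print(f"{overhead:.0%} overhead -> {usable:.1f} of {raw} cores usable")
```

For a single server the loss may look tolerable, but across a fleet of hundreds of machines the same fraction represents entire servers' worth of capacity paid to the hypervisor layer, which is the capacity a bare metal configuration reclaims.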

Thus, enterprises face a trade-off in running their high-capacity or high-performance workloads in the cloud: trade optimal performance for the efficiency and low cost structure of the virtualized cloud, or trade efficiency and low cost for high performance in a dedicated hosting environment.

But suppose enterprises had the choice of a low-cost, scalable, easily managed hosting option without virtualization? This is the operating principle behind the bare metal cloud.

Table of Contents


SPIE 2014 #2 - January 17, 2014
1. Introduction
2. Virtualization - The Value and the Cost
3. Bare Metal Cloud
4. Challenges to Providing a Bare Metal Cloud Option
5. Bare Metal Cloud Services
6. Stratecast - The Last Word
7. About Stratecast
8. About Frost & Sullivan

To order this report: Bare Metal Cloud: A Non-Virtualized Cloud Option for Performance-Sensitive Workloads

Contact Clare: [email protected]
US: (339)-368-6001
Intl: +1 339-368-6001

SOURCE Reportlinker


Copyright © 2007 PR Newswire. All rights reserved. Republication or redistribution of PRNewswire content is expressly prohibited without the prior written consent of PRNewswire. PRNewswire shall not be liable for any errors or delays in the content, or for any actions taken in reliance thereon.
