By PR Newswire
January 23, 2014 08:20 AM EST
NEW YORK, Jan. 23, 2014 /PRNewswire/ -- Reportlinker.com announces that a new market research report is available in its catalogue:
Bare Metal Cloud: A Non-Virtualized Cloud Option for Performance-Sensitive Workloads
In this SPIE, Stratecast examines the concept of the bare metal cloud from the provider and the customer perspective. We compare benefits and challenges of bare metal cloud configurations with the more common virtualized cloud configurations. Finally, we look at the bare metal cloud offers from Internap and SoftLayer, an IBM company.
Does a cloud configuration require virtualization? It turns out, the answer is "no."
In fact, the National Institute of Standards and Technology (NIST), whose cloud definition is widely accepted in the industry, omits virtualization as a criterion for cloud. NIST's "essential characteristics" include on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service—but not virtualization.
This may surprise many in the IT community who have always assumed that a virtualized server infrastructure was necessary to provide the flexibility and scalability associated with cloud. However, the emergence of "bare metal" clouds—that is, clouds that do not utilize virtualization—is forcing a re-examination of what it takes to offer a cloud service. The bare metal options provide the flexibility and scalability associated with virtualized offers, while promising higher levels of performance and consistency. Currently, two cloud leaders—SoftLayer (an IBM company) and Internap—have developed a bare metal option as part of their cloud portfolios. Both tout their bare metal services as a way to differentiate themselves from the crowded cloud service market. Both have also had success in attracting cloud-skeptical businesses and performance-sensitive workloads that previously may not have been considered ideal for cloud deployment.
Virtualization – The Value and the Cost
Server virtualization is well established in enterprise data centers and in hosting and cloud centers. More than half of businesses utilize server virtualization, according to the 2013 Stratecast | Frost & Sullivan Cloud User Survey.
Virtualization separates the logical from the physical components of the workload. Application code and associated operating system are packaged neatly into a virtual machine (VM). Multiple VMs, regardless of operating system, can share a physical server; a hypervisor installed on the server allocates resources and acts as a translator, making each VM believe it has full access to the server resources.
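The allocator role described above can be sketched in a few lines of Python. The `Hypervisor` class and its capacity figures below are purely illustrative assumptions, not any vendor's API:

```python
# Illustrative sketch (hypothetical class, not a real hypervisor): the
# hypervisor pools one physical server's resources and hands out virtual
# slices, so VMs with different operating systems can share one host.

class Hypervisor:
    def __init__(self, total_cpus, total_ram_gb):
        self.free_cpus = total_cpus
        self.free_ram_gb = total_ram_gb
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb, os):
        # Allocate a slice of the shared physical server to the new VM.
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError("insufficient physical capacity")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb, "os": os}
        return self.vms[name]

host = Hypervisor(total_cpus=16, total_ram_gb=64)
host.create_vm("web", cpus=4, ram_gb=8, os="Linux")
host.create_vm("db", cpus=8, ram_gb=32, os="Windows")
print(host.free_cpus, host.free_ram_gb)  # 4 24 -> remaining pooled capacity
```

Each VM sees only its own allocation; from the workload's point of view, the slice looks like a whole machine.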
The virtualized workload is self-contained and highly portable. Like a turtle or a motor home, it carries all it needs on its back—operating system and application code—and isn't fussy about where it sets up housekeeping. Thus, IT technicians do not have to custom-configure a server exoskeleton for a virtualized workload.
As such, virtualization is associated with infrastructure conservation and flexibility. Top benefits of virtualization include:
• Deferral of capital expenses: By accommodating multiple virtualized workloads per physical server, virtualization optimizes server utilization, and reduces the need for additional servers or expanded floorspace.
• Faster time to deploy workloads: In a virtualized environment, VMs can be tested, deployed, spun down, and moved via a management console, without requiring on-site technicians to perform labor-intensive tasks to configure the servers. This rapid deployment reduces operating costs and decreases time to provision servers.
• Support for high availability environments: In a virtualized server environment, routine hardware maintenance or unexpected interruptions do not need to shut down applications. Because VMs are portable, they can be moved to another server, in house or outside, that has spare capacity.
The resulting conclusion from these generalized benefits is that virtualization technologies offer the greatest benefit to the infrastructure owner. By optimizing hardware utilization, deferring costs, and allowing for flexibility, virtualization allows infrastructure to be managed more efficiently, easily and cost-effectively.
But in a cloud environment, infrastructure responsibility falls to the cloud service provider, so those benefits of virtualization do not automatically accrue to the customer. The enterprise customer can benefit indirectly from virtualization if the provider chooses to pass on cost savings in the form of lower rates, for example. Nonetheless, in comparing the end-user experience or application performance, a virtualized workload offers no advantages over a non-virtualized workload.
In fact, virtualization comes at a cost to the user. For some workloads, virtualization can offer infrastructure efficiency for the cloud service provider, at the cost of diminished performance for the customer. Primary sources of concern are "noisy neighbor syndrome" and the "hypervisor tax."
As noted, virtualization is an excellent way to optimize use of server capacity. By loading multiple virtualized workloads on a shared physical server, overall resource utilization improves. However, the different applications are all contending for the same processor and memory resources, which inevitably brings the risk that the computing resource will not be available at the capacity level and at the instant it is needed. For many apps, the risk may be minimal—for example, if an internal intranet page occasionally loads slowly, employees will not go elsewhere. Moreover, the performance impact is likely to be sporadic and unpredictable, occurring only when multiple apps attempt to access the shared resources simultaneously. However, for latency-sensitive applications such as e-commerce, gaming, and streaming media, any delay can be intolerable.
In a private data center, the enterprise can control the risks of resource contention by making decisions regarding assignment of VMs across available physical servers, monitoring and balancing loads as needed. However, that level of control is not possible for customers of a shared cloud, as only the provider has visibility across the entire, multi-tenant environment. In a shared cloud environment, customers have little control over where their VMs are loaded and which other customers' workloads are sharing the processor. Furthermore, like an airline overbooking flights to ensure full planes, the cloud service provider has an incentive to "oversubscribe" each physical server. The greater the resource utilization, the more customers are served at a lower cost per customer.
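The oversubscription gamble can be made concrete with a toy Monte Carlo sketch (all figures below are hypothetical): the provider sells more virtual CPUs than the host physically has, betting that tenants are rarely busy at the same moment.

```python
# Toy "noisy neighbor" simulation with assumed, illustrative numbers.
# Contention occurs in any interval where simultaneous demand from the
# tenants' VMs exceeds the host's physical CPU count.

import random

def contention_rate(physical_cpus, vms, busy_prob, trials=100_000, seed=42):
    rng = random.Random(seed)
    contended = 0
    for _ in range(trials):
        # Each single-CPU VM independently demands its CPU this interval.
        demand = sum(rng.random() < busy_prob for _ in range(vms))
        if demand > physical_cpus:
            contended += 1
    return contended / trials

# 16 physical CPUs sold to 32 single-CPU VMs, each busy 30% of the time:
print(f"{contention_rate(16, 32, 0.30):.1%} of intervals see contention")
```

With these assumed numbers contention is rare but nonzero; the customer's problem is that only the provider knows how heavily the host is oversubscribed, so the odds are invisible from inside any one VM.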
For customers eager to avoid the "noisy neighbor" risk, many providers offer a hosted private cloud or virtualized private cloud option. In these services, the server hardware and, perhaps, other infrastructure components are dedicated to a single enterprise. Thus, the virtualized workloads that share physical server resources all belong to the same enterprise, giving the enterprise some control over capacity utilization.
Even if there are no strangers sharing the facility—for example, in a dedicated or private cloud environment—virtualization extracts a toll on available capacity. The "hypervisor tax" is the amount of processing capacity that is consumed by the hypervisor layer. While virtualization providers have enhanced their hypervisor software to be as thin as possible, a hypervisor can still consume a meaningful share of a server's available capacity. For high-performance workloads that require large amounts of capacity, the tax can be significant, even impacting performance of the application.
In addition, as with every additional software layer, the hypervisor layer subjects data to delay; minuscule amounts, to be sure, but noticeable for latency-sensitive workloads.
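A back-of-envelope calculation shows how the hypervisor tax scales with server size; the 5 percent overhead figure below is an assumption chosen for illustration, not a measured value:

```python
# Hypothetical "hypervisor tax" arithmetic: the overhead fraction and
# core counts are illustrative assumptions, not benchmarks.

def usable_capacity(server_capacity, hypervisor_overhead_fraction):
    # Capacity left for customer workloads after the hypervisor's cut.
    return server_capacity * (1 - hypervisor_overhead_fraction)

# A 32-core server under an assumed 5% hypervisor overhead:
print(round(usable_capacity(32, 0.05), 2))  # 30.4 -> about 1.6 cores lost to the tax
# The same server run bare metal, with no hypervisor layer:
print(round(usable_capacity(32, 0.0), 2))   # 32.0 -> the workload sees the full server
```

Because the tax is a fraction, the absolute loss grows with the size of the server, which is why high-capacity workloads are the ones most affected.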
Thus, enterprises face a trade-off in running their high-capacity or high-performance workloads in the cloud: they can trade optimal performance for the efficiency and low cost structure of the virtualized cloud, or trade efficiency and low cost for high performance in a dedicated hosting environment.
But suppose enterprises had the choice of a low-cost, scalable, easily managed hosting option without virtualization? This is the operating principle behind the bare metal cloud.
Table of Contents
1. Bare Metal Cloud: A Non-Virtualized Cloud Option for Performance-Sensitive Workloads
SPIE 2014 #2 - January 17, 2014
2. Virtualization - The Value and the Cost
3. Bare Metal Cloud
4. Challenges to Providing a Bare Metal Cloud Option
5. Bare Metal Cloud Services
6. Stratecast - The Last Word
7. About Stratecast
8. About Frost & Sullivan
To order this report: Bare Metal Cloud: A Non-Virtualized Cloud Option for Performance-Sensitive Workloads
Contact Clare: [email protected]
Intl: +1 339-368-6001