
Quit Stalling: Overcoming the Barriers to Virtualization Deployments

Establishing four key areas of virtualization management

Virtualization carries a well-known and outstanding promise: it can and does deliver significant IT and business benefits, including:

  • Substantial ROI: In hardware consolidation, power, rent, cooling, downtime, etc.
  • Greater agility: With fast IT support for business innovation, transformation, etc.
  • Improved continuity: Through hardware redundancy, site recovery, live migration, etc.
  • And many other business values

However, as more and more data comes through showing enterprises struggling to accelerate the conversion and maturity of their virtualization deployments, it is clear that "outstanding" in this context carries a dual meaning - not just fantastic outcomes, but also undelivered ones.

The Facts Don't Lie - Or Do They?
Actually, the raw figures for virtualization adoption can be very misleading. Every survey and study shows clearly that 75%, 85%, or even 95% of organizations are adopting server virtualization; more and more we see that these same high proportions are deploying virtualization for production applications; and we see the volume of new servers and new applications utilizing virtualization breaking well past the 50% range.

However, these stats do not tell the whole story.

What's missing is how and why virtualization deployments are actually stalling within a majority of enterprises. Typically, as a virtualization deployment reaches around 30-40% of servers, the resources and processes that got IT to that point can no longer scale. As a result, the deployment slows down or stops altogether. This is called "virtual stall" - the inability to overcome the "tipping points" needed to move the needle on virtualization maturity.

I have cited data throughout 2010 that shows this - such as the CDW Server Virtualization Life Cycle Report, which showed that only 34% of total server infrastructure consists of virtual servers; or the Forrester Research study from May 2010 (conducted for CA), which showed that just 30% of servers on average are virtualized.

Virtual Stall - Fact or Fiction
Even so, many people cannot believe that virtual stall exists.

The outstanding promise (and to be fair, the substantial success) of virtualization puts blinkers on even the most assiduous observers. They see deployments happening, and assume virtual stall is just representative of a point in time on the virtualization journey, and that we are moving the needle every day. They see organizations that are 60%, 70%, or even 80% virtual and assume that virtual stall is really just a myth. They see organizations in different geographies and assume that virtual stall is just a U.S. concern. They see virtual stall as entirely avoidable, "simply" by applying the right preparation and planning.

Unfortunately, the truth is that most organizations are not overcoming virtual stall; most organizations are stuck at much lower rates of virtualization; virtual stall does affect organizations from around the world; and organizations cannot (at the very least do not) always overcome it simply with better plans.

The proof is in how consistent the indicators are.

Here Come the Facts
For example, the CDW-G 2010 Government Virtualization Report in July 2010 showed that an average of just 40% of total government infrastructure consists of virtual servers. Research conducted in Europe by leading industry analyst Kuppinger Cole in November 2010 shows that only 34% of organizations have deployed server virtualization for more than 50% of their systems. A new study by Cisco released in December 2010 polled organizations in the United States, Europe and India, and two-thirds of respondents said that less than half of their environment is virtualized. Even a CA Technologies survey conducted at the November 2010 Gartner ITxpo conference in Europe - a sophisticated audience of mostly large enterprises with access to good planning advice, which one would expect to show much greater virtualization maturity - still showed over half of the attendee respondents are less than 50% virtualized.

What Causes Virtual Stall?
The causes are legion, and often hard to overcome, but they are not all difficult to identify. Some key reasons include:

  • Costs: Of new hardware (yes, virtualization often needs new hardware - servers, storage, support), virtualization licenses (even though many are looking at free alternatives to VMware), OS and application licenses (see next bullet), staff resourcing and training, and more.
  • Vendor licensing: Oracle is often cited for not certifying its products on non-Oracle virtualization platforms, but others like Microsoft and many smaller vendors are also guilty.
  • Staffing: Staff with virtualization certifications cost more to start with; but more importantly, virtualization resources cannot scale to manage two, three, or four times the VMs, and still apply the higher-level skills needed to virtualize more complex and mission-critical applications.
  • Business resistance: This has proven to be an issue time and again, where business owners do not allow IT to virtualize 'their' application, and resist any changes that could end up sharing 'their' server with other departments.
  • Security and compliance: IT staff may dismiss business owners' fear over virtualization security and compliance, but issues do exist in specific new threats (e.g. blue pill, red pill) and more mundane vulnerabilities (poor functional isolation, privileged user access, auditability, etc.).
  • Poor visibility: As virtualization gets more complex, it becomes increasingly hard to locate servers, detect problems, isolate faults, diagnose causes, or determine remediation, because virtual servers are by definition abstracted, and both add and hide a layer of complexity.
  • Increased dynamics: Virtualization allows extreme workload mobility, but this is also a threat, as rapid motion introduces its own problems (security of collocation, lack of isolation, problem isolation challenges, migration loops, etc.) and additional complexity.
  • Tool sophistication: As virtualization gets more broadly deployed, you cannot just use the same old tools, as they lack virtualization-specific information and capabilities; IT must start to grapple with a lot more manual activity, or swap out their management tools.
  • Silos of control: In a physical world the old silos of control (servers, storage, networking, etc.) worked, if not well, then acceptably; in a virtual world these barriers break down, so in addition to LOB politics, IT has to grapple with internal politics and with providing the appropriate skills across silos.

Of course, this is not an exhaustive list. Other issues include facilities constraints, lack of insight into available capacity, existing "VM sprawl," poor suitability of some applications, lack of support for internally developed applications, added complexity of heterogeneous environments, high utilization of some existing servers, and poor management processes.

How to Solve - or Avoid - Virtual Stall
Certainly there are no silver bullets. However, some solutions are easy to identify, even though they may not always be easy to implement. Four key areas that IT needs to address include:

  • Visibility: IT must implement technologies and processes to provide visibility into the whole environment (systems, applications, middleware, hosts, guests, networks, storage, etc.). This includes integrated capabilities for deep discovery and inventory recording, application and system dependency mapping, resource capacity and utilization recording, identification of workloads and users, detection of configuration settings and drift, and detection of data leakage and data loss (a minimal sketch of this kind of inventory and drift check follows this list). This will help to achieve (and prove) security, compliance, capacity, availability, response, and SLA objectives by allowing IT to align performance, security, resources, and compliance with business policy, and enable more streamlined operations activity (e.g., problem triage and response) even in a more dynamic environment, to provide line of business confidence and reduce costs.
  • Control: Beyond seeing the environment and its problems, IT must take control with new technologies and processes to govern virtual environments. This should include capabilities that are integrated and broadly accessible across IT silos, to manage replication, migration, and continuity; to restrict access for privileged users; to reduce or eliminate the "rogue" VM deployments that lead to VM sprawl; to continuously manage provisioning, capacity, performance, and configuration; and to control allocation of resources (facilities, servers, storage, software, licenses, etc.) according to business policy. This helps to reduce IT staff skill requirements and costs, diminish the impact of IT silos, manage rapid migrations more effectively, and provide sufficient controls to convince business owners to extend virtualization to tier 1 applications.
  • Assurance: To truly provide guarantees to business owners, IT needs to provide assurance that service performance will meet response, continuity, security, compliance, audit, experience, and uptime SLAs. Solutions can do this by providing rich visibility into end-to-end infrastructure and application performance, including traffic and response times; by tracking events, tracing incidents, identifying symptoms, and isolating root causes; and above all, by executing rapid remediation actions in real time to not just correct problems but to prevent them (see the second sketch after this list). This builds more trust with business owners by meeting (and even exceeding) their compliance, satisfaction, performance, and approval goals, while also reducing staff costs (on triage, etc.).
  • Automation: To address staffing issues, plus a host of compliance, audit, error reduction, and other challenges, IT must look to automate more mundane tasks, connect "known good tasks" into repeatable processes, and execute "known good processes" automatically too (see the final sketch after this list). Automating processes for provisioning/deprovisioning, configuration detection, patch and lifecycle remediation, monitoring and alerting, problem tracking and remediation, and end-user self-service will reduce the skill burden on highly trained staff, provide built-in documentation even for complex processes, allow junior and less-trained staff to do more complex work, reduce or eliminate human errors, and add security through functional isolation and process auditability.
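
To make the visibility and control points above more concrete, here is a minimal sketch (in Python) of the kind of check such tooling performs: walk a discovered inventory, record utilization, and flag both "rogue" VMs missing from the CMDB and configuration drift from an approved baseline. The inventory records, CMDB contents, and "golden" baseline shown here are invented stand-ins for what a real discovery tool would report, not any particular product's API.

    # Illustrative sketch only: the data below stands in for whatever a
    # hypervisor API or discovery tool actually reports; all names are
    # hypothetical.

    discovered_vms = [
        {"host": "esx-01", "vm": "erp-db-01", "cpu_pct": 78, "mem_pct": 83,
         "config": {"vnic_isolation": True, "audit_logging": True}},
        {"host": "esx-01", "vm": "test-web-17", "cpu_pct": 4, "mem_pct": 11,
         "config": {"vnic_isolation": False, "audit_logging": False}},
    ]

    cmdb_registered = {"erp-db-01"}     # what the CMDB says should exist
    golden_config = {"vnic_isolation": True, "audit_logging": True}

    def report(vms, registered, baseline):
        """Flag rogue VMs (not in the CMDB) and drift from the approved baseline."""
        for vm in vms:
            issues = []
            if vm["vm"] not in registered:
                issues.append("ROGUE (not in CMDB)")
            drift = {k: v for k, v in vm["config"].items() if baseline.get(k) != v}
            if drift:
                issues.append("DRIFT %s" % drift)
            print("%s on %s: cpu=%d%% mem=%d%% %s"
                  % (vm["vm"], vm["host"], vm["cpu_pct"], vm["mem_pct"],
                     "; ".join(issues) or "OK"))

    report(discovered_vms, cmdb_registered, golden_config)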
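
The assurance area, similarly, comes down to comparing observed service behavior against SLA objectives and acting before a breach occurs. The following sketch assumes hard-coded response-time samples and a hypothetical add_capacity() remediation hook purely for illustration; in a real deployment the samples would come from end-to-end monitoring and the hook would allocate resources, migrate the workload, or open an incident.

    # Illustrative sketch only: the samples, the SLA threshold, and the
    # add_capacity() hook are all hypothetical.

    SLA_RESPONSE_MS = 500      # agreed end-to-end response-time objective
    WARN_FRACTION = 0.8        # act before the SLA is actually breached

    samples_ms = {"order-entry": [210, 340, 460, 495], "reporting": [120, 150, 180]}

    def add_capacity(service):
        # Placeholder for a real remediation action (allocate resources,
        # migrate the workload, open an incident, etc.).
        print("remediation triggered for %s" % service)

    for service, samples in samples_ms.items():
        worst = max(samples)
        if worst >= SLA_RESPONSE_MS:
            print("%s: SLA breached (%d ms)" % (service, worst))
            add_capacity(service)
        elif worst >= SLA_RESPONSE_MS * WARN_FRACTION:
            print("%s: approaching SLA (%d ms), remediating proactively" % (service, worst))
            add_capacity(service)
        else:
            print("%s: within SLA (%d ms)" % (service, worst))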
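
Finally, connecting "known good tasks" into repeatable, auditable processes - the heart of the automation area - can be sketched as an ordered pipeline of steps that records an audit trail. Every function below is a hypothetical stub; a real implementation would call actual provisioning, configuration, and monitoring tools.

    # Illustrative sketch only: each step is a stub standing in for a
    # "known good task"; all function names are hypothetical.
    import datetime

    def clone_from_template(name): print("cloned %s from approved template" % name)
    def apply_baseline_config(name): print("applied compliance baseline to %s" % name)
    def register_in_cmdb(name): print("registered %s in the CMDB" % name)
    def enable_monitoring(name): print("enabled monitoring and alerting for %s" % name)

    KNOWN_GOOD_PROVISIONING = [
        clone_from_template,
        apply_baseline_config,
        register_in_cmdb,
        enable_monitoring,
    ]

    def run_process(steps, vm_name):
        """Run the known-good steps in order, keeping an audit trail of each one."""
        audit = []
        for step in steps:
            step(vm_name)
            audit.append((datetime.datetime.now().isoformat(), step.__name__, vm_name))
        return audit   # built-in documentation of what was done, and when

    audit_trail = run_process(KNOWN_GOOD_PROVISIONING, "web-42")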

The Bottom Line
Enhancing your virtualization management maturity by implementing these technologies and processes will help to eliminate virtual stall. Solutions that support virtual, physical, and cloud environments along with server, network, storage, application, and security needs; that work across multiple heterogeneous virtualization platforms, technologies, and vendors; and that solve specific issues today while remaining scalable enough to extend that investment into a strategic solution, will help to overcome the virtualization "tipping points" that lead to virtual stall.

Of course, some elements are simply beyond IT's direct control (e.g., vendor licensing), while others are not even a question of technology (e.g., poor existing processes). Moreover, virtualization maturity is not just a question of how many VMs you have, or what your server-to-VM ratio is - virtualization maturity is also a question of how well you use the VMs you have, how sophisticated the virtualization deployment is, and more.

Nevertheless, by establishing these four key areas of virtualization management - visibility, control, assurance, and automation - most organizations will be in a much better position to beat virtual stall, and deliver on the true outstanding promise of virtualization.


CA Technologies provides solutions that deliver virtualization visibility, control, assurance, and automation. For more information, please visit http://ca.com/virtualization.

More Stories By Andi Mann

Andi Mann is vice president of Strategic Solutions at CA Technologies. With more than 20 years' experience across four continents, he has deep expertise in enterprise software on cloud, mainframe, midrange, server and desktop systems. He has worked within IT departments for governments and corporations, from small businesses to global multi-nationals; with several large enterprise software vendors; and as a leading industry analyst advising enterprises, governments, and IT vendors – from startups to the world's largest companies. Andi is a co-author of the popular handbook, 'Visible Ops – Private Cloud'; he blogs at 'Andi Mann – Übergeek' (http://pleasediscuss.com/andimann), and tweets as @AndiMann.
