Quit Stalling: Overcoming the Barriers to Virtualization Deployments

Establishing four key areas of virtualization management

Virtualization carries a well-known and outstanding promise: it can and does deliver significant IT and business benefits, including:

  • Substantial ROI: In hardware consolidation, power, rent, cooling, downtime, etc.
  • Greater agility: With fast IT support for business innovation, transformation, etc.
  • Improved continuity: Through hardware redundancy, site recovery, live migration, etc.
  • And many other business values

However, with more and more data showing that enterprises struggle to accelerate the conversion and maturity of their virtualization deployments, it is clear that "outstanding" in this context carries a dual meaning - not just fantastic outcomes, but also undelivered ones.

The Facts Don't Lie - Or Do They?
Actually, the raw figures for virtualization adoption can be very misleading. Every survey and study shows clearly that 75%, 85%, or even 95% of organizations are adopting server virtualization; more and more we see that these same high proportions are deploying virtualization for production applications; and we see the volume of new servers and new applications utilizing virtualization breaking well past the 50% range.

However, these stats do not tell the whole story.

What's missing is how and why virtualization deployments are actually stalling within a majority of enterprises. Typically as a virtualization deployment reaches around 30-40% of servers, IT is unable to scale up with the resources and processes that got them to that point. As a result, a virtualization deployment slows down or stops altogether. This is called "virtual stall" - the inability to overcome the "tipping points" needed to move the needle on virtualization maturity.

I have cited data throughout 2010 that shows this - such as the CDW Server Virtualization Life Cycle Report that showed only 34% of total server infrastructure consists of virtual servers; or the Forrester Research from May this year (conducted for CA) that showed just 30% of servers on average are virtualized.

Virtual Stall - Fact or Fiction?
Even so, many people cannot believe that virtual stall exists.

The outstanding promise (and to be fair, the substantial success) of virtualization puts blinkers on even the most assiduous observers. They see deployments happening, and assume virtual stall is just representative of a point in time on the virtualization journey, and that we are moving the needle every day. They see organizations that are 60%, 70%, or even 80% virtual and assume that virtual stall is really just a myth. They see organizations in different geographies and assume that virtual stall is just a U.S. concern. They see virtual stall as entirely avoidable, "simply" by applying the right preparation and planning.

Unfortunately, the truth is that most organizations are not overcoming virtual stall; most organizations are stuck at much lower rates of virtualization; virtual stall does affect organizations from around the world; and organizations cannot (at the very least do not) always overcome it simply with better plans.

The proof is in how consistent the indicators are.

Here Come the Facts
For example, the CDW-G 2010 Government Virtualization Report in July 2010 showed that an average of just 40% of total government infrastructure consists of virtual servers. Research conducted in Europe by leading industry analyst Kuppinger Cole in November 2010 shows that only 34% of organizations have deployed server virtualization for more than 50% of their systems. A new study by Cisco released in December 2010 polled organizations in the United States, Europe and India, and two-thirds of respondents said that less than half of their environment is virtualized. Even a CA Technologies survey conducted at the November 2010 Gartner ITxpo conference in Europe - a sophisticated audience of mostly large enterprises with access to good planning advice, which one would expect to show much greater virtualization maturity - still showed over half of the attendee respondents are less than 50% virtualized.

What Causes Virtual Stall?
The causes are legion, and often hard to overcome, but they are not all difficult to identify. Some key reasons include:

  • Costs: Of new hardware (yes, virtualization often needs new hardware - servers, storage, support), virtualization licenses (even though many are looking at free alternatives to VMware), OS and application licenses (see next bullet), staff resourcing and training, and more.
  • Vendor licensing: Oracle is often cited for not certifying its products on non-Oracle virtualization platforms, but others like Microsoft and many smaller vendors are also guilty.
  • Staffing: Staff with virtualization certifications cost more to start with; but more importantly, virtualization resources cannot scale to manage two, three, or four times the VMs, and still apply the higher-level skills needed to virtualize more complex and mission-critical applications.
  • Business resistance: This has proven to be an issue time and again, where business owners do not allow IT to virtualize "their" application, and resist any changes that could end up sharing "their" server with other departments.
  • Security and compliance: IT staff may dismiss business owners' fear over virtualization security and compliance, but issues do exist in specific new threats (e.g. blue pill, red pill) and more mundane vulnerabilities (poor functional isolation, privileged user access, auditability, etc.).
  • Poor visibility: As virtualization gets more complex, it becomes increasingly hard to locate servers, detect problems, isolate faults, diagnose causes, or determine remediation, because virtual servers are by definition abstracted, and both add and hide a layer of complexity.
  • Increased dynamics: Virtualization allows extreme workload mobility, but this is also a threat, as rapid motion introduces its own problems (security of collocation, lack of isolation, problem isolation challenges, migration loops, etc.) and additional complexity.
  • Tool sophistication: As virtualization gets more broadly deployed, you cannot just use the same old tools, as they lack virtualization-specific information and capabilities; IT must start to grapple with a lot more manual activity, or swap out their management tools.
  • Silos of control: In a physical world the old silos of control (servers, storage, networking, etc.) worked, if not well, then acceptably; in a virtual world these barriers break down, so in addition to line-of-business politics, IT has to grapple with internal politics and with providing the appropriate cross-silo skills.

Of course, this is not an exhaustive list. Other issues include facilities constraints, lack of insight into available capacity, existing "VM sprawl," poor suitability of some applications, lack of support for internally developed applications, added complexity of heterogeneous environments, high utilization of some existing servers, and poor management processes.
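To make the visibility gap behind "VM sprawl" concrete, here is a minimal Python sketch. All names and data are hypothetical; a real tool would pull these lists from hypervisor and CMDB APIs rather than hard-coding them. It simply reconciles the VMs actually running against an approved inventory and flags rogue deployments:

```python
# Minimal sketch: reconcile a hypervisor's running-VM list against an
# approved CMDB inventory to flag "rogue" VM deployments.
# All names and data below are hypothetical illustrations.

def find_rogue_vms(hypervisor_vms, approved_vms):
    """Return VMs running on the hypervisor but absent from the approved inventory."""
    approved = set(approved_vms)
    return sorted(vm for vm in hypervisor_vms if vm not in approved)

# Hypothetical example data:
running = ["web01", "web02", "db01", "test-jsmith-01"]
approved = ["web01", "web02", "db01"]

print(find_rogue_vms(running, approved))  # -> ['test-jsmith-01']
```

Trivial as it looks, this reconciliation is exactly what becomes hard at scale without integrated discovery: the "running" list must come from every host and cluster, and the "approved" list from a CMDB that is actually kept current.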

How to Solve - or Avoid - Virtual Stall
Certainly there are no silver bullets. However, some solutions are easy to identify, even though they may not always be easy to implement. Four key areas that IT needs to address include:

  • Visibility: IT must implement technologies and processes to provide visibility into the whole environment (systems, applications, middleware, hosts, guests, networks, storage, etc.). This includes integrated capabilities for deep discovery and inventory recording, application and system dependency mapping, resource capacity and utilization recording, identification of workloads and users, detection of configuration settings and drift, and detection of data leakage and loss. This will help IT achieve (and prove) security, compliance, capacity, availability, response, and SLA objectives by aligning performance, security, resources, and compliance with business policy, and will enable more streamlined operations (e.g., problem triage and response) even in a dynamic environment, building line-of-business confidence and reducing costs.
  • Control: Beyond seeing the environment and its problems, IT must take control with new technologies and processes to govern virtual environments. This should include capabilities that are integrated and broadly accessible across IT silos, to manage replication, migration, and continuity; to restrict access for privileged users; to reduce or eliminate the "rogue" VM deployments that lead to VM sprawl; to continuously manage provisioning, capacity, performance, and configuration; and to control allocation of resources (facilities, servers, storage, software, licenses, etc.) according to business policy. This helps to reduce IT staff skill requirements and costs, diminish the impact of IT silos, manage rapid migrations more effectively, and provide sufficient controls to convince business owners to expand tier 1 applications.
  • Assurance: To truly provide guarantees to business owners, IT needs to provide assurance that service performance will meet response, continuity, security, compliance, audit, experience, and uptime SLAs. Solutions can do this by providing rich visibility into end-to-end infrastructure and application performance, including traffic and response times; by tracking events, tracing incidents, identifying symptoms, and isolating root causes; and above all, by executing rapid remediation actions in real time to not just correct problems but to prevent them. This is going to build more trust from business owners, by meeting (and even exceeding) their compliance, satisfaction, performance, and approval goals, while also reducing staff costs (on triage etc.).
  • Automation: To address staffing issues, plus a host of compliance, audit, error reduction, and other challenges, IT must look to automate more mundane tasks, connect "known good tasks" into repeatable processes, and execute "known good processes" automatically too. Automating processes for provisioning/deprovisioning, configuration detection, patch and lifecycle remediation, monitoring and alerting, problem tracking and remediation, and end-user self-service will reduce the skill burden on highly trained staff, provide built-in documentation even for complex processes, allow junior and less-trained staff to do more complex work, reduce or eliminate human errors, and add security through functional isolation and process auditability.
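As a small illustration of the automation idea above - connecting "known good tasks" into a repeatable, auditable process - here is a hedged Python sketch. The task names and steps are hypothetical stand-ins for real provisioning, patching, and monitoring calls:

```python
# Sketch of chaining "known good tasks" into a repeatable process with a
# built-in audit trail. Task names and behavior are hypothetical; a real
# runbook engine would invoke provisioning, patching, and monitoring APIs.

def run_process(tasks, context):
    """Execute tasks in order, recording an audit trail of each step."""
    audit = []
    for task in tasks:
        result = task(context)
        audit.append((task.__name__, result))
    return audit

# Hypothetical "known good tasks" for provisioning a VM:
def allocate_capacity(ctx):
    ctx["host"] = "host-a"  # e.g. pick the host with free capacity
    return "ok"

def deploy_vm(ctx):
    ctx["vm"] = f"vm-on-{ctx['host']}"
    return "ok"

def register_monitoring(ctx):
    return f"monitoring {ctx['vm']}"

trail = run_process([allocate_capacity, deploy_vm, register_monitoring], {})
for step, result in trail:
    print(step, "->", result)
```

The audit trail is the point: each run documents itself, so junior staff can execute the process safely and auditors can verify what happened, which is precisely the error-reduction and auditability benefit described above.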

The Bottom Line
Enhancing your virtualization management maturity by implementing these technologies and processes will help to eliminate virtual stall. Solutions that support virtual, physical, cloud, server, network, storage, application, and security needs; that span multiple heterogeneous virtualization platforms, technologies, and vendors; and that solve specific issues today while remaining scalable enough to extend that investment into a strategic solution, will help to overcome the virtualization "tipping points" that lead to virtual stall.

Of course, some elements are simply beyond IT's direct control (e.g., vendor licensing), while others are not even a question of technology (e.g., poor existing processes). Moreover, virtualization maturity is not just a question of how many VMs you have, or what your server-to-VM ratio is - virtualization maturity is also a question of how well you use the VMs you have, how sophisticated the virtualization deployment is, and more.

Nevertheless, by establishing these four key areas of virtualization management - visibility, control, assurance, and automation - most organizations will be in a much better position to beat virtual stall, and deliver on the true outstanding promise of virtualization.


CA Technologies provides solutions that deliver virtualization visibility, control, assurance, and automation. For more information, please visit http://ca.com/virtualization.

More Stories By Andi Mann

Andi Mann is vice president of Strategic Solutions at CA Technologies. With more than 20 years' experience across four continents, he has deep expertise in enterprise software on cloud, mainframe, midrange, server and desktop systems. He has worked within IT departments for governments and corporations, from small businesses to global multi-nationals; with several large enterprise software vendors; and as a leading industry analyst advising enterprises, governments, and IT vendors - from startups to the world's largest companies. Andi is a co-author of the popular handbook, 'Visible Ops - Private Cloud'; he blogs at 'Andi Mann - Übergeek' (http://pleasediscuss.com/andimann), and tweets as @AndiMann.


