Quit Stalling: Overcoming the Barriers to Virtualization Deployments

Establishing four key areas of virtualization management

Virtualization carries a well-known and outstanding promise - that it can and does deliver significant IT and business benefits, including:

  • Substantial ROI: In hardware consolidation, power, rent, cooling, downtime, etc.
  • Greater agility: With fast IT support for business innovation, transformation, etc.
  • Improved continuity: Through hardware redundancy, site recovery, live migration, etc.
  • And many other business values

However, with more and more data showing that enterprises are struggling to accelerate the conversion and maturity of their virtualization deployments, it is clear that "outstanding" in this context carries a dual meaning - not just fantastic outcomes, but also outcomes as yet undelivered.

The Facts Don't Lie - Or Do They?
Actually, the raw figures for virtualization adoption can be very misleading. Every survey and study clearly shows that 75%, 85%, or even 95% of organizations are adopting server virtualization; increasingly, these same high proportions are deploying virtualization for production applications; and the volume of new servers and new applications using virtualization is breaking well past the 50% mark.

However, these stats do not tell the whole story.

What's missing is how and why virtualization deployments are actually stalling within a majority of enterprises. Typically, as a virtualization deployment reaches around 30-40% of servers, IT can no longer scale up using the resources and processes that got it to that point. As a result, the deployment slows down or stops altogether. This is called "virtual stall" - the inability to overcome the "tipping points" needed to move the needle on virtualization maturity.

I have cited data throughout 2010 that shows this - such as the CDW Server Virtualization Life Cycle Report, which found that virtual servers make up only 34% of total server infrastructure, or the Forrester Research study from May of this year (conducted for CA), which found that just 30% of servers on average are virtualized.

Virtual Stall - Fact or Fiction?
Even so, many people cannot believe that virtual stall exists.

The outstanding promise (and to be fair, the substantial success) of virtualization puts blinkers on even the most assiduous observers. They see deployments happening, and assume virtual stall is just representative of a point in time on the virtualization journey, and that we are moving the needle every day. They see organizations that are 60%, 70%, or even 80% virtual and assume that virtual stall is really just a myth. They see organizations in different geographies and assume that virtual stall is just a U.S. concern. They see virtual stall as entirely avoidable, "simply" by applying the right preparation and planning.

Unfortunately, the truth is that most organizations are not overcoming virtual stall; most organizations are stuck at much lower rates of virtualization; virtual stall affects organizations around the world; and organizations cannot - or at the very least, do not - always overcome it simply with better plans.

The proof is in how consistent the indicators are.

Here Come the Facts
For example, the CDW-G 2010 Government Virtualization Report, published in July 2010, showed that virtual servers make up an average of just 40% of total government infrastructure. Research conducted in Europe by leading industry analyst Kuppinger Cole in November 2010 showed that only 34% of organizations have deployed server virtualization for more than 50% of their systems. A new study by Cisco, released in December 2010, polled organizations in the United States, Europe, and India; two-thirds of respondents said that less than half of their environment is virtualized. Even a CA Technologies survey conducted at the November 2010 Gartner ITxpo conference in Europe - a sophisticated audience of mostly large enterprises with access to good planning advice, which one would expect to show much greater virtualization maturity - showed that over half of the attendee respondents are less than 50% virtualized.

What Causes Virtual Stall?
The causes are legion, and often hard to overcome, but they are not all difficult to identify. Some key reasons include:

  • Costs: New hardware (yes, virtualization often needs new hardware - servers, storage, support), virtualization licenses (even though many are looking at free alternatives to VMware), OS and application licenses (see the next bullet), staff resourcing and training, and more.
  • Vendor licensing: Oracle is often cited for not certifying its products on non-Oracle virtualization platforms, but others like Microsoft and many smaller vendors are also guilty.
  • Staffing: Staff with virtualization certifications cost more to start with; but more importantly, virtualization resources cannot scale to manage two, three, or four times the number of VMs while still applying the higher-level skills needed to virtualize more complex and mission-critical applications.
  • Business resistance: This has proven to be an issue time and again, where business owners do not allow IT to virtualize 'their' application, and resist any changes that could end up sharing 'their' server with other departments.
  • Security and compliance: IT staff may dismiss business owners' fears about virtualization security and compliance, but issues do exist, both in specific new threats (e.g., blue pill, red pill) and in more mundane vulnerabilities (poor functional isolation, privileged user access, auditability, etc.).
  • Poor visibility: As virtualization gets more complex, it becomes increasingly hard to locate servers, detect problems, isolate faults, diagnose causes, or determine remediation, because virtual servers are by definition abstracted, and both add and hide a layer of complexity.
  • Increased dynamics: Virtualization allows extreme workload mobility, but this is also a threat, as rapid motion introduces its own problems (security of co-location, lack of isolation, problem-isolation challenges, migration loops, etc.) and additional complexity.
  • Tool sophistication: As virtualization gets more broadly deployed, IT cannot just use the same old tools, as they lack virtualization-specific information and capabilities; IT must either grapple with a lot more manual activity or swap out its management tools.
  • Silos of control: In a physical world, the old silos of control (servers, storage, networking, etc.) worked, if not well, then acceptably; in a virtual world, these barriers break down, so in addition to LOB politics, IT has to grapple with its own internal politics and with providing the appropriate skills.

Of course, this is not an exhaustive list. Other issues include facilities constraints, lack of insight into available capacity, existing "VM sprawl," poor suitability of some applications, lack of support for internally developed applications, added complexity of heterogeneous environments, high utilization of some existing servers, and poor management processes.

How to Solve - or Avoid - Virtual Stall
Certainly there are no silver bullets. However, some solutions are easy to identify, even though they may not always be easy to implement. Four key areas that IT needs to address include:

  • Visibility: IT must implement technologies and processes that provide visibility into the whole environment (systems, applications, middleware, hosts, guests, networks, storage, etc.). This includes integrated capabilities for deep discovery and inventory recording, application and system dependency mapping, resource capacity and utilization recording, identification of workloads and users, detection of configuration settings and drift, and detection of data leakage and data loss. This helps IT achieve (and prove) security, compliance, capacity, availability, response, and SLA objectives by aligning performance, security, resources, and compliance with business policy, and enables more streamlined operations (e.g., problem triage and response) even in a more dynamic environment, providing line-of-business confidence and reducing costs. (A minimal drift-detection sketch follows this list.)
  • Control: Beyond seeing the environment and its problems, IT must take control with new technologies and processes that govern virtual environments. This should include capabilities that are integrated and broadly accessible across IT silos: managing replication, migration, and continuity; restricting access for privileged users; reducing or eliminating the "rogue" VM deployments that lead to VM sprawl; continuously managing provisioning, capacity, performance, and configuration; and controlling the allocation of resources (facilities, servers, storage, software, licenses, etc.) according to business policy. This helps reduce IT staff skill requirements and costs, diminish the impact of IT silos, manage rapid migrations more effectively, and provide sufficient controls to convince business owners to extend virtualization to tier 1 applications. (See the policy-gated provisioning sketch below.)
  • Assurance: To truly provide guarantees to business owners, IT needs to provide assurance that service performance will meet response, continuity, security, compliance, audit, experience, and uptime SLAs. Solutions can do this by providing rich visibility into end-to-end infrastructure and application performance, including traffic and response times; by tracking events, tracing incidents, identifying symptoms, and isolating root causes; and above all, by executing rapid remediation actions in real time to prevent problems, not just correct them. This builds trust with business owners by meeting (and even exceeding) their compliance, satisfaction, performance, and approval goals, while also reducing staff costs (on triage, etc.). (See the SLA monitoring sketch below.)
  • Automation: To address staffing issues, plus a host of compliance, audit, error-reduction, and other challenges, IT must look to automate the more mundane tasks, connect "known good tasks" into repeatable processes, and execute "known good processes" automatically too. Automating processes for provisioning/deprovisioning, configuration detection, patch and lifecycle remediation, monitoring and alerting, problem tracking and remediation, and end-user self-service will reduce the skill burden on highly trained staff, provide built-in documentation even for complex processes, allow junior and less-trained staff to do more complex work, reduce or eliminate human errors, and add security through functional isolation and process auditability. (See the runbook sketch below.)
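
To make the visibility point concrete, here is a minimal sketch of configuration-drift detection in Python. The VM names, fields, and hard-coded baseline are hypothetical; a real deployment would pull current state from hypervisor APIs or a CMDB rather than a dictionary.

```python
"""Minimal configuration-drift check - one small piece of 'visibility'."""

from typing import Any

# Hypothetical recorded baseline for each VM (host, vCPUs, memory, patch level).
BASELINE: dict[str, dict[str, Any]] = {
    "vm-web-01": {"host": "esx-03", "vcpus": 2, "memory_mb": 4096, "patch": "baseline-7"},
    "vm-db-01":  {"host": "esx-01", "vcpus": 8, "memory_mb": 32768, "patch": "baseline-7"},
}

def detect_drift(current: dict[str, dict[str, Any]]) -> list[str]:
    """Compare observed VM settings against the baseline and report drift."""
    findings = []
    for vm, expected in BASELINE.items():
        observed = current.get(vm)
        if observed is None:
            findings.append(f"{vm}: missing from inventory")
            continue
        for key, value in expected.items():
            if observed.get(key) != value:
                findings.append(f"{vm}: {key} drifted from {value!r} to {observed.get(key)!r}")
    # VMs present but not in the baseline are candidates for 'VM sprawl'.
    for vm in current.keys() - BASELINE.keys():
        findings.append(f"{vm}: not in baseline (unrecorded/rogue VM)")
    return findings

findings = detect_drift({"vm-web-01": {"host": "esx-04", "vcpus": 2,
                                       "memory_mb": 4096, "patch": "baseline-7"}})
# -> reports the vm-web-01 host drift, plus vm-db-01 missing from inventory
```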
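For the control area, here is a sketch of policy-gated provisioning, again with hypothetical owners, quotas, and tier rules: the idea is simply that every request is checked against business policy before a VM is created, which is one way to curb rogue VMs and sprawl.

```python
"""Sketch of policy-gated VM provisioning - one small piece of 'control'."""

from dataclasses import dataclass

@dataclass
class VmRequest:
    owner: str          # line of business requesting the VM
    vcpus: int
    memory_mb: int
    tier: int = 2       # 1 = mission-critical

# Hypothetical per-LOB resource quotas set by business policy, and current usage.
QUOTAS = {"finance": {"vcpus": 64, "memory_mb": 262144}}
USAGE  = {"finance": {"vcpus": 58, "memory_mb": 201728}}

def approve(req: VmRequest) -> tuple[bool, str]:
    """Approve only requests that satisfy policy and stay inside quota."""
    quota, used = QUOTAS.get(req.owner), USAGE.get(req.owner, {})
    if quota is None:
        return False, "unknown owner: route to manual review (prevents rogue VMs)"
    if req.tier == 1:
        return False, "tier 1 workload: requires change-board sign-off"
    if used.get("vcpus", 0) + req.vcpus > quota["vcpus"]:
        return False, "vCPU quota exceeded"
    if used.get("memory_mb", 0) + req.memory_mb > quota["memory_mb"]:
        return False, "memory quota exceeded"
    return True, "approved"

print(approve(VmRequest("finance", vcpus=8, memory_mb=16384)))
# -> (False, 'vCPU quota exceeded')
```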
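For assurance, here is a sketch of the monitor-and-remediate loop described above. The SLA threshold, window size, and remediation stub are illustrative assumptions; the point is that remediation fires proactively, before the SLA is actually breached.

```python
"""Sketch of an SLA watch loop - one small piece of 'assurance'."""

from collections import deque
from statistics import mean

SLA_MS = 500          # response-time objective (illustrative)
WARN_FRACTION = 0.8   # remediate proactively at 80% of the SLA

class ResponseMonitor:
    def __init__(self, window: int = 20):
        self.samples = deque(maxlen=window)   # rolling window of response times

    def record(self, response_ms: float) -> None:
        self.samples.append(response_ms)
        avg = mean(self.samples)
        if avg >= SLA_MS:
            self.remediate(f"SLA breached: avg {avg:.0f} ms")
        elif avg >= SLA_MS * WARN_FRACTION:
            self.remediate(f"approaching SLA: avg {avg:.0f} ms")

    def remediate(self, reason: str) -> None:
        # Placeholder: a real system might migrate the VM to a less
        # contended host, or allocate more resources, at this point.
        print(f"remediation triggered ({reason})")

mon = ResponseMonitor(window=5)
for ms in (320, 380, 400, 450, 480):
    mon.record(ms)   # the proactive warning fires as the average climbs past 400 ms
```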
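Finally, for automation, here is a sketch of chaining "known good tasks" into a repeatable, auditable runbook. The task bodies are stubs; in practice each step would call the provisioning, configuration, and monitoring tooling, and the log doubles as built-in process documentation.

```python
"""Sketch of a repeatable runbook - one small piece of 'automation'."""

import logging
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("runbook")

def provision_vm():      log.info("provisioned VM")           # stub task
def apply_baseline():    log.info("applied config baseline")  # stub task
def enable_monitoring(): log.info("monitoring enabled")       # stub task

RUNBOOK: list[tuple[str, Callable[[], None]]] = [
    ("provision", provision_vm),
    ("configure", apply_baseline),
    ("monitor",   enable_monitoring),
]

def run(runbook) -> bool:
    """Execute steps in order; the log is the audit trail and documentation."""
    for name, task in runbook:
        try:
            task()
        except Exception:
            log.exception("step %s failed; halting for review", name)
            return False   # fail closed rather than continue half-done
    return True

if __name__ == "__main__":
    run(RUNBOOK)
```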

The Bottom Line
Enhancing your virtualization management maturity by implementing these technologies and processes will help to eliminate virtual stall. Solutions that support virtual, physical, and cloud environments across server, network, storage, application, and security needs; that span multiple heterogeneous virtualization platforms, technologies, and vendors; and that solve specific issues today while scaling into a strategic solution, will help to overcome the virtualization "tipping points" that lead to virtual stall.

Of course, some elements are simply beyond IT's direct control (e.g., vendor licensing), while others are not even a question of technology (e.g., poor existing processes). Moreover, virtualization maturity is not just a question of how many VMs you have, or what your server-to-VM ratio is - it is also a question of how well you use the VMs you have, how sophisticated the virtualization deployment is, and more.

Nevertheless, by establishing these four key areas of virtualization management - visibility, control, assurance, and automation - most organizations will be in a much better position to beat virtual stall and deliver on the true outstanding promise of virtualization.

CA Technologies provides solutions that deliver virtualization visibility, control, assurance, and automation. For more information, please visit http://ca.com/virtualization.

More Stories By Andi Mann

Andi Mann is vice president of Strategic Solutions at CA Technologies. With more than 20 years' experience across four continents, he has deep expertise in enterprise software on cloud, mainframe, midrange, server, and desktop systems. He has worked within IT departments for governments and corporations, from small businesses to global multinationals; with several large enterprise software vendors; and as a leading industry analyst advising enterprises, governments, and IT vendors - from startups to the world's largest companies. Andi is a co-author of the popular handbook 'Visible Ops - Private Cloud'; he blogs at 'Andi Mann - Übergeek' (http://pleasediscuss.com/andimann) and tweets as @AndiMann.
