By Andi Mann
December 22, 2010 02:00 PM EST
Virtualization carries a well-known and outstanding promise: that it can and does deliver significant IT and business benefits, including:
- Substantial ROI: In hardware consolidation, power, rent, cooling, downtime, etc.
- Greater agility: With fast IT support for business innovation, transformation, etc.
- Improved continuity: Through hardware redundancy, site recovery, live migration, etc.
- And many other business values
However, with more and more data showing that enterprises are struggling to accelerate the conversion and maturity of their virtualization deployments, it is clear that "outstanding" in this context carries a dual meaning - not just in the sense of fantastic outcomes, but also of undelivered ones.
The Facts Don't Lie - Or Do They?
Actually, the raw figures for virtualization adoption can be very misleading. Every survey and study shows clearly that 75%, 85%, or even 95% of organizations are adopting server virtualization; more and more we see that these same high proportions are deploying virtualization for production applications; and we see the volume of new servers and new applications utilizing virtualization breaking well past the 50% range.
However, these stats do not tell the whole story.
What's missing is how and why virtualization deployments are actually stalling within a majority of enterprises. Typically, as a virtualization deployment reaches around 30-40% of servers, IT is unable to scale up the resources and processes that got it to that point. As a result, the virtualization deployment slows down or stops altogether. This is called "virtual stall" - the inability to overcome the "tipping points" needed to move the needle on virtualization maturity.
I have cited data throughout 2010 that shows this - such as the CDW Server Virtualization Life Cycle Report that showed only 34% of total server infrastructure consists of virtual servers; or the Forrester Research from May this year (conducted for CA) that showed just 30% of servers on average are virtualized.
Virtual Stall - Fact or Fiction?
Even so, many people cannot believe that virtual stall exists.
The outstanding promise (and to be fair, the substantial success) of virtualization puts blinkers on even the most assiduous observers. They see deployments happening, and assume virtual stall is just representative of a point in time on the virtualization journey, and that we are moving the needle every day. They see organizations that are 60%, 70%, or even 80% virtual and assume that virtual stall is really just a myth. They see organizations in different geographies and assume that virtual stall is just a U.S. concern. They see virtual stall as entirely avoidable, "simply" by applying the right preparation and planning.
Unfortunately, the truth is that most organizations are not overcoming virtual stall; most organizations are stuck at much lower rates of virtualization; virtual stall does affect organizations from around the world; and organizations cannot (at the very least do not) always overcome it simply with better plans.
The proof is in how consistent the indicators are.
Here Come the Facts
For example, the CDW-G 2010 Government Virtualization Report in July 2010 showed that an average of just 40% of total government infrastructure consists of virtual servers. Research conducted in Europe by leading industry analyst Kuppinger Cole in November 2010 shows that only 34% of organizations have deployed server virtualization for more than 50% of their systems. A new study by Cisco released in December 2010 polled organizations in the United States, Europe and India, and two-thirds of respondents said that less than half of their environment is virtualized. Even a CA Technologies survey conducted at the November 2010 Gartner ITxpo conference in Europe - a sophisticated audience of mostly large enterprises with access to good planning advice, which one would expect to show much greater virtualization maturity - still showed over half of the attendee respondents are less than 50% virtualized.
What Causes Virtual Stall?
The causes are legion, and often hard to overcome, but they are not all difficult to identify. Some key reasons include:
- Costs: Of new hardware (yes, virtualization often needs new hardware - servers, storage, support), virtualization licenses (even though many are looking at free alternatives to VMware), OS and application licenses (see next bullet), staff resourcing and training, and more.
- Vendor licensing: Oracle is often cited for not certifying its products on non-Oracle virtualization platforms, but others like Microsoft and many smaller vendors are also guilty.
- Staffing: Staff with virtualization certifications cost more to start with; but more importantly, virtualization resources cannot scale to manage two, three, or four times the VMs, and still apply the higher-level skills needed to virtualize more complex and mission-critical applications.
- Business resistance: This has proven to be an issue time and again, where business owners do not allow IT to virtualize "their" application, and resist any changes that could end up sharing "their" server with other departments.
- Security and compliance: IT staff may dismiss business owners' fears about virtualization security and compliance, but issues do exist in specific new threats (e.g., blue pill, red pill) and more mundane vulnerabilities (poor functional isolation, privileged user access, auditability, etc.).
- Poor visibility: As virtualization gets more complex, it becomes increasingly hard to locate servers, detect problems, isolate faults, diagnose causes, or determine remediation, because virtual servers are by definition abstracted, and both add and hide a layer of complexity.
- Increased dynamics: Virtualization allows extreme workload mobility, but this is also a threat, as rapid motion introduces its own problems (security of collocation, lack of isolation, problem isolation challenges, migration loops, etc.) and additional complexity.
- Tool sophistication: As virtualization gets more broadly deployed, you cannot just use the same old tools, as they lack virtualization-specific information and capabilities; IT must start to grapple with a lot more manual activity, or swap out their management tools.
- Silos of control: In a physical world the old silos of control (servers, storage, networking, etc.) worked, if not well, then acceptably; in a virtual world, these barriers are broken down, so in addition to LOB politics, IT has to grapple with internal politics, and providing appropriate skills.
Of course, this is not an exhaustive list. Other issues include facilities constraints, lack of insight into available capacity, existing "VM sprawl," poor suitability of some applications, lack of support for internally developed applications, added complexity of heterogeneous environments, high utilization of some existing servers, and poor management processes.
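The stall pattern itself is easy to spot in inventory data. As an illustrative sketch (the server records and the 40% threshold here are hypothetical assumptions for demonstration, not figures from the surveys cited above), a simple check of virtualization penetration might look like:

```python
# Illustrative sketch: flag "virtual stall" from a server inventory.
# The inventory records and the 40% stall threshold are hypothetical
# assumptions, not figures taken from the surveys cited in this article.

STALL_THRESHOLD = 0.40  # deployments often slow near 30-40% virtualized


def virtualization_rate(inventory):
    """Return the fraction of servers in the inventory that are virtual."""
    if not inventory:
        return 0.0
    virtual = sum(1 for server in inventory if server["type"] == "virtual")
    return virtual / len(inventory)


def is_stalled(inventory, threshold=STALL_THRESHOLD):
    """True if virtualization penetration sits at or below the threshold."""
    return virtualization_rate(inventory) <= threshold


# Hypothetical inventory: 35 of 100 servers are virtual.
inventory = [{"type": "virtual"}] * 35 + [{"type": "physical"}] * 65
print(f"Virtualized: {virtualization_rate(inventory):.0%}")  # Virtualized: 35%
print("Stalled" if is_stalled(inventory) else "On track")    # Stalled
```

A real deployment would, of course, build the inventory from discovery tooling rather than a hard-coded list, but the metric - and the tipping point it reveals - is the same.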
How to Solve - or Avoid - Virtual Stall
Certainly there are no silver bullets. However, some solutions are easy to identify, even though they may not always be easy to implement. Four key areas that IT needs to address include:
- Visibility: IT must implement technologies and processes to provide visibility into the whole environment (systems, applications, middleware, hosts, guests, networks, storage, etc.). This includes integrated capabilities for deep discovery and inventory recording, application and system dependency mapping, resource capacity and utilization recording, identification of workloads and users, detection of configuration settings and drift, and detecting data leaking and data loss. This will help to achieve (and prove) security, compliance, capacity, availability, response, and SLA objectives by allowing IT to align performance, security, resources, and compliance with business policy, and enable more streamlined operations activity (e.g., problem triage and response) even in a more dynamic environment, to provide line of business confidence and reduce costs.
- Control: Beyond seeing the environment and its problems, IT must take control with new technologies and processes to govern virtual environments. This should include capabilities that are integrated and broadly accessible across IT silos, to manage replication, migration, and continuity; to restrict access for privileged users; to reduce or eliminate the "rogue" VM deployments that lead to VM sprawl; to continuously manage provisioning, capacity, performance, and configuration; and to control allocation of resources (facilities, servers, storage, software, licenses, etc.) according to business policy. This helps to reduce IT staff skill requirements and costs, diminish the impact of IT silos, manage rapid migrations more effectively, and provide sufficient controls to convince business owners to expand tier 1 applications.
- Assurance: To truly provide guarantees to business owners, IT needs to provide assurance that service performance will meet response, continuity, security, compliance, audit, experience, and uptime SLAs. Solutions can do this by providing rich visibility into end-to-end infrastructure and application performance, including traffic and response times; by tracking events, tracing incidents, identifying symptoms, and isolating root causes; and above all, by executing rapid remediation actions in real time to not just correct problems but to prevent them. This is going to build more trust from business owners, by meeting (and even exceeding) their compliance, satisfaction, performance, and approval goals, while also reducing staff costs (on triage etc.).
- Automation: To address staffing issues, plus a host of compliance, audit, error reduction, and other challenges, IT must look to automate more mundane tasks, connect "known good tasks" into repeatable processes, and execute "known good processes" automatically too. Automating processes for provisioning/deprovisioning, configuration detection, patch and lifecycle remediation, monitoring and alerting, problem tracking and remediation, and end-user self-service will reduce the skill burden on highly trained staff, provide built-in documentation even for complex processes, allow junior and less-trained staff to do more complex work, reduce or eliminate human errors, and add security through functional isolation and process auditability.
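To make these four areas concrete, here is a minimal sketch that ties three of them together: visibility (a discovered inventory snapshot), control (flagging "rogue" VMs that were never approved in the CMDB), and automation (chaining "known good tasks" into a repeatable, audited remediation process). All data structures and task names here are hypothetical assumptions; a real environment would source this data from its discovery and CMDB tooling.

```python
# Illustrative sketch combining visibility, control, and automation.
# All VM names, records, and task names are hypothetical assumptions;
# real environments would source them from discovery/CMDB tooling.

# Visibility: a discovered inventory snapshot (hypothetical data).
discovered_vms = ["web-01", "web-02", "db-01", "test-99"]

# Control: the CMDB's approved-VM list (hypothetical data).
approved_vms = {"web-01", "web-02", "db-01"}


def find_rogue_vms(discovered, approved):
    """Flag VMs running in the environment but never approved (VM sprawl)."""
    return [vm for vm in discovered if vm not in approved]


# Automation: "known good tasks" connected into a repeatable process.
def quarantine(vm):
    return f"quarantined {vm}"


def notify_owner(vm):
    return f"notified owner of {vm}"


def run_runbook(vm, tasks):
    """Execute each task in order, keeping an audit log of the results."""
    return [task(vm) for task in tasks]


audit_log = []
for rogue in find_rogue_vms(discovered_vms, approved_vms):
    audit_log.extend(run_runbook(rogue, [quarantine, notify_owner]))

print(audit_log)  # ['quarantined test-99', 'notified owner of test-99']
```

The point of the sketch is the shape, not the specifics: each remediation step is a small, tested unit, the runbook makes the sequence repeatable by junior staff, and the audit log provides the built-in documentation and auditability described above.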
The Bottom Line
Enhancing your virtualization management maturity by implementing these technologies and processes will help to eliminate virtual stall. Solutions that support virtual, physical, and cloud environments across server, network, storage, application, and security needs; that span multiple heterogeneous virtualization platforms, technologies, and vendors; and that solve specific issues today while remaining scalable enough to extend the investment into a strategic solution will help to overcome the virtualization "tipping points" that lead to virtual stall.
Of course, some elements are simply beyond IT's direct control (e.g., vendor licensing), while others are not even a question of technology (e.g., poor existing processes). Moreover, virtualization maturity is not just a question of how many VMs you have, or what your server-to-VM ratio is - virtualization maturity is also a question of how well you use the VMs you have, how sophisticated the virtualization deployment is, and more.
Nevertheless, by establishing these four key areas of virtualization management - visibility, control, assurance, and automation - most organizations will be in a much better position to beat virtual stall, and to deliver on the true outstanding promise of virtualization.
CA Technologies provides solutions that deliver virtualization visibility, control, assurance, and automation. For more information, please visit http://ca.com/virtualization.