Interview with CloudTech - Why virtualisation isn't enough in cloud computing

I was recently interviewed by CloudTech again, this time on whether virtualisation in itself is enough for a successful cloud computing deployment. Below is an excerpt of the article. For the full article, which also includes viewpoints from other analysts, please follow the link:
While it is generally recognised that virtualisation is an important step in the move to cloud computing, since it enables efficient use of the underlying hardware and allows for true scalability, for virtualisation to be truly valuable it needs to understand the workloads that run on it and offer clear visibility of both the virtual and physical worlds.

On its own, virtualisation does not provide sufficient visibility into the multiple applications and services running at any one time. For this reason a primitive automation system can cause a number of errors, such as spinning up another virtual machine to offset the load on enterprise applications that are merely presumed to be overloaded.
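To make that failure mode concrete, here is a minimal sketch of such a primitive scaling decision, contrasted with one that also consults the application layer. All metric sources, names and thresholds below are hypothetical placeholders, not any specific vendor's API:

```python
# Sketch of the failure mode: a naive autoscaler that trusts only
# hypervisor-level metrics. All helper names and values are invented.

def hypervisor_cpu_usage(vm_id: str) -> float:
    """Pretend hypervisor metric: fraction of vCPU consumed (0.0-1.0)."""
    return 0.92  # looks overloaded from the outside

def app_requests_queued(vm_id: str) -> int:
    """Pretend application metric: requests actually waiting."""
    return 0  # the app is idle; the CPU is busy with, say, a backup job

def naive_scale_decision(vm_id: str) -> bool:
    # Spins up a new VM purely on hypervisor CPU: the error described above.
    return hypervisor_cpu_usage(vm_id) > 0.85

def application_aware_decision(vm_id: str) -> bool:
    # Only scale when the application itself also shows real demand.
    return hypervisor_cpu_usage(vm_id) > 0.85 and app_requests_queued(vm_id) > 10

if __name__ == "__main__":
    print("naive:", naive_scale_decision("vm-42"))             # True  -> wasteful spin-up
    print("app-aware:", application_aware_decision("vm-42"))   # False -> no action
```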
That is the argument presented by Karthikeyan Subramaniam in his InfoWorld article last year, and his viewpoint is supported by experts at converged cloud vendor VCE.
“I agree absolutely, because server virtualisation has created an unprecedented shift and transformation in the way datacentres are provisioned and managed,” affirms Archie Hendryx, VCE’s Principal vArchitect. He adds that “server virtualisation has brought with it a new layer of abstraction and consequently a new challenge to monitor and optimise applications.”
Hendryx has also experienced first-hand how customers address this challenge, “as a converged architecture enables customers to quickly embark on a virtualisation journey that mitigates risks and ensures that they increase their P-to-V ratio compared to standard deployments.”
In his view there is a need to develop new ways of monitoring that provide end users with more visibility into the complexities of their applications, their interdependencies and how they correlate with the virtualised infrastructure. “Our customers are now looking at how they can bring an end-to-end monitoring solution for their virtualised infrastructure and applications into their environments,” he says. In his experience this is because customers want their applications to have the same benefits of orchestration, automation, resource distribution and reclamation that they obtained with their hypervisor.
Virtual and physical correlations
Hendryx adds: “By having a hypervisor you would have several operating system (OS) instances and applications. So for visibility you would need to correlate what is occurring on the virtual machine and the underlying physical server with what is happening with the numerous applications.” He therefore believes the challenge is to understand the behaviour of an underlying hypervisor that has several applications running simultaneously on it. For example, if a memory issue were to arise in the operating system of a virtual machine, the application might have no memory left, or be constrained, yet the hypervisor might still report metrics indicating that sufficient memory is available.
Hendryx says these situations are quite common: “This is because the memory metrics – from a hypervisor perspective – are not reflective of the application, as the hypervisor has no visibility into how its virtual machines are using their allocated memory.” The problem is that the hypervisor has no knowledge of whether the memory it allocated to a virtual machine is used for cache, paging or pooled memory. All it actually knows is that it has provisioned the memory, and this is why errors often occur.
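This blind spot can be illustrated with a small sketch. Both views below use invented numbers and helper names; in practice the first would come from hypervisor counters and the second from an in-guest agent:

```python
# Minimal sketch of the memory-visibility gap. Values are hypothetical;
# real figures would come from hypervisor counters and an in-guest agent.

def hypervisor_view(vm_id: str) -> dict:
    # The hypervisor knows only what it has provisioned and what the guest
    # has touched recently; it cannot see *why* those pages are in use.
    return {"provisioned_mb": 8192, "active_mb": 3000}

def guest_view(vm_id: str) -> dict:
    # Inside the guest, the same memory splits into cache, pooled memory
    # and the application working set; the app may be starved even though
    # the hypervisor reports plenty of headroom.
    return {"app_working_set_mb": 7600, "fs_cache_mb": 400, "swapping": True}

def memory_pressure_mismatch(vm_id: str) -> bool:
    hv, guest = hypervisor_view(vm_id), guest_view(vm_id)
    hv_says_fine = hv["active_mb"] < 0.8 * hv["provisioned_mb"]
    guest_is_starved = guest["swapping"] or (
        guest["app_working_set_mb"] > 0.9 * hv["provisioned_mb"])
    return hv_says_fine and guest_is_starved  # the blind spot Hendryx describes

print(memory_pressure_mismatch("vm-42"))  # True: hypervisor metrics mislead
```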
Complexities
This lack of inherent visibility and correlation between the hypervisor, the operating system and the applications running on them can cause another virtual machine to spin up needlessly. “This disparity occurs because setting up a complex group of applications is far more complicated than setting up a virtual machine,” says Hendryx. Nor is there any point in simply cloning a virtual machine together with the applications encapsulated within it; this approach just won’t work, because it fails to address what he describes as “the complexity of multi-tiered applications and their dynamically changing workloads.”
It is therefore essential to have application monitoring in place that correlates application-level metrics and interdependencies with the metrics being constantly collected by the hypervisor.
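As a sketch of what such correlation might look like, the following joins a hypervisor-side metric feed with an application-side feed per virtual machine, so that any scaling alert carries both contexts. The feed formats and field names are assumptions for illustration only:

```python
# Sketch: join hypervisor-side and application-side metrics per VM so
# alerts always carry both contexts. Field names are illustrative only.

def correlate(hypervisor_samples, app_samples):
    """Merge the two metric streams on the VM name."""
    by_vm = {s["vm"]: s for s in hypervisor_samples}
    for app in app_samples:
        hv = by_vm.get(app["vm"], {})
        yield {
            "vm": app["vm"],
            "host_cpu_pct": hv.get("cpu_pct"),        # physical/virtual layer
            "app_latency_ms": app["latency_ms"],      # application layer
            "depends_on": app.get("depends_on", []),  # interdependencies
        }

hv_feed = [{"vm": "vm-42", "cpu_pct": 91.0}]
app_feed = [{"vm": "vm-42", "latency_ms": 12.0, "depends_on": ["db-01"]}]

for row in correlate(hv_feed, app_feed):
    # Act only when both layers agree something is wrong.
    if (row["host_cpu_pct"] or 0) > 85 and row["app_latency_ms"] > 500:
        print("scale out:", row["vm"], "and dependencies", row["depends_on"])
    else:
        print("busy host but healthy app - no action for", row["vm"])
```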
“The other error that commonly occurs is caused when the process associated with provisioning is flawed and not addressed,” he comments. When this happens, automating that process simply bakes in the flaw, so further issues arise. He adds that automation at the virtual machine level alone will fail to allocate resources adequately to the key applications, and this will have a negative impact on response times and throughput – leading to poor performance.
Possible solutions
According to Hendryx, VCE has ensured customers have visibility within a virtualised and converged cloud environment by deploying VMware’s vCenter Operations Manager to monitor the Vblock’s resource utilisation. He adds that “VMware’s Hyperic and Infrastructure Navigator have provided them with visibility of virtual machine to application mapping as well as application performance monitoring, to give them the necessary correlation between applications, operating system, virtual machine and server…” It also offers them the visibility that has so far been lacking.
Archie Hendryx then concluded with best practices for virtualisation within a converged infrastructure:
1. If it’s successful and repeatable, then it’s worth standardising and automating, because automation will keep a successful process consistently repeatable.
2. Orchestrate it, because even when a converged infrastructure is deployed there will still be changes that need rolling out, such as operating system updates, capacity changes, security events, load-balancing or application completions. These all need to happen in a certain order, and the orchestration process itself can be automated (a minimal ordering sketch follows this list).
3. Simplify the front end by recognising that virtualisation has transformed your environment into a resource pool that end users should be able to request and provision for themselves, and subsequently be charged for. This may involve eliminating manual processes in favour of automated workflows, and such simplification will enable a business to realise the benefits of virtualisation.
4. Manage and monitor: you can’t manage and monitor what you can’t see. For this reason VCE customers have an API that provides visibility and context for all of the individual components within a Vblock. They benefit from integration between VMware’s vCenter, vCenter Operations Manager and VCE’s Vision IOS API. From these, VCE’s customers gain the ability to immediately discover, identify and validate all of the components and firmware levels within the converged infrastructure, as well as monitor its end-to-end health. This helps to eliminate bottlenecks by allowing over-provisioned resources to be reclaimed.
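As a toy illustration of the ordering problem in best practice 2, the sketch below expresses a roll-out as a dependency graph and lets Python's standard library produce a valid execution order. The task names are invented; a real orchestrator would attach actual jobs to each step:

```python
# Sketch of best practice 2: expressing roll-out changes as an ordered
# plan. Task names are illustrative; graphlib is in the standard library
# (Python 3.9+).

from graphlib import TopologicalSorter

# "task": {set of tasks that must complete first}
plan = {
    "security-patch":  set(),
    "os-update":       {"security-patch"},
    "capacity-change": {"os-update"},
    "rebalance-load":  {"capacity-change"},
    "validate-apps":   {"rebalance-load"},
}

for task in TopologicalSorter(plan).static_order():
    print("run:", task)  # an orchestrator would execute each step here
# run: security-patch
# run: os-update
# run: capacity-change
# run: rebalance-load
# run: validate-apps
```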
