Interview with CloudTech - Why virtualisation isn't enough in cloud computing

I was recently interviewed again by CloudTech, this time on whether virtualisation by itself is enough for a successful cloud computing deployment. Below is an excerpt of the article. For the full article, which also includes viewpoints from other analysts, please follow the link:
While it is generally recognised that virtualisation is an important step in the move to cloud computing, as it enables efficient use of the underlying hardware and allows for true scalability, for virtualisation to be truly valuable it needs to understand the workloads that run on it and offer clear visibility of both the virtual and physical worlds.

On its own, virtualisation does not provide sufficient visibility into the multiple applications and services running at any one time. For this reason a primitive automation system can make a number of errors, such as spinning up another virtual machine to offset the load on enterprise applications that are merely presumed to be overloaded.
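To illustrate that failure mode, here is a minimal sketch in Python. Every function in it is a hypothetical stand-in (nothing here is a real hypervisor API); the point is only that an automation rule fed solely by hypervisor-level metrics cannot distinguish genuine application load from noise.

```python
# A minimal sketch of naive, hypervisor-only autoscaling.
# All functions are illustrative stand-ins, not a real hypervisor API.
import random

def hypervisor_cpu_percent(vm_id: str) -> float:
    """Stand-in for a hypervisor-level CPU reading for a VM."""
    return random.uniform(0.0, 100.0)

def clone_vm(vm_id: str) -> str:
    """Stand-in for a provisioning call that spins up another VM."""
    new_id = vm_id + "-clone"
    print(f"spinning up {new_id}")
    return new_id

def naive_autoscale(vm_id: str, threshold: float = 85.0) -> None:
    # From this vantage point the hypervisor only sees that the VM is busy;
    # a backup job, a thrashing guest OS and genuine user demand all look
    # identical, so a clone may be spun up for load that does not exist.
    if hypervisor_cpu_percent(vm_id) > threshold:
        clone_vm(vm_id)

naive_autoscale("app-vm-01")
```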
Well, that’s the argument presented by Karthikeyan Subramaniam in his InfoWorld article last year, and his viewpoint is supported by experts at converged cloud vendor VCE.
“I agree absolutely, because server virtualisation has created an unprecedented shift and transformation in the way datacentres are provisioned and managed,” affirms Archie Hendryx, VCE’s Principal vArchitect. He adds that “server virtualisation has brought with it a new layer of abstraction, and consequently a new challenge to monitor and optimise applications.”
Hendryx has also experienced first-hand how customers address this challenge, “as a converged architecture enables customers to quickly embark on a virtualisation journey that mitigates risks and ensures that they increase their P-to-V ratio compared to standard deployments.”
In his view, there is a need to develop new ways of monitoring that give end users more visibility into the complexities of their applications, their interdependencies and how they correlate with the virtualised infrastructure. “Our customers are now looking at how they can bring an end-to-end monitoring solution for their virtualised infrastructure and applications to their environments,” he says. In his experience this is because customers want their applications to have the same benefits of orchestration, automation, resource distribution and reclamation that they obtained with their hypervisor.
Virtual and physical correlations
Hendryx adds: “By having a hypervisor you would have several operating system (OS) instances and applications. So for visibility you would need to correlate what is occurring on the virtual machine and the underlying physical server with what is happening with the numerous applications.” He therefore believes the challenge is to understand the behaviour of an underlying hypervisor that has several applications running simultaneously on it. For example, if a memory issue were to arise within the operating system of a virtual machine, the application might have run out of memory or be memory-constrained, yet the hypervisor might still report that sufficient memory is available.
Hendryx says these situations are quite common: “This is because the memory metrics, from a hypervisor perspective, are not reflective of the application, as the hypervisor has no visibility into how its virtual machines are using their allocated memory.” The problem is that the hypervisor has no knowledge of whether the memory it has allocated to a virtual machine is being used for cache, paging or pooled memory. All it actually knows is that it has provisioned the memory, and this is why errors can often occur.
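A minimal sketch of that mismatch, assuming two hypothetical metric feeds (one from the hypervisor, one from an in-guest agent), might look like this:

```python
# A minimal sketch of memory pressure the hypervisor cannot see.
# Both metric feeds are illustrative stand-ins, not real APIs.
from dataclasses import dataclass

@dataclass
class HypervisorView:
    granted_mb: int   # memory the hypervisor has provisioned to the VM
    active_mb: int    # memory the hypervisor estimates the guest is touching

@dataclass
class GuestView:
    app_used_mb: int  # memory applications genuinely hold
    cache_mb: int     # reclaimable file cache
    swapped_mb: int   # memory the guest has paged out: a sign of real pressure

def hidden_memory_pressure(hv: HypervisorView, guest: GuestView) -> bool:
    """True when the guest is under pressure the hypervisor cannot see."""
    hypervisor_thinks_fine = hv.active_mb < 0.8 * hv.granted_mb
    guest_is_struggling = guest.swapped_mb > 0
    return hypervisor_thinks_fine and guest_is_struggling

# The hypervisor reports ample headroom while the guest is actively paging.
print(hidden_memory_pressure(HypervisorView(8192, 4096), GuestView(3500, 400, 600)))  # True
```

Here the hypervisor sees plenty of free memory while the guest is paging, which is exactly the situation in which hypervisor-only automation makes the wrong call.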
Complexities
This lack of inherent visibility and correlation between the hypervisor, the operating system and the applications that run on them could cause another virtual machine to spin up needlessly. “This disparity occurs because setting up a complex group of applications is far more complicated than setting up a virtual machine,” says Hendryx. Nor is there any point in simply cloning a virtual machine with the application encapsulated inside it; that approach just won’t work, because it fails to address what he describes as “the complexity of multi-tiered applications and their dynamically changing workloads.”
It is therefore essential to have application monitoring in place that correlates application metrics, and the interdependencies between applications, with the metrics the hypervisor is constantly gathering.
“The other error that commonly occurs is caused when the process associated with provisioning is flawed and not addressed,” he comments. When this happens, the automation of that process will remain unsound, to the extent that further issues may arise. He adds that automation at the virtual machine level will fail to allocate resources adequately to the key applications, which will have a negative impact on response times and throughput, leading to poor performance.
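One way to picture the remedy is an automation gate that requires the infrastructure and application signals to agree before provisioning anything. The following Python sketch uses entirely illustrative thresholds and inputs:

```python
# A minimal sketch of gating automation on correlated signals rather than
# hypervisor metrics alone. Thresholds and inputs are illustrative only.

def should_scale_out(hv_cpu_pct: float, app_p95_latency_ms: float, app_queue_depth: int) -> bool:
    """Scale only when infrastructure and application agree there is real load."""
    hypervisor_busy = hv_cpu_pct > 80.0
    application_degraded = app_p95_latency_ms > 500.0 or app_queue_depth > 100
    # Busy hypervisor + healthy application: probably background work, do nothing.
    # Healthy hypervisor + slow application: likely a software issue a new VM won't fix.
    return hypervisor_busy and application_degraded

print(should_scale_out(92.0, 180.0, 12))   # False: busy host, but the app is fine
print(should_scale_out(92.0, 750.0, 240))  # True: both layers show genuine load
```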
Possible solutions
According to Hendryx, VCE has ensured customers have visibility within a virtualised and converged cloud environment by deploying VMware’s vCenter Operations Manager to monitor the Vblock’s resource utilisation. He adds that “VMware’s Hyperic and Infrastructure Navigator have provided them with visibility of virtual machine to application mapping as well as application performance monitoring, to give them the necessary correlation between applications, operating system, virtual machine and server…” It also offers them the visibility that has been so lacking.
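The correlation such tooling maintains can be pictured as a simple chain from application to virtual machine to physical host. The sketch below uses an illustrative data model, not the actual vCenter, Hyperic or Infrastructure Navigator APIs:

```python
# A minimal sketch of VM-to-application mapping; the data model is hypothetical.
from dataclasses import dataclass

@dataclass
class MappedService:
    application: str  # e.g. "orders-api"
    vm: str           # virtual machine hosting the application process
    host: str         # physical server currently running that VM

def affected_applications(mapping: list[MappedService], host: str) -> set[str]:
    """Walk the chain downwards: which applications does a host incident touch?"""
    return {m.application for m in mapping if m.host == host}

mapping = [
    MappedService("orders-api", "vm-12", "esx-host-03"),
    MappedService("billing", "vm-17", "esx-host-03"),
    MappedService("reporting", "vm-09", "esx-host-05"),
]
print(affected_applications(mapping, "esx-host-03"))  # {'orders-api', 'billing'} (order may vary)
```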
Archie Hendryx then concluded with best practices for virtualisation within a converged infrastructure:
1. If a process is successful and repeatable, then it is worth standardising and automating, because automation is what makes successful processes reliably repeatable.
2. Orchestrate it, because even when a converged infrastructure is deployed there will still be changes that need rolling out, such as operating system updates, capacity changes, security events, load-balancing or application completions. These all need to happen in a certain order, and the orchestration process itself can be automated.
3. Simplify the front end by recognising that virtualisation has transformed your environment into a resource pool that end users should be able to request, provision for themselves and subsequently be charged for. This may involve eliminating manual processes in favour of automated workflows; simplification is what enables a business to realise the benefits of virtualisation.
4. Manage and monitor: you can’t manage and monitor what you can’t see. For this reason VCE customers have an API that provides visibility and context for all of the individual components within a Vblock. They benefit from integration between VMware’s vCenter, vCenter Operations Manager and VCE’s API, Vision IOS. From these, VCE’s customers gain the ability to immediately discover, identify and validate all of the components and firmware levels within the converged infrastructure, as well as monitor its end-to-end health. This helps to eliminate bottlenecks by allowing over-provisioned resources to be reclaimed; a minimal sketch of this kind of discovery and validation follows this list.
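To make the discovery-and-validation idea in point 4 concrete, here is a minimal sketch in Python. It is emphatically not the Vision IOS API; the data model, function names and version strings are hypothetical stand-ins for whatever a real converged-infrastructure inventory feed would return.

```python
# A hypothetical sketch of component discovery and firmware validation
# against a known-good baseline. Not the actual VCE Vision API.
from dataclasses import dataclass

@dataclass
class Component:
    name: str      # e.g. "blade-01"
    kind: str      # e.g. "switch", "array", "blade"
    firmware: str  # firmware version reported by the component

def validate_inventory(discovered: list[Component], baseline: dict[str, str]) -> list[str]:
    """Return human-readable findings for components that drift from the baseline."""
    findings = []
    for c in discovered:
        expected = baseline.get(c.name)
        if expected is None:
            findings.append(f"{c.name}: not in baseline (unexpected component)")
        elif c.firmware != expected:
            findings.append(f"{c.name}: firmware {c.firmware}, baseline expects {expected}")
    return findings

# Example: one blade is behind the certified firmware level.
inventory = [Component("blade-01", "blade", "4.2(1a)"), Component("blade-02", "blade", "4.1(3c)")]
baseline = {"blade-01": "4.2(1a)", "blade-02": "4.2(1a)"}
for finding in validate_inventory(inventory, baseline):
    print(finding)
```

The value of this pattern is that drift, whether a component missing from the baseline or firmware behind the certified level, is surfaced automatically rather than discovered during an outage.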
