
The Brave New World of Storage Virtualization

Can you put your virtual environment on autopilot?

I recently found myself intrigued by an article by Jon William Toigo on TechTarget titled "Software-defined infrastructure or how storage becomes software." In his article, Toigo poses the question: Could a software-defined infrastructure, with software-based controls and policies, be the answer to managing and allocating storage? While I am sure Jon would agree we're all tired of the buzzwords around "software-defined anything," the fact of the matter is that we all use the term anyway for lack of a better description of what's occurring today.

Storage specifically is one of those areas I always say is like organizing your room: everyone has their own way of doing it. You want your dresser to be a certain height. You want your mattress pointing in a certain direction and the light from your window hitting your desk at just the right time of day. The truth is, storage is the underpinning of virtualization that everyone wants to architect and manage the way they want to, and no one is going to tell them otherwise... UNTIL - wait for it - software can manage this FOR them.

Where is the robot that keeps your room exactly the way you want it (I wish the Roomba folks would hurry up with this, but I digress)? The industry constantly tosses this idea around, but no one really seems to know where to find the solution. The truth is that no matter what quirks people have with their storage, there is one goal everyone has in common: ensuring applications can consume the storage resources they need while preserving priority and business logic, and increasing efficiency without introducing risk. This is the promise of every storage vendor on the planet trying to sell their latest auto-tiering, dedupe, and compression solution - all the wonderful bling to trick out their man cave.

The problem, however, is that virtualization blurs the lines around storage, and it becomes intractably complex to manage the macro-level supply and demand of resources across the stack. Traditional management tools - both storage- and virtualization-related - are running into the limitations of their stats-based, linear approach to managing diverse environments.

An Illustration
As an illustration, let's use a real-life example: a NetApp environment supporting VMware. The NetApp environment consists of four aggregates spread across two filers (modeled in a short sketch after this list).

  • Aggr1 (SATA) and Aggr2 (SATA) are on Controller A, and each aggregate comprises 2 volumes in VMware
  • Aggr3 (SAS) and Aggr4 (SAS) are on Controller B, and each aggregate comprises 2 volumes in VMware
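
To make that layout concrete, here is the same topology as a small Python sketch. All names are hypothetical; a real environment would discover this through the NetApp and vCenter APIs.

```python
# Illustrative model of the example topology as plain Python data.
# Names are hypothetical, not NetApp or VMware API objects.

aggregates = {
    "Aggr1": {"controller": "A", "media": "SATA", "volumes": ["vol1", "vol2"]},
    "Aggr2": {"controller": "A", "media": "SATA", "volumes": ["vol3", "vol4"]},
    "Aggr3": {"controller": "B", "media": "SAS",  "volumes": ["vol5", "vol6"]},
    "Aggr4": {"controller": "B", "media": "SAS",  "volumes": ["vol7", "vol8"]},
}
```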

Aggr1 sees a spike in IOPS driven by 3 virtual machines demanding IOPS on its 2 volumes. This results in Aggr1 utilizing 93% of its available IOPS capacity due to several high consumers, or "bully" VMs (to use a traditional storage vendor's language). To compound the issue, the high utilization on disk has now manifested itself up the stack and is beginning to impact the ability of other workloads on Aggr1 to access the storage resources they require.
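
A simple way to picture that detection step is a rough utilization check like the Python sketch below; the capacity figure, threshold, and VM names are invented for illustration.

```python
# Hypothetical monitoring check (not a vendor API): flag an aggregate whose
# observed IOPS approach its rated capacity, and name the top consumers.

def check_aggregate(iops_capacity, vm_iops, threshold=0.90):
    """vm_iops maps VM name -> current IOPS driven against this aggregate."""
    utilization = sum(vm_iops.values()) / iops_capacity
    bullies = sorted(vm_iops, key=vm_iops.get, reverse=True)[:3]
    return utilization, (bullies if utilization >= threshold else [])

util, bullies = check_aggregate(
    iops_capacity=10_000,
    vm_iops={"vm-app1": 4_000, "vm-app2": 3_100, "vm-db1": 1_900, "vm-web1": 300},
)
print(f"Aggr1 at {util:.0%} of IOPS capacity; bullies: {bullies}")  # 93%
```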

[Figure: BraveNew-a]

In this example, a traditional software management system for the storage platform will alert an administrator that Aggr1 has exceeded a tolerable IOPS utilization and that it is time to act. Similarly, the virtualization vendor (in this case VMware) will generate alarms related to the virtualized components layered on top of the storage platform.

[Figure: BraveNew-b]

The administrator must then sift through the charts and graphs in their storage vendor's tool, or their virtualization management system, with the end goal being some sort of resource allocation decision: intelligently allocate storage resources to the applications that need them, at the expense of low-priority applications, while avoiding quality-of-service disruption.

More likely, the administrator needs to access both of these interfaces to accomplish this. In this example, the resource decision might be to move the volume to a separate aggregate on Controller A (when in reality this won't do much, given the shared performance constraints underneath), move the volume itself to faster disk behind a separate storage controller, or move the virtual machine to a volume hosted on Controller B.
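
Expressed as data, those three candidate fixes might look something like the following hypothetical schema (not any vendor's actual interface):

```python
# The three remediation options from the example, as candidate actions a
# decision engine could weigh. Schema and names are purely illustrative.

candidate_actions = [
    {"action": "move_volume", "target": "Aggr2",
     "note": "same SATA spindles on Controller A; likely little real relief"},
    {"action": "move_volume", "target": "Aggr3",
     "note": "faster SAS disk behind the other storage controller"},
    {"action": "move_vm", "target": "vol5",
     "note": "relocate a bully VM to a volume hosted on Controller B"},
]
```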

[Figure: BraveNew-c]

Now the second part of this equation (and arguably the more difficult to get right) is: how does the administrator ensure that the domino they decide to push over doesn't create another resource constraint elsewhere in the environment? Fundamentally, traditional storage vendor software offerings and virtualization management tools are incapable of understanding the impact and outcome of any prospective resolution, because they simply do not analyze the interdependencies of that decision across both the (virtualized) compute and storage components. The best case for operations is a head start on the troubleshooting process after quality of service has already been impacted or is in the process of being degraded.
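
The missing piece is a what-if check that simulates each candidate move against its destination before anything is executed. A minimal sketch, assuming we know the destination's current load and rated IOPS capacity:

```python
# Simulate a prospective move so the fix does not simply relocate the
# bottleneck. All numbers are illustrative.

def safe_after_move(dest_used_iops, dest_capacity_iops, moved_iops, ceiling=0.80):
    """True if the destination stays below its utilization ceiling post-move."""
    return (dest_used_iops + moved_iops) / dest_capacity_iops <= ceiling

# Moving a 3,100-IOPS workload onto Aggr3 (4,000 of 12,000 IOPS in use):
print(safe_after_move(4_000, 12_000, 3_100))  # True: ~59% utilization after
```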

Are Things Even at Human Scale Anymore?
To truly achieve a software-defined storage system, there needs to be a new type of management system capable of connecting these two opaque worlds for the purpose of intelligent decision making and resource allocation.

Toigo captures this gap perfectly when he states, "Our storage needs to be managed and allocated by intelligent humans, with software-based controls and policies serving as a more efficient extension of our ability to translate business needs into automation support."

Following this logic, this new system must go above and beyond looking at application issues in isolation; it must determine how to allocate the infrastructure's entire supply of finite storage resources to every virtualized workload and application - at scale. Inevitably, this means looking across all application resource demands concurrently and then determining how to service each application's request for the best cost/benefit to the overall platform, allocating the supply of storage resources intelligently and in the most efficient way. Ideally, this will be done prescriptively - before quality of service is degraded.
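
As a toy illustration of that concurrent, cost/benefit-driven pass - emphatically not any product's actual algorithm - consider a greedy placement that weighs every demand against the whole supply at once:

```python
# Toy global allocation: place each workload on the aggregate that stays
# furthest below its utilization ceiling, considering all demands together.

def plan_placements(demands, supply, ceiling=0.80):
    """demands: {vm: iops}; supply: {aggr: (used_iops, capacity_iops)}."""
    plan = {}
    for vm, need in sorted(demands.items(), key=lambda kv: -kv[1]):
        best = min(supply, key=lambda a: (supply[a][0] + need) / supply[a][1])
        used, cap = supply[best]
        if (used + need) / cap <= ceiling:       # only place within headroom
            plan[vm] = best
            supply[best] = (used + need, cap)    # update remaining supply
    return plan

print(plan_placements({"vm-app1": 4_000, "vm-web1": 300},
                      {"Aggr1": (9_300, 10_000), "Aggr3": (4_000, 12_000)}))
# {'vm-app1': 'Aggr3', 'vm-web1': 'Aggr3'}
```

A real engine would fold in latency, capacity, and business priority rather than IOPS alone, but the shape of the problem - one concurrent decision over all supply and demand - is the same.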

The second phase of this brave new world will involve incorporating business logic that allows the software-driven control plane to consider business constraints alongside capacity and performance metrics in real time. If tier 1 applications need priority for faster disk over low-priority applications, then the system should be "set it and forget it." If tier 3 applications must be confined to bronze (slow) storage, then that constraint should carry over dynamically to any matching workload provisioned across the lifecycle of the environment.

If 20% overhead needs to be maintained across tier 1 storage resources, then the software should be intelligent enough to keep utilization below that level, instead of notifying administrators once it has been crossed and forcing them to bring the infrastructure back from the brink.
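
Both ideas - tier constraints and a maintained reserve - can be pictured as declarative rules the control plane checks before every placement, rather than alarms raised after the fact. A hypothetical sketch:

```python
# Hypothetical declarative policy: business constraints evaluated alongside
# real-time metrics, with the headroom rule enforced *before* it is breached.

policies = {
    1: {"allowed_media": {"SAS", "SSD"}, "iops_ceiling": 0.80},  # keep 20% free
    3: {"allowed_media": {"SATA"},       "iops_ceiling": 0.95},
}

def placement_ok(tier, media, projected_utilization):
    """True if placing a workload of this tier keeps policy intact."""
    rule = policies[tier]
    return (media in rule["allowed_media"]
            and projected_utilization <= rule["iops_ceiling"])

print(placement_ok(1, "SAS", 0.75))  # True: within the tier-1 reserve
print(placement_ok(1, "SAS", 0.85))  # False: would eat into the 20% reserve
```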

The reality is that everyone has their own idea of how best to trick out their room - in this case, their precious storage. Administrators will never truly be comfortable putting their storage architecture on autopilot until they can trust that their policies are maintained and application performance is protected. Any system developed to tackle this brave new world must meet both of these goals simultaneously - a challenge that Toigo argues is beyond human capacity at scale.

More Stories By Eric Bannon

A passion for econometric analysis and statistical modeling led Eric to… wait for it… software. Eric discovered that by applying algorithms grounded in the principles of supply and demand, software can solve some of the biggest challenges in infrastructure and cloud management today.

Joining VMTurbo in 2011, Eric now serves as a Solution Architect, where he helps organizations unlock the full value of virtualization through the implementation of software-defined control. He holds a B.S. in Economics and Finance from Bentley University, and still likes to deconstruct James Heckman’s econometric models in his free time.
