Performance: The Key to Data Efficiency By @Permabit | @CloudExpo [#Cloud]

Data efficiency encompasses a variety of different technologies that enable the most effective use of space on a storage device

Data efficiency - the combination of technologies including data deduplication, compression, zero elimination and thin provisioning - transformed the backup storage appliance market in well under a decade. Why has it taken so long for the same changes to occur in the primary storage appliance market? The answer can be found by looking back at the early evolution of the backup appliance market, and understanding why EMC's Data Domain continues to hold a commanding lead in that market today.

Data Efficiency Technologies
The term "data efficiency" encompasses a variety of different technologies that enable the most effective use of space on a storage device by both reducing wasted space and eliminating redundant information. These technologies include thin provisioning, which is now commonplace in primary storage, as well as less extensively deployed features such as compression and deduplication.

Compression is the use of an algorithm to identify data redundancies within a small distance, for example, finding repeated words within a 64 KB window. Compression algorithms often take other steps to increase the entropy (or information density) of a set of data such as more compactly representing parts of bytes that change rarely, like the high bits of a piece of ASCII text. These sorts of algorithms always operate "locally", within a data object like a file, or more frequently on only a small portion of that data object at a time. As such, compression is well suited to provide savings on textual content, databases (particularly NoSQL databases), and mail or other content servers. Compression algorithms typically achieve a savings of 2x to 4x on such data types.
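To make the local nature of compression concrete, here's a minimal Python sketch using the standard zlib library on highly repetitive text. The data and the exact ratio are illustrative only; real-world savings depend entirely on the content being compressed.

```python
import zlib

# Repetitive text compresses well because the same byte sequences
# recur within the compressor's window.
text = b"INFO request handled in 12ms\n" * 1000

compressed = zlib.compress(text, 6)
ratio = len(text) / len(compressed)

print(f"original: {len(text)} bytes, compressed: {len(compressed)} bytes")
print(f"savings ratio: {ratio:.1f}x")
```

Note that zlib, like the algorithms described above, only finds matches within a bounded window; a duplicate file elsewhere on the same array is invisible to it, which is exactly the gap deduplication fills.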

Deduplication, on the other hand, identifies redundancies across a much larger set of data, for example, finding larger 4 KB repeats across an entire storage system. This requires both more memory and much more sophisticated data structures and algorithms, so deduplication is a relative newcomer to the efficiency game compared to compression. Because deduplication has a much greater scope, it has the opportunity to deliver much greater savings - as much as 25x on some data types. Deduplication is particularly effective on virtual machine images as used for server virtualization and VDI, as well as development file shares. It also shows very high space savings in database environments as used for DevOps, where multiple similar copies may exist for development, test and deployment purposes.
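The block-level matching described above can be sketched in a few lines of Python. This is a toy content-addressed store (not any vendor's actual implementation): it hashes fixed 4 KB blocks and keeps one physical copy per unique block.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size 4 KB blocks, as described above

def dedup_store(data: bytes):
    """Toy content-addressed store: one physical copy per unique block."""
    store = {}    # block hash -> block contents (physical storage)
    recipe = []   # ordered hashes that reconstruct the logical data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicates stored only once
        recipe.append(digest)
    return store, recipe

# Two "VM images" that share most of their blocks deduplicate well.
base = bytes(range(256)) * 64            # 16 KB of common content
vm1 = base + b"A" * BLOCK_SIZE           # image 1: base + unique block
vm2 = base + b"B" * BLOCK_SIZE           # image 2: base + unique block

store, recipe = dedup_store(vm1 + vm2)
logical = len(vm1) + len(vm2)
physical = sum(len(b) for b in store.values())
print(f"logical {logical} bytes -> physical {physical} bytes")
```

The hard part, which this sketch ignores, is exactly what the paragraph above describes: indexing those hashes across an entire storage system, at full write speed, without the index itself consuming all available memory.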

The Evolution of Data Efficiency
In less than ten years, data deduplication and compression shifted billions of dollars of customer investment from tape-based backup solutions to purpose-built disk-based backup appliances. The simple but incomplete reason for this is that these technologies made disk cheaper to use for backup. While this particular aspect enabled the switch to disk, it wasn't the driver for the change.

The reason customers switched from tape to disk was that backing up to disk, and especially restoring from it, is much, much faster. Enterprise environments were facing increasing challenges in meeting their backup windows, recovery point objectives and (especially) recovery time objectives with tape-based backup systems. Customers were already using disk-based backup in critical environments, and they were slowly expanding its use as the gradual price decline of disk allowed.

Deduplication enabled the media transition for backup by dramatically changing the price structure of disk-based backup relative to tape. Disk-based backup is still more expensive, but deduplication narrowed the gap enough that customers could afford the faster, better medium.

It's also worth noting that Data Domain, the market leader early on, still commands a majority share of the market. This can be partially explained by history, reputation and the EMC sales machine, but other early market entrants including Quantum, Sepaton and IBM have struggled to gain share, so this doesn't fully explain Data Domain's prolonged dominance.

The rest of the explanation is that deduplication technology is extremely difficult to build well, and Data Domain's product is a solid solution for disk-based backup. In particular, it is extremely fast for sequential write workloads like backup, and thus doesn't compromise the performance of streaming to disk. Remember, customers aren't buying these systems for "cheap disk-based backup;" they're buying them for "affordable, fast backup and restore." Performance is the most important feature. Many of the competitors are still delivering the former - cost savings - without delivering the golden egg: performance.

Lessons for Primary Data Efficiency
What does the history of deduplication in the backup storage market teach us about the future of data efficiency in the primary storage market? First, we should note that data efficiency is catalyzing the same media transition in primary storage as it did in backup, on the same timeframe - this time from disk to flash, instead of tape to disk.

As was the case in backup, cheaper products aren't the major driver for customers in primary storage. Primary storage solutions still need to perform as well as (or better than) systems without data efficiency, under the same workloads. Storage consumers want more performance, not less, and technologies like deduplication enable them to get that performance from flash at a price they can afford. A flash-based system with deduplication doesn't have to be cheaper than the disk-based system it replaces, but it does have to be better overall!

This also explains the slow adoption of efficiency technologies by primary storage vendors. Building compression and deduplication for fully random access storage is an extremely difficult and complex thing to do right. Doing this while maintaining performance - a strict requirement, as we learn from the history of backup - requires years of engineering effort. Most of the solutions currently shipping with data efficiency are relatively disappointing and many other vendors have simply failed at their efforts, leaving only a handful of successful products on the market today.

It's not that vendors don't want to deliver data efficiency on their primary storage; it's that they have underestimated the difficulty of the task and simply haven't been able to build it yet.

Hits and Misses (and Mostly Misses)
If we take a look at primary storage systems shipping with some form of data efficiency today, we see that the offerings are largely lackluster. The reason that offerings with efficiency features haven't taken the market by storm is that they deliver the same thing as the less successful disk backup products - cheaper storage, not better storage. Almost universally, they deliver space savings at a steep cost in performance, a tradeoff no customer wants to make. If customers simply wanted to spend less, they would buy bulk SATA disk rather than fast SAS spindles or flash.

Take NetApp, for example. One of the very first to market with deduplication, they proved that customers wanted efficiency - but customers were also quickly turned off by the limitations of the ONTAP implementation. Take a look at NetApp's Deduplication Deployment and Implementation Guide (TR-3505). Some choice quotes include, "if 1TB of new data has been added [...], this deduplication operation takes about 10 to 12 hours to complete," and "With eight deduplication processes running, there may be as much as a 15% to 50% performance penalty on other applications running on the system." Their "50% Virtualization Guarantee* Program" has 15 pages of terms and exceptions behind that little asterisk. It's no surprise that most NetApp users choose not to turn on deduplication.

VNX is another case in point. The "EMC VNX Deduplication and Compression" white paper is similarly frightening. Compression is offered, but it's available only as a capacity tier: "compression is not suggested to be used on active datasets." Deduplication is available as a post-process operation, but "for applications requiring consistent and predictable performance [...] Block Deduplication should not be used."

Finally, I'd like to address Pure Storage, which has set the standard for offering "cheap flash" without delivering the full performance of the medium. They are the most successful of the all-flash array offerings on the market today and have deeply integrated data efficiency features, but they struggle to sustain 150,000 IOPS. Their arrays deliver a solid win on price over flash arrays without optimization, but that level of performance is not going to tip the balance for primary storage the way Data Domain did for backup.

To be fair to the above products, there are plenty of others that have tried to build their own deduplication and simply failed to deliver something that meets their exacting business standards. IBM, EMC VMAX, Violin Memory and others have surely tried to build their own efficiency features, and have even announced delivery promises over the years, but none have shipped to date.

There are, however, some early leaders in the primary efficiency game! Hitachi is delivering "Deduplication without Compromise" on their HNAS and HUS platforms, providing deduplication (based on Permabit's Albireo™ technology) that doesn't impact the fantastic performance of the platform. This solution delivers savings and performance for file storage, although the block side of HUS still lacks efficiency features.

EMC XtremIO is another winner in the all-flash array sector of the primary storage market. XtremIO has been able to deliver outstanding performance with fully inline data deduplication capabilities. The platform isn't yet scalable or dense in capacity, but it does deliver the required savings and performance necessary to make a change in the market.

Requirements for Change
The history of the backup appliance market makes the requirement for change in the primary storage market clear. Data efficiency simply cannot compromise performance, which is the reason why a customer is buying a particular storage platform in the first place. We're seeing the seeds of this change in products like HUS and XtremIO, but it's not yet clear who will be the Data Domain of the primary array storage deduplication market. The game is still young.

The good news is that data efficiency can do more than just reduce cost; it can also increase performance, making a better product overall, as we saw in the backup market. Inline deduplication can eliminate writes before they ever reach disk or flash, and deduplication can inherently sequentialize writes in a way that vastly improves random write performance in critical environments like OLTP databases. These are some of the requirements for a tipping point in the primary storage market.

Data efficiency in primary storage must deliver uncompromising performance in order to be successful. At a technical level, this means that any implementation must deliver predictable inline performance, a deduplication window that spans the entire capacity of the existing storage platform, and performance scalability to meet the application environment. The current winning solutions provide some of these features today, but it remains to be seen which product will capture them all first.

Inline Efficiency
Inline deduplication and compression - eliminating duplicates as they are written, rather than with a separate process that examines data hours (or days) later - is an absolute requirement for performance in the primary storage market, just as we've previously seen in the backup market. By operating in an inline manner, efficiency operations provide immediate savings, deliver greater and more predictable performance, and allow for greatly accelerated data protection.

With inline deduplication and compression, the customer sees immediate savings because duplicate data never consumes additional space. This is critical in high data change rate scenarios, such as VDI and database environments, because non-inline implementations can run out of space and prevent normal operation. In a post-process implementation, or one using garbage collection, duplicate copies of data pile up on the media while waiting for the optimization process to catch up. If a database, VM or desktop is cloned many times in succession, the storage rapidly fills and becomes unusable. Inline operation prevents this bottleneck - one called out explicitly in the NetApp documentation cited above, where at most about 2 TB of new data can be deduplicated per day. On a heavily utilized system, a post-process implementation may never catch up with newly written data!
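The cloning scenario above can be made concrete with a toy inline-deduplicating block device in Python (an illustrative sketch, not any shipping product's design). Because duplicates are eliminated at write time, cloning a "golden image" many times consumes almost no additional physical space:

```python
import hashlib

class InlineDedupDevice:
    """Toy inline-deduplicating block device (illustrative sketch only)."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = {}   # block hash -> physical block
        self.volume = []   # logical block map (ordered hashes)

    def write(self, block: bytes):
        digest = hashlib.sha256(block).digest()
        if digest not in self.blocks:
            if len(self.blocks) >= self.capacity:
                raise IOError("device full")
            self.blocks[digest] = block  # only novel data consumes space
        self.volume.append(digest)       # duplicates cost one map entry

dev = InlineDedupDevice(capacity_blocks=10)
golden_image = [bytes([i]) * 4096 for i in range(8)]  # 8 unique 4 KB blocks

for _ in range(100):          # clone the image 100 times
    for block in golden_image:
        dev.write(block)      # after clone #1, writes consume no space

print(f"logical blocks: {len(dev.volume)}, "
      f"physical blocks: {len(dev.blocks)}")
```

A post-process design would instead have to land all 100 clones on media first and reclaim the duplicates later, which is precisely how a rapidly cloned environment can fill the device before optimization catches up.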

Inline operation also provides for the predictable, consistent performance required by many primary storage applications. In this case, deduplication and compression occur at the time of data write and are balanced with the available system resources by design. This means that performance will not fluctuate wildly as with post-process operation, where a 50% impact (or more) can be seen on I/O performance, as optimization occurs long after the data is written. Additionally, optimization at the time of data write means that the effective size of DRAM or flash caches can be greatly increased, meaning that more workloads can fit in these caching layers and accelerate application performance.

A less obvious advantage of inline efficiency is the ability for a primary storage system to deliver faster data protection. Because data is reduced immediately, it can be replicated immediately in its reduced form for disaster recovery. This greatly shrinks recovery point objectives (RPOs) as well as bandwidth costs. In comparison, a post-process operation requires either waiting for deduplication to catch up with new data (which could take days to weeks), or replicating data in its full form (which could also take days to weeks of additional time).

Capacity and Scalability
Capacity and scalability of a data efficiency solution would seem to be obvious requirements, but they're conspicuously absent from the products on the market today. As we've seen, a storage system incorporating deduplication and compression must be a better product, not just a cheaper product. This means it must support the same storage capacity and performance scalability as the primary storage platforms that customers are deploying today.

Deduplication is a relative newcomer to the data efficiency portfolio, and this is largely because the system resources required, in terms of CPU and memory, are much greater than older technologies like compression. The amount of CPU and DRAM in modern platforms means that even relatively simple deduplication algorithms can now be implemented without substantial hardware cost, but they're still quite limited in the amount of storage that they can address, or the data rate that they can accommodate.

For example, even the largest systems from all-flash array vendors like Pure Storage and XtremIO support well under 100 TB of storage capacity, far smaller than the primary storage arrays being broadly deployed today. NetApp, while supporting large arrays, identifies duplicates only within a very small window of history - perhaps 2 TB or smaller. To deliver effective savings, duplicates must be identified across the entire storage array, and the storage array must support the capacities that are being delivered and used in the real world. Smaller systems may be able to peel off individual applications like VDI, but they'll be lost in the noise when the primary storage data efficiency tipping point comes.
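A back-of-envelope calculation shows why a full-capacity deduplication index is so demanding. The figures below (4 KB blocks, roughly 32 bytes of index entry per block for a hash plus a location) are illustrative assumptions, not numbers from any vendor:

```python
# Back-of-envelope sizing for a naive in-memory dedup index.
# Assumed figures (illustrative): 4 KB blocks, 32 bytes per index entry.
TB = 2 ** 40
BLOCK_SIZE = 4 * 1024
ENTRY_SIZE = 32  # bytes per entry: hash fragment + block location

def naive_index_bytes(capacity_bytes: int) -> int:
    """RAM needed to index every block of the given capacity."""
    return (capacity_bytes // BLOCK_SIZE) * ENTRY_SIZE

for capacity_tb in (2, 100, 1000):
    gib = naive_index_bytes(capacity_tb * TB) / 2 ** 30
    print(f"{capacity_tb:>5} TB of storage -> ~{gib:,.0f} GiB of index RAM")
```

Under these assumptions, even a 100 TB array needs hundreds of GiB of DRAM for a naive full index, which helps explain both the small dedup windows and the small maximum capacities seen in shipping products.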

Shifting the Primary Storage Market to Greater Efficiency
A lower cost product is not sufficient to substantially change customers' buying habits, as we saw from the example of the backup market. Rather, a superior product is required to drive rapid, revolutionary change. Just as the backup appliance market is unrecognizable from a decade ago, the primary storage market is on the cusp of a similar transformation. A small number of storage platforms are now delivering limited data efficiency capabilities with some of the features required for success: space savings, high performance, inline deduplication and compression, and capacity and throughput scalability. No clear winner has yet emerged. As the remaining vendors implement data efficiency, we will see who will play the role of Data Domain in the primary storage efficiency transformation.

More Stories By Jered Floyd

Jered Floyd, Chief Technology Officer and Founder of Permabit Technology Corporation, is responsible for exploring strategic future directions for Permabit's products, and for providing thought leadership to guide the company's data optimization initiatives. He previously established Permabit's software development methodologies and was responsible for developing the core protocol and the initial server and system architectures of Permabit's products.

Prior to Permabit, Floyd was a Research Scientist on the Microbial Engineering project at the MIT Artificial Intelligence Laboratory, working to bridge the gap between biological and computational systems. Earlier at Turbine, he developed a robust integration language for managing active objects in a massively distributed online virtual environment. Floyd holds Bachelor’s and Master’s degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology.
