Tiering, ILM, HSM and WGAF

In 2004, SNIA defined Information Lifecycle Management (ILM) as comprising the policies, processes, practices, and tools used to align the business value of information with the most appropriate and cost-effective IT infrastructure (I would have added 'for its placement', but the ILMers at SNIA are wicked fussy about any implication that ILM has ANYTHING to do with storage. –Ed.) from the time information is conceived through its final disposition. Information is aligned with business processes through management policies and service levels associated with applications, metadata, information, and data.

I like that definition – in fact, I was hanging around SNIA a lot back then kick-starting the CDP working group, and may have contributed to that definition. Who knows – it sounds like something I might have written: lots of warm air whooshing around those words…

In January of 2005, an unattributed author at TechTarget opined that: Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations.
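To make that definition concrete, here's a minimal sketch of what 'assigning categories of data to types of storage media' could look like as a policy table. The categories, tier names, and rationales are all mine, invented for illustration – nothing here comes from TechTarget or SNIA.

```python
# Hypothetical tiering policy: categories of data mapped to types of media.
# Every name below is illustrative, not from any standard.
TIER_POLICY = {
    "transactional":  ("ssd",       "performance-critical, frequent access"),
    "active_files":   ("fast_disk", "regular access, moderate cost"),
    "reference_data": ("slow_disk", "infrequent access, cheap capacity"),
    "compliance":     ("archive",   "retention-driven, rarely read"),
}

def tier_for(category: str) -> str:
    """Return the storage medium for a data category; default to fast disk."""
    media, _rationale = TIER_POLICY.get(category, ("fast_disk", "default"))
    return media

if __name__ == "__main__":
    for cat in ("transactional", "compliance", "marketing_decks"):
        print(f"{cat:>15} -> {tier_for(cat)}")
```

The whole trick is in that mapping: the cheaper the medium you can justify for a category, the lower your total storage cost.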

In early 2006, SNIA added that "ILM is more than Tiered Storage" and went on to suggest the need for a complex set of data classification capabilities. Whoops – too much, too soon? All the data classification start-ups I knew blew up or died on the vine.

Then, according to another unnamed author at TechTarget back in March 2006, the definition of information lifecycle management (ILM) became:

"A comprehensive approach to managing the flow of an information system's data and associated metadata from creation and initial storage to the time when it becomes obsolete and is deleted."

I'm not as crazy about that definition – final disposition not always being obsolescence and deletion and all...

The same unknown author went on to add:

"Unlike earlier approaches to data storage management, ILM involves all aspects of dealing with data, starting with user practices, rather than just automating storage procedures, as for example, hierarchical storage management (HSM) does. Also in contrast to older systems, ILM enables more complex criteria for storage management than data age and frequency of access."

I guess I don't really understand what 'all aspects of dealing with data, starting with user practices' means. Do you? I do appreciate that ILM can use more complex criteria than age and access frequency, but I didn't realize that HSM couldn't…
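To be fair to the unnamed author, 'more complex criteria' presumably just means combining several attributes in one placement decision instead of looking only at age or access counts. A hypothetical sketch – every attribute, threshold, and tier name here is mine:

```python
from dataclasses import dataclass

@dataclass
class FileMeta:
    """Minimal file metadata for a placement decision (illustrative only)."""
    age_days: int
    accesses_per_month: int
    extension: str
    owner: str

def placement(f: FileMeta) -> str:
    """A composite policy: type AND owner AND age AND access frequency."""
    if f.extension in {".exe", ".dll"}:             # never migrate executables
        return "tier1"
    if f.owner == "finance" and f.age_days < 2555:  # ~7-year retention window
        return "tier2"
    if f.age_days > 90 and f.accesses_per_month == 0:
        return "archive"
    return "tier1"

# An old, untouched document lands on the archive tier:
print(placement(FileMeta(400, 0, ".doc", "engineering")))  # -> archive
```

And for the record, rules like these are nothing an HSM couldn't evaluate too – which is rather the point of my grumbling above.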

According to the same source – this time a July 2001 entry written by Gaston Navea – the definition of Hierarchical Storage Management (HSM) is:

Policy-based management of file backup and archiving in a way that uses storage devices economically and without the user needing to be aware of when files are being retrieved from backup storage media.

Gaston then adds that policies might include age or access, but also states that executables might be excluded, implying at least that 'type' metadata is also a valid differentiator. But for me, Gaston puts the stake in the heart of HSM when he goes on to say, "The apparently available files are known as stubs and point to the real location of the file in backup storage." No wonder HSM flopped. Stubs are a nightmare. Disconnecting the location from the payload is a huge strategic mistake. Either is useless without the other, the opportunities for disconnection are ample, and the cost of a disconnect is horrific. So HSM is out.
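For anyone who never lived through an HSM deployment, here's a rough sketch of the stub mechanism Gaston describes. The file layout and helper names are mine, invented to show the moving parts – real HSM stubs were filesystem-specific, not JSON:

```python
import json
from pathlib import Path

def write_stub(original: Path, archive_location: str) -> None:
    """Replace a migrated file with a tiny stub that points at the payload.

    After this runs, the data exists ONLY in the archive; the visible
    file is just a pointer.
    """
    stub = {"stub": True, "archive": archive_location,
            "size": original.stat().st_size}
    original.write_text(json.dumps(stub))

def read_file(path: Path, fetch_from_archive) -> bytes:
    """Transparent recall: if the file is a stub, chase the pointer."""
    data = path.read_bytes()
    try:
        meta = json.loads(data)
        if isinstance(meta, dict) and meta.get("stub"):
            # If the archive entry moved, or the stub got copied without
            # its payload, this lookup dangles – the 'disconnect' in action.
            return fetch_from_archive(meta["archive"])
    except (json.JSONDecodeError, UnicodeDecodeError):
        pass  # ordinary file, not a stub
    return data
```

Trace the failure modes and you see the problem: back up the stub tree without the archive, reorganize the archive, or let any tool rewrite the stub, and the pointer dangles exactly as described above.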

Back to the ILMers at SNIA. They believe ILM represents a holistic approach to information management. I suppose you have to appreciate anything that represents a holistic approach. But then they go on to introduce ILM 2.0, which re-introduces the concept of Information Lifecycle Management with accompanying processes and procedures absent from earlier days. I personally hate all 2.0 terminology, but that's just me.

And they warn us, even with ILM 2.0, to: Make no mistake; Information Lifecycle Management is still a difficult and challenging proposition. They go on to define ILM 2.0 as: a service-management-style framework for cost-effectively aligning datacenter storage, security, services, applications, and infrastructure with the business requirements for the organization's information. This model, by the way, has 100 individual elements and comes with a checklist of 21 specific steps to follow to achieve it.

As Andy Azula says in the recent UPS whiteboard commercial, "Is anyone else getting thirsty?"

Referring to an article by George Crump on file virtualization, my F5 colleague Don MacVittie wrote, "The other thing that made me throw up a little in the back of my throat was his use of the dread phrase 'ILM' (Information Lifecycle Management). I shudder when our marketing organization uses it too." This is memorable, and pretty witty if you ask me, especially considering that Don is officially in marketing at F5… That snark aside, Don's core point is that many of the better elements of the ILM concept have survived the gooey-glop-ulization of the term itself in the form of real solutions to actual customer problems. This takes us back to tiering.

Tiering works. Plain and simple. If you don't believe me, watch this short, powerful video of one of our customers, RHWL Architects, discussing what tiering did for them. He says something like, "All our storage problems disappeared overnight." Pretty powerful and not goo-gloppy at all. I happen to know that these folks did not follow any 21-step, 100-element service management style framework to solve 'all their storage problems overnight'. They implemented an ARX file virtualization appliance and tiered their storage – that's all.

How could anything be that simple? Ah, heck, we storage marketers have been complexifying and goo-glopping this whole issue for a decade or more. Somewhere along the line, somebody got the idea that if we make it hard, complex, and scary, then we can charge more money to fix it. (Maybe this is all really my fault after all.)

Look, it's really not complicated. Sort your files. Then put some of them on one array and some on another. The rest starts to take care of itself after that. Sort them any way you like: by age, size, owner, extension, type, whatever. Most people use 'last modified date' first and then get more granular later, but you will figure that out on your own as you get smarter about tiering.
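If you want to see just how unfancy the first cut can be, here's a minimal sketch of age-based sorting in plain Python. The mount points and the 90-day threshold are made up, and it's deliberately a dry run by default – in a real deployment a file-virtualization layer like the ARX moves the data behind a stable namespace instead:

```python
import shutil
import time
from pathlib import Path

DAYS = 90                      # illustrative threshold, not a recommendation
TIER1 = Path("/mnt/tier1")     # fast, expensive array (hypothetical mount)
TIER2 = Path("/mnt/tier2")     # slow, cheap array (hypothetical mount)

def tier_by_age(src: Path, dry_run: bool = True) -> None:
    """Sort files by last-modified date; cold ones go to the cheap tier."""
    cutoff = time.time() - DAYS * 86400
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        dest_root = TIER2 if f.stat().st_mtime < cutoff else TIER1
        dest = dest_root / f.relative_to(src)
        if dry_run:
            print(f"{f} -> {dest}")
        else:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), str(dest))

if __name__ == "__main__":
    tier_by_age(Path("/mnt/shared"))  # look before you move anything
```

Note what a naive script like this gets wrong, by the way: it changes the paths users see. Hiding that move behind an unchanged namespace is precisely what the file virtualization appliance handles for you.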

Tiering is sort of akin to contributing to your 401(k) (in the US, that's a retirement savings account). You don't need to think about it too hard. Just do it. Not doing it is really stupid, and anyone who tells you not to is an idiot or worse. You don't need a perfectly balanced investment strategy, and a complete understanding of your financial risk profile, before you sign up for payroll deductions. All you need to know is that it saves you money on taxes and gets you started on a nest egg. Once you start building up experience (and capital) you can add more finesse. Same here.

Just do it. Sort, Tier, Save. Simple – does not require arguments about definitions, and will not make you throw up in the back of your throat.

PS – Make sure you use a solid solution – like the F5 ARX – to do it. Don't use stubs. Don't use half-baked hybrid clunkers that go in and out of band (and rely on stubs even if they say they don't). Investing in a 401(k) works; letting Bernie Madoff manage it doesn't. Be smart. Do the right thing.

PPS – I am leaving it to you to figure out the WGAF acronym. Use your imagination.

PPPS – Admit it, you didn't know who Andy Azula was, did you? See, you always learn something reading this blog... maybe not about storage, but something...

More Stories By Kirby Wadsworth

Kirby is widely recognized throughout the storage industry for his expertise in marketing and business strategy.

A veteran of both startups and established storage vendors, Wadsworth was a founder of Storability and served as vice president of marketing prior to its sale to StorageTek. Earlier, as vice president and general manager of Compaq's Network Storage Services Business Unit, he envisioned and introduced Compaq's Enterprise Network Storage Architecture (ENSA), which is still widely recognized today.

As vice president of marketing for Digital's Storage Business Unit, Wadsworth launched Digital's StorageWorks product line into the open systems marketplace, and led the creation and introduction of the Enterprise Storage Array product family.
