
Data Protection Diaries: March 31 World Backup Day is Restore Data Test Time

By Greg Schulz

World Backup Day Generating Awareness About Data Protection

This World Backup Day piece is part of my ongoing Data Protection Diaries series of posts (www.dataprotectiondiaries.com) about trends, strategies, tools and best practices spanning applications, archiving, backup/restore, business continuance (BC), business resiliency (BR), cloud, data footprint reduction (DFR), security, servers, storage and virtualization among other related topics.

Different threat risks and reasons to protect your digital assets (data)

March 31 is World Backup Day, which means you should make sure that your data and digital assets (photos, videos, music or audio, scanned items) along with other digital documents are protected. Keep in mind that there are various reasons for protecting, preserving and serving your data, regardless of whether you are a consumer needing to protect home and personal information, or a large business, institution or government agency.

Why World Backup Day and Data Protection Focus

Being protected means making sure that there are copies of your documents, data, files, software tools, settings, configurations and other digital assets. These copies can be in different locations (home, office, on-site, off-site, in the cloud) as well as from various points in time or recovery point objectives (RPO) such as monthly, weekly, daily, hourly and so forth.

Having different copies from various times (e.g. your protection interval) gives you the ability to go back to a specific time to recover or restore lost, stolen, damaged, infected, erased, or accidentally over-written data. Having multiple copies is also a safeguard in case either the data, files, objects or items being backed up or protected are bad, or the copy is damaged, lost or stolen.
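As a minimal sketch of keeping multiple point-in-time copies, the following Python makes a timestamped copy of a file and prunes all but the newest few. The function name and retention policy are illustrative assumptions, not any particular backup product's behavior:

```python
import shutil
import time
from pathlib import Path

def backup_with_retention(src: Path, dest_dir: Path, keep: int = 5) -> Path:
    """Copy src into dest_dir under a timestamped name, pruning old copies.

    Keeping the newest `keep` copies gives several recovery points
    (protection intervals) to roll back to.
    """
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    copy = dest_dir / f"{src.stem}.{stamp}{src.suffix}"
    shutil.copy2(src, copy)  # copy2 also preserves timestamps/metadata
    # Prune: sort existing copies oldest-first and drop the excess.
    copies = sorted(dest_dir.glob(f"{src.stem}.*{src.suffix}"))
    for old in copies[:-keep]:
        old.unlink()
    return copy
```

A real schedule would run something like this hourly or daily; the point is that each run adds a recovery point rather than overwriting the only copy.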

Restore Test Time

While the focus of World Backup Day is to make sure that you are backing up or protecting your data and digital assets, it is also about verifying that what you think is being protected is actually occurring. It is also a time to make sure that what you know is being done can actually be used when needed (restore, recover, rebuild, reload, rollback among other things that start with R). This means testing that you can find the files, folders, volumes, objects or data items that were protected, and use those copies or backups to restore to a different place (you don’t want to create a disaster by over-writing your good data).

In addition to making sure that the data can be restored to a different place, go one more step and verify that the data can actually be used: can it be decrypted or unlocked, and have the security or other rights and access settings along with metadata been applied? While that might seem obvious, it is often the obvious that will bite you and cause problems, so take some time to test that all is working, not to mention get some practice doing restores.
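One way to practice that kind of restore test is to copy a backup into a scratch location (never over the live data) and compare it against a digest recorded at backup time. A hypothetical sketch in Python, with made-up function names:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file so a restored copy can be compared to a known-good digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_and_verify(backup: Path, scratch_dir: Path, expected_sha256: str) -> Path:
    """Restore to a scratch directory (never over the live copy) and
    confirm the bytes match the digest recorded at backup time."""
    scratch_dir.mkdir(parents=True, exist_ok=True)
    restored = scratch_dir / backup.name
    shutil.copy2(backup, restored)
    if sha256_of(restored) != expected_sha256:
        raise ValueError(f"restore of {backup.name} failed verification")
    return restored
```

A matching digest proves the bytes came back intact; checking that the file can be decrypted, opened and read with the right permissions is a separate, equally important step.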

Data Protection and Backup 3 2 1 Rule and Guide


Recently I did a piece based on my own experiences with data protection including Backup as well as Restore over at Spiceworks called My copies were corrupted: The 3-2-1 rule. For those not familiar, or as a reminder, 3 2 1 means have at least three copies or better yet, versions, stored on at least two different devices, systems, drives, media or mediums, with at least one copy in a different location from the primary or main copy.
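The 3 2 1 rule reduces to a simple check. This Python sketch (the Copy fields and the example device/location names are made up for illustration) audits a set of copies against the rule:

```python
from dataclasses import dataclass

@dataclass
class Copy:
    """One protected copy: the device/medium it lives on and its location."""
    device: str    # e.g. "laptop-ssd", "usb-drive", "cloud-bucket"
    location: str  # e.g. "home-office", "offsite", "cloud"

def meets_3_2_1(primary: Copy, copies: list[Copy]) -> bool:
    """True when primary plus copies satisfy the rule: at least three
    copies in total, on at least two different devices or media, with
    at least one copy in a different location from the primary."""
    everything = [primary] + copies
    enough_copies = len(everything) >= 3
    two_media = len({c.device for c in everything}) >= 2
    one_offsite = any(c.location != primary.location for c in copies)
    return enough_copies and two_media and one_offsite
```

For example, a laptop primary plus a local USB copy plus a cloud copy passes; a laptop plus a single USB copy does not.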

Following is an excerpt from the My copies were corrupted: The 3-2-1 rule piece:

Not long ago I had a situation where something happened to an XML file that I needed. I discovered it was corrupted, and I needed to do a quick restore.

“No worries,” I thought, “I’ll simply copy the most recent version that I had saved to my file server.” No such luck. That file had just been copied and was damaged as well.

“OK, no worries,” I thought. “That’s why I have a periodic backup copy.” It turns out that had worked flawlessly. Except there was a catch — it had backed up the damaged file. This meant that any and all other copies of the file were also damaged, as far back as when the problem occurred.

Read the full piece here.
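The failure in that excerpt is the key lesson: a corrupted file can be backed up and restored perfectly and still be unusable. For something like the XML file in the story, parsing the restored copy is a cheap usability test. A sketch using Python's standard library parser (the function name is illustrative):

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def restored_xml_is_usable(path: Path) -> bool:
    """A backup faithfully preserves corruption, so after restoring,
    try to actually parse the file before trusting it or overwriting
    anything good with it."""
    try:
        ET.parse(path)
        return True
    except ET.ParseError:
        return False
```

The same idea applies to any format: open the spreadsheet, play the video, mount the image. If the restored copy cannot be used, keep walking back through older versions until one can.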

Backup and Data Protection Walking the Talk

Yes, I eat my own dog food, meaning that I practice what I talk about (e.g. walking the talk). I leverage not just a 3 2 1 approach but more of a 4 3 2 1 hybrid, which means different protection intervals, various retentions and frequencies, not all data treated the same, using local disk, removable disk that goes off-site, as well as cloud. Candidly, I also test more often by accident, using the local, removable and cloud copies when I accidentally delete something, or save the wrong version.

Some of my data and applications are protected throughout the day, others on set schedules that vary from hours to days to weeks to months or more. Yes, some of my data, such as large videos or other static items, does not change, so why back it up or protect it every day, week or month? I also align the type of protection, frequency and retention to meet different threat risks, as well as encrypt data. Part of actually testing and using the restores or recoveries is also determining what certificates or settings are missing, as well as where opportunities exist or are needed to enhance data protection.

Closing comments (for now)

Take some time to learn more about data protection, including how you can improve or modernize while rethinking what to protect, when, where, why, how and with what.

In addition to having copies from different points in time and extra copies in various locations, also make sure that they are secured or encrypted AND make sure to protect your encryption keys. After all, try to find a digital locksmith to unlock your data who is not working for a government agency when you need to get access to your data ;)...

Learn more about data protection including Backup/Restore at www.dataprotectiondiaries.com where there is a collection of related posts and presentations.

Also check out the collection of technology and vendor / product neutral data protection and backup/restore content at BackupU (disclosure: sponsored by Dell Data Protection Software) that includes various webinars and Google+ hangout sessions that I have been involved with.

Watch for more data protection conversations about related trends, themes, technologies, techniques and perspectives in my ongoing data protection diaries discussions, as well as read more about Backup and other related items at www.dataprotectiondiaries.com.

Ok, nuff said

Cheers
Gs

Greg Schulz - Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2014 StorageIO All Rights Reserved


More Stories By Greg Schulz

Greg Schulz is founder of the Server and StorageIO (StorageIO) Group, an IT industry analyst and consultancy firm. Greg has worked with various server operating systems along with storage and networking software tools, hardware and services. Greg has worked as a programmer, systems administrator, disaster recovery consultant, and storage and capacity planner for various IT organizations. He has worked for various vendors before joining an industry analyst firm and later forming StorageIO.

In addition to his analyst and consulting research duties, Schulz has published over a thousand articles, tips, reports and white papers and is a sought-after speaker at events around the world. Greg is also author of the books Resilient Storage Networks (Elsevier) and The Green and Virtual Data Center (CRC). His blog is at www.storageioblog.com and he can also be found on twitter @storageio.
