By Fredrik Schmidt
September 4, 2014 01:00 PM EDT
It's an oft-cited FEMA statistic that 40 percent of small businesses never reopen after a disaster. Almost certainly contributing to this eye-popping failure rate is the fact that an estimated 74 percent of small to mid-sized businesses (SMBs) lack a disaster recovery plan, while 84 percent haven't bothered to obtain disaster insurance.
On an annual basis, flooding ranks as the most common and costly type of natural disaster in America. This hurricane season alone is expected to produce 10 named tropical storms and five hurricanes. Those SMBs that don't want their data to wash away with the rest of their possessions should consider revisiting the planning process, strategies, and cloud technologies that help businesses stay afloat in the event of a disaster.
Be Proactive - Planning Improves Reaction Time and Outcomes
Most businesses naively believe that they won't experience a true disaster, or that they'll simply cross that bridge when it happens. However, according to the Aberdeen Group, disaster is arguably inevitable: the average business experiences 2.26 downtime events per year, at an average cost of $163,674. Since not all businesses are created equal, and many are constrained by dollars and cents, each must calibrate its response strategy to be commensurate with its risk. A well-thought-out business continuity and disaster recovery plan that takes into account the following six steps can make all the difference between a business that is poised for continued success and one that is done in by something going awry:
Step 1: Identify mission-critical applications and data by performing a risk assessment and business impact analysis (BIA) as well as taking an asset inventory. This analysis will allow the business to calculate the potential impact of its most likely threats and prioritize their response accordingly.
Step 2: Estimate when operations should and will resume by establishing recovery time and recovery point objectives (RTO/RPO) and measuring against them through frequent testing, to ensure flexibility in the disaster recovery technology solution(s) the business chooses. Companies should keep in mind that, while accurate RTO/RPOs are easy to attain with specific systems, they are much harder to meet as data becomes distributed across more systems.
Step 3: Identify a backup worksite in the event the primary location becomes unsafe.
Step 4: Design and publish the business continuity plan/disaster recovery plan (BCP/DRP), making sure it is accessible to everyone from anywhere.
Step 5: Make sure employees know about the plan and are familiar with it. When disaster strikes, businesses with a clearly defined plan and consistent communication can more quickly cut through the chaos and get back on track.
Step 6: Businesses should regularly put their BCP/DRP to the test to ensure it meets current and future needs, and to expose areas for improvement. As cloud solutions mature, the plan must also account for migrating to, and living with, a more diverse private/public cloud portfolio. Ensure there is a go-to plan for expanding into capabilities such as high availability and in-cloud archiving.
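The prioritization and testing in Steps 1 and 2 can be sketched in a few lines of code. This is a minimal illustration, not a real BIA tool; the application names, hourly costs, and backup timestamps are all hypothetical, and a real assessment would pull this data from monitoring and backup systems:

```python
from datetime import datetime, timedelta

# Hypothetical asset inventory: each application with its assessed hourly
# downtime cost and its recovery targets (RTO/RPO), in hours.
apps = [
    {"name": "order-db",   "hourly_cost": 12_000, "rto_hours": 1,  "rpo_hours": 0.25},
    {"name": "email",      "hourly_cost": 1_500,  "rto_hours": 8,  "rpo_hours": 4},
    {"name": "file-share", "hourly_cost": 400,    "rto_hours": 24, "rpo_hours": 12},
]

# Step 1: rank applications by potential impact (cost if the RTO is fully used).
by_impact = sorted(apps, key=lambda a: a["hourly_cost"] * a["rto_hours"], reverse=True)

# Step 2: verify the last successful backup still satisfies each RPO.
last_backup = {
    "order-db":   datetime(2014, 9, 4, 12, 50),
    "email":      datetime(2014, 9, 4, 6, 0),
    "file-share": datetime(2014, 9, 3, 0, 0),
}

now = datetime(2014, 9, 4, 13, 0)
for app in by_impact:
    age = now - last_backup[app["name"]]
    ok = age <= timedelta(hours=app["rpo_hours"])
    print(f"{app['name']}: backup age {age}, RPO {'met' if ok else 'MISSED'}")
```

Run regularly (Step 6), a check like this exposes exactly which systems would blow through their recovery targets before a disaster forces the question.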
Disaster-Proofing Strategies & Technologies for Every Situation and Budget
First, let's get one thing out of the way: tape is not a recovery solution. With the dizzying pace of innovation, today's business continuity and disaster recovery solutions available in the cloud have proliferated almost beyond count, making it challenging for a business to home in on the right ones. So let's take a quick look at some solutions that can fit various needs and budgets.
In the SMB market, one strategy we are increasingly seeing is "community cloud" initiatives, which allow businesses to share risk and pool resources to stretch their dollars. For instance, through a mutual agreement, two companies in the same region could house one of their servers in the other's data center. Thus, rather than having to set up, manage, and pay for a duplicate data center, the companies are each afforded off-site protection. However, in the event of a regional disaster, such as a storm, both sites might be subject to identical threats. Therefore, this option should be reserved for businesses with shoestring budgets in areas that face low regional threat risk.
Cloud-Based Disaster Recovery as a Service (DRaaS)
The emergence of cloud-based DRaaS over the past few years has been a game-changer for businesses of all sizes and budgets. DRaaS solutions can enable complete recovery of data and entire production environments in minutes - not hours or days. On a predetermined schedule, a company's physical and/or virtual servers send images of their environments to the cloud of a DRaaS provider. In the event of an adverse event at the company's site, such as a storm or fire, a virtualized (and fully operational) version of the physical server can be rapidly spun up in the DRaaS cloud. Additionally, the affordability of DRaaS is attractive: it doesn't require capital investment, and even some appliance-based solutions are offered as Hardware as a Service.
Despite its abundant benefits, though, DRaaS still doesn't answer the age-old question of what a business needs to protect; that scoping must still be defined manually. Moreover, while advances in WAN optimization enable more rapid transfer of data to the cloud, bandwidth can still be a constraint.
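The DRaaS cycle described above - ship server images to a provider's cloud on a schedule, then spin up the newest image on failure - can be sketched as follows. All class and method names here are illustrative stand-ins, not any real provider's API:

```python
import hashlib

class DRaaSCloud:
    """Hypothetical stand-in for a DRaaS provider (names are illustrative)."""

    def __init__(self):
        self.images = []   # (sequence_no, checksum, payload)
        self._seq = 0

    def upload_image(self, payload: bytes) -> str:
        """Replicate one server image; return its integrity checksum."""
        checksum = hashlib.sha256(payload).hexdigest()
        self._seq += 1
        self.images.append((self._seq, checksum, payload))
        return checksum

    def spin_up_latest(self) -> str:
        """Recover by 'booting' a VM from the most recent replicated image."""
        seq, checksum, _ = max(self.images)   # highest sequence number wins
        return f"vm-from-{checksum[:8]}"

cloud = DRaaSCloud()
for night in range(3):                        # the predetermined schedule
    cloud.upload_image(f"server-image-night-{night}".encode())

# Disaster at the primary site: recover from the newest image in the cloud.
vm = cloud.spin_up_latest()
print(vm)
```

The bandwidth concern from the text shows up in `upload_image`: in practice each nightly payload is gigabytes, which is why providers lean on WAN optimization and incremental images.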
Hybrid Cloud Solutions
Hybrid cloud solutions are bursting onto the market and freeing applications from being tied to a particular piece of physical hardware. They enable applications to live both locally and in a cloud environment at the same time. With these hybrid solutions, businesses don't have to build their own disaster recovery sites and can instead designate a cloud provider to supply the infrastructure.
In turn, these cloud vendors can achieve economies of scale and performance rarely seen with individual operators. For a small number of application-centric situations, such as financial institutions, this won't be the best solution. But for almost any other business, there is little reason not to move disaster recovery to the cloud.
The disaster recovery landscape continues to evolve at a breakneck pace, and very soon businesses will have the ability to operate between many clouds and avoid being locked into just one. For instance, companies can now choose solutions that enable virtual and physical workloads to be recovered inside multiple cloud providers, and even orchestrate cascading failovers between different providers. This freedom essentially enables businesses to place their recovery eggs in multiple baskets. Looking further out, we can logically surmise that businesses will one day be able to move between cloud providers at the push of a button, failing over onto another cloud platform.
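The cascading-failover idea can be sketched as a simple priority chain: try each provider in order and recover on the first one that accepts the workload. The provider names and failure modes below are hypothetical; a real orchestrator would also handle health checks, DNS cutover, and data consistency:

```python
# Hypothetical sketch of cascading failover across cloud providers.

def cascade_failover(workload, providers):
    """providers: ordered list of (name, recover_fn); recover_fn may raise."""
    for name, recover in providers:
        try:
            return name, recover(workload)
        except RuntimeError as err:
            print(f"{name} unavailable ({err}); cascading to next provider")
    raise RuntimeError("all recovery targets exhausted")

def primary_cloud(workload):
    # Simulate the first-choice provider being down in a regional outage.
    raise RuntimeError("region-wide outage")

def secondary_cloud(workload):
    return f"{workload} running on secondary"

provider_chain = [("cloud-a", primary_cloud), ("cloud-b", secondary_cloud)]
site, status = cascade_failover("billing-app", provider_chain)
print(site, "->", status)
```

This is the "eggs in multiple baskets" property in code: the workload's fate no longer depends on any single provider in the chain staying up.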
Choosing the Right Solution
Businesses can narrow their search a bit further by choosing universal products that address their basic need around recovery (failover/failback) while being as agnostic to hardware and software as possible. At the same time, they should expand their horizons and look for application-centric solutions that enable them to focus on what matters: the actual data that drives their business. Any company that maintains a business continuity plan and implements the right cloud technology to meet their needs is well on the path to saving itself from becoming another morbid statistic the next time disaster strikes.