By Jerry Melnick
April 21, 2014 11:00 AM EDT
Enterprises are moving more and more applications to the cloud. Gartner predicts that by 2016 the bulk of new IT spending will be for cloud computing platforms and applications, and that nearly half of large enterprises will have cloud deployments by the end of 2017.1
The far-reaching impact of cloud computing is summarized in a recent McKinsey report on disruptive technologies: "Cloud technology has the potential to improve productivity across $3 trillion in global enterprise IT spending, as well as enabling the creation of new online products and services for billions of consumers and millions of businesses alike."2
For many organizations, moving applications that can tolerate brief periods of downtime to the cloud is a straightforward decision with clear benefits. However, concerns about how to provide high availability and disaster protection in the cloud may make this decision more difficult for business-critical applications such as SQL, SAP, and Exchange. Understanding the facts about HA and DR in the cloud can help you make informed decisions about moving applications to the cloud, while ensuring the important business operations that depend on them are protected from downtime and data loss.
Fact #1: You need high availability protection in a cloud.
Do not assume that your cloud environment provides high availability protection unless you have specifically configured it for HA. According to a recent study, "The average unavailability of cloud services is 10 hours per year or more, while the average availability is estimated to be 99.9%, far less than the expected availability of business-critical applications."3 That is the equivalent of more than a full business day of downtime. In 2013, Microsoft Windows Azure, Google, and Amazon Web Services all experienced service interruptions or outages ranging from 4 minutes to several hours.4
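To put those percentages in perspective, the quick arithmetic below (a simple Python illustration, not a measurement of any particular provider) converts an availability figure into expected downtime per year:

# Convert an availability percentage into expected downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_hours(availability_pct):
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(round(downtime_hours(99.9), 1))    # ~8.8 hours/year at "three nines"
print(round(downtime_hours(99.99), 2))   # ~0.88 hours/year at "four nines"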
For business-critical applications, the redundancy that you get with some cloud solutions, such as Windows Azure, is not enough. When you consider the cost of a minute of downtime for applications such as SQL Server, Oracle, and SAP, which may run many of your key business processes, it becomes clear that you need true high availability and disaster recovery protection. You need to ensure that end users have immediate access to data and applications in the event of a local failure, a regional disaster, or anything in between.
However, the traditional way of providing high availability protection is to build a cluster using two identical servers - a primary server and a standby server - with shared (typically SAN) storage. If the primary server fails, application operation is moved to the standby server, which has immediate access to the same storage. The problem is that SANs are not only expensive to buy, manage, and maintain; they are simply not an option in public cloud offerings. There are, however, high availability solutions that can be used in a cloud and do not require a SAN.
Fact #2: You can build a cluster in a cloud.
Even though you cannot have a SAN in a cloud, you can build a cluster for high availability protection. In a Windows cloud, you simply add SANLess cluster software to your Windows Server Failover Cluster (WSFC). The SANLess software uses real-time, block-level replication to keep local storage in two geographic regions of the cloud synchronized. If there is an outage, application operation is automatically moved to the remote instance, which has immediate access to current data. The synchronized storage looks to the WSFC like traditional shared storage, so there is no added complexity and no specialized skills are needed to build or manage a SANLess cluster. In fact, a SANLess cluster is easy to manage and has the added benefit of eliminating the SAN as a single point of failure. SANLess clusters also provide complete configuration flexibility, allowing you to replicate between physical, virtual, cloud, and hybrid cloud environments, as well as between SAN and SANLess clusters.
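The mechanism is easier to picture with a sketch. The toy Python below shows the general idea behind synchronous block-level replication (every write is applied locally and shipped to the standby before it is acknowledged); it is illustrative only, and the class and method names are invented rather than any vendor's actual API:

# Toy model of synchronous block-level replication (illustrative only).
class BlockDevice:
    """A pretend block device backed by a dict of block_id -> bytes."""
    def __init__(self):
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data

class SynchronousReplicator:
    """Mirrors every block write from the primary volume to the replica."""
    def __init__(self, primary, replica):
        self.primary = primary
        self.replica = replica

    def write(self, block_id, data):
        self.primary.write(block_id, data)   # apply the write locally
        self.replica.write(block_id, data)   # ship the same block to the standby
        # The write is acknowledged only after both copies succeed,
        # which is what keeps the recovery point near zero.

primary, standby = BlockDevice(), BlockDevice()
volume = SynchronousReplicator(primary, standby)
volume.write(42, b"committed transaction")
assert standby.blocks[42] == primary.blocks[42]   # the standby is always current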
Fact #3: You can have geographically separated nodes for DR in a cloud.
While providing high availability within the cloud will protect you from normal hardware failures and other unexpected outages within an availability zone (Amazon) or fault domain (Azure), you still need to protect against regional disasters. The easiest solution is to configure a multisite (geographically separated) cluster.
One effective method is to build a SANLess cluster within a cloud and extend it for disaster recovery by adding one or more nodes in an alternate data center or a different geographic region within the cloud. Unlike traditional clusters, which require identical hardware and software on every node, a SANLess cluster allows you to mix physical, cloud, and hybrid cloud configurations. The benefits of a DR configuration are clear. For example, simply adding a third, geographically separated node to your SANLess cluster in a Windows Azure cloud can give you a recovery point objective (RPO) of near-zero data loss and a recovery time objective (RTO) of about one minute.
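As a rough sketch of how failover ordering works in such a multisite configuration (the node and region names below are hypothetical), a healthy standby in the same region is preferred, and the geographically separated DR node is used only when the entire primary region is down:

# Illustrative failover-target selection for a three-node multisite cluster.
from typing import List, Optional

class Node:
    def __init__(self, name, region, healthy):
        self.name, self.region, self.healthy = name, region, healthy

def pick_failover_target(nodes: List[Node], failed: Node) -> Optional[Node]:
    """Prefer a healthy node in the same region; fall back to the DR region."""
    candidates = [n for n in nodes if n.healthy and n is not failed]
    same_region = [n for n in candidates if n.region == failed.region]
    return (same_region or candidates or [None])[0]

cluster = [
    Node("node-a", "region-east", healthy=False),   # failed primary
    Node("node-b", "region-east", healthy=True),    # local standby
    Node("node-dr", "region-west", healthy=True),   # geographically separated DR node
]
print(pick_failover_target(cluster, cluster[0]).name)   # -> node-b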
Fact #4: You can create a cluster that mixes cloud and on-premises nodes.
You can use your on-premises data center as your primary location with a failover cluster to provide high availability protection and use the cloud as your hot standby DR site. This is a very cost-effective alternative to building out your own DR site, or renting rack space in a business continuity facility. In this case, the on-premises servers can be your choice of traditional SAN-based clusters, SANLess clusters, or even single servers not currently participating in a cluster.
The objective of a "hot" standby DR site is to have standby servers up and running in the DR site as quickly as possible, with access to a copy of the most recent application data. In the event of a disaster, recovery is fast and, in many configurations, automatic. A multisite cluster is an effective way to implement a hot standby DR site: the SANLess cluster's replication keeps the standby node's local copy of the application data up to date. In the event of a forecasted disaster, such as a storm or a flood, applications can be moved to the cloud before the disaster strikes. In the event of an unexpected disaster, applications can be recovered manually or, in some cases, automatically, depending on the quorum configuration. This mix of cloud and on-premises nodes gives you an excellent RTO and RPO with minimal investment in infrastructure.
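Whether that recovery happens automatically comes down to quorum. The simplified Python below shows the basic node-majority rule; an actual WSFC quorum configuration (witness types, dynamic quorum) offers more options than this sketch suggests:

# Node-majority quorum: a surviving partition may bring resources online
# automatically only if it holds a strict majority of the votes.
def has_quorum(surviving_votes, total_votes):
    return surviving_votes > total_votes // 2

# Example: two on-premises nodes plus one cloud DR node = 3 votes.
print(has_quorum(2, 3))   # True  -> the on-premises pair keeps running automatically
print(has_quorum(1, 3))   # False -> the lone DR node waits for manual recovery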
Fact #5: HA and DR in a cloud can be easy and highly cost-effective.
If you choose SANLess clustering software that provides an intuitive configuration interface, you can create a standard WSFC in a cloud in minutes without specialized skills. A SANLess cluster can also deliver significant cost savings in several ways. First, in a Microsoft SQL Server environment, a SANLess cluster can give you high availability with SQL Server Standard Edition licenses, without requiring an upgrade to the far more expensive SQL Server Enterprise Edition.
Second, you can realize hundreds of thousands of dollars in savings with a SANLess cluster by eliminating the total cost of ownership (TCO) associated with a SAN: the SAN hardware acquisition costs; the power, cooling, and data center floor space costs; and the ongoing labor cost of specialized SAN administration.
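A back-of-the-envelope comparison makes the point; every figure below is a placeholder rather than a quoted price, so substitute your own hardware, facilities, labor, and licensing numbers:

# Hypothetical 3-year TCO comparison; all dollar amounts are placeholders.
san_costs = {
    "hardware_acquisition": 150_000,   # array, switches, HBAs (one-time)
    "power_cooling_floor":   20_000,   # per year
    "san_admin_labor":       60_000,   # per year
}
sanless_software_per_node = 10_000     # per year, per node (placeholder)
years, nodes = 3, 2

san_tco = san_costs["hardware_acquisition"] + years * (
    san_costs["power_cooling_floor"] + san_costs["san_admin_labor"]
)
sanless_tco = years * nodes * sanless_software_per_node

print(f"3-year SAN TCO:     ${san_tco:,}")       # $390,000 with these placeholders
print(f"3-year SANLess TCO: ${sanless_tco:,}")   # $60,000 with these placeholders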
If you are thinking about moving your important applications to the cloud, you need to consider how you will protect those applications from downtime and data loss. While traditional SAN-based clusters are not possible in these environments, SANLess clusters can provide an easy, cost-efficient alternative. These clusters not only provide high availability protection, but also enable significantly greater configuration flexibility and potentially dramatic savings in both licensing costs and SAN TCO.
2 Manyika, James, Michael Chui, et al., "Disruptive Technologies: Advances That Will Transform Life, Business, and the Global Economy," McKinsey Global Institute, May 2013.
3 Whittaker, Josh, "Amazon Web Services Suffers Outage, Takes Out Vine, Instagram, Others with It," ZDNet, August 26, 2013.
4 Mackay, Martin, "Downtime Report: Top Ten Outages in 2013," Business2Community.com, December 2013.