By Don MacVittie
May 11, 2010 03:26 PM EDT
I had just finished writing this blog and was about to post it, taking that moment to go do other things before one final read-through, and guess what? I saw this article on Dell’s site cross my Twitter account. I’ll blame the fact that I’m the shiny new Technical Marketing Manager as the reason I did not know this was going on. I thought about just ditching the blog at that point… After all, the article linked to on Dell’s site is much more in-depth. But the blog was done. And I think it still adds value in the generic replication sense. So I leave it as-is. Though I suggest that if you’re interested in speeding replications between EqualLogic boxes, or in how F5 specifically speeds up massive data transfers, the article linked above is worth a read also.
The geeks over at Dell recently did a Tech Report on replication between two Dell EqualLogic Groups, with the claim that it didn’t matter where those two groups were geographically located. Like their claim in the same PDF that what they’re doing – which by description sounds suspiciously like everyone else’s snapshotting technology – is replication, I congenially came to the conclusion that I disagreed with their contention that it didn’t matter where the two groups were. Don’t get me wrong, this is an excellent tech article, and if you’re a customer (or considering becoming one), it’s worth a read. It just hit a couple of items that felt more like marketing than tech to me.
Remote replication – and remote snapshotting since they share the fact that they ship a lot of data over the wire – has always been bedeviled by the simple truth of the Internet. Latency, retransmits, bandwidth, and security all interfere with what would be a simple procedure on the LAN.
Of course, for every problem in technology there are people to solve it, and speeding remote replication is not a new problem, so there is a large pool of vendors out there addressing it. Chances are pretty good that at least one of your SAN/NAS/backup vendors solves it between their own equipment. That’s the age-old struggle in storage, though: homogeneous or heterogeneous storage infrastructure. With homogeneous you get undeniable benefits like faster replication, but only between select equipment, which is rarely ideal.
For those that do not know, the key points that interfere with replication are actually pretty straightforward. They have been known quantities for a rather long time (contrary to some of the hype I see out there now, particularly in the “cloud gateway” category) and have been resolved by a decent number of organizations. They are as follows…
- TCP Chattiness
- Packet loss/latency
- Data Volume in replication/snapshot
- Protocol overhead
The first point is simple acknowledgment that TCP chatters a lot maintaining a connection, and some of that overhead doesn’t necessarily have to cross the wire.
The second point covers the perennial problems of long-distance computer communications. TCP is designed to deal with them, but its solutions create more chattiness and redundant transmissions. The worse the connection is, the more overhead is introduced… Which of course makes the connection appear even worse.
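To put rough numbers on how latency and loss throttle a single TCP stream, here is a sketch using the well-known Mathis approximation of steady-state TCP throughput. The function name and the example link figures are mine, not anything from the tech report:

```python
import math

# Mathis et al. approximation of steady-state TCP throughput:
#   throughput <= (MSS / RTT) * (1 / sqrt(loss_rate))
def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Upper bound on a single TCP flow's throughput, in bits/second."""
    return (mss_bytes * 8 / rtt_s) * (1 / math.sqrt(loss_rate))

# Same 0.1% loss rate, LAN-like RTT vs. a cross-country WAN RTT:
lan = tcp_throughput_bps(1460, 0.001, 0.001)   # 1 ms RTT
wan = tcp_throughput_bps(1460, 0.080, 0.001)   # 80 ms RTT
print(f"LAN bound: {lan/1e6:.0f} Mbps, WAN bound: {wan/1e6:.1f} Mbps")
```

Because throughput scales inversely with RTT, the 80x jump in round-trip time costs the flow 80x of its ceiling – which is exactly why the same replication job that flies on the LAN crawls over the WAN.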
Data volume inevitably plagues remote data transfers. There are two sides to this issue that you have to contend with: the first is the window of time it takes to complete the copy, the second is the load on the connection into and out of the target data centers. Most organizations cannot have replication or snapshots taking up 100% – or even 50% – of their bandwidth.
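A little back-of-the-envelope arithmetic shows how quickly those two sides collide. This sketch uses made-up numbers (a 2 TB delta, a 100 Mbps link, a 50% utilization cap) purely for illustration:

```python
def replication_hours(data_gb, link_mbps, allowed_fraction=0.5, reduction_ratio=1.0):
    """Hours to ship a replica, given a bandwidth cap and optional data reduction."""
    bits = data_gb * 8e9 / reduction_ratio          # payload after reduction
    usable_bps = link_mbps * 1e6 * allowed_fraction # bandwidth you're allowed to use
    return bits / usable_bps / 3600

# 2 TB of changed data over a 100 Mbps link, capped at 50% utilization:
print(f"{replication_hours(2000, 100):.1f} h")                      # ~88.9 h
print(f"{replication_hours(2000, 100, reduction_ratio=10):.1f} h")  # ~8.9 h with 10:1 reduction
```

At nearly 89 hours, the raw transfer blows through any nightly window; only by shrinking the data on the wire does it even get close to fitting.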
Of course, the app layer protocol that you’re using – NDMP, CIFS, NFS, iSCSI, any of the half-dozen others – has overhead also. That overhead ranges from not a lot to outrageous, and slows down the actual copying of data.
And perhaps the most painful of all: you can’t ship that data over the Internet in the clear. If you have permanently encrypted links, this is no problem; if you have something like our iSessions that creates a secure tunnel, it’s not much of a problem either. You just have to know what you need encrypted, and have a way to get it moved.
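One practical consequence of encrypting the stream is that good ciphertext is indistinguishable from random bytes, so any compression or dedupe has to happen before encryption. A quick sketch makes the point – note that `os.urandom` here merely stands in for ciphertext, since I’m not pulling in a real cipher:

```python
import os
import zlib

payload = b"replicated block pattern " * 4000  # highly redundant replica data

# Reduce first: redundant plaintext compresses dramatically.
compressed_first = zlib.compress(payload)

# Encrypt first (random bytes stand in for ciphertext): compression gains nothing.
encrypted_first = zlib.compress(os.urandom(len(payload)))

print(len(payload), len(compressed_first), len(encrypted_first))
```

The redundant plaintext shrinks to a fraction of its size, while the random-looking stand-in for ciphertext actually grows slightly – which is why the reduction step has to sit on the plaintext side of the tunnel.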
So, in short, you need answers to all of these issues. TCP chattiness can be reduced by devices that essentially proxy unnecessary ACK sequences locally. Packet loss and latency cannot easily be handled, but there is some hope in a symmetric solution or (far less optimally) in tweaking TCP settings. Data volume can be handled by de-duplication… Though there are two flavors of dedupe: the kind used by storage vendors, which may or may not require rehydration prior to transmission for backup/replication/snapshot, and the kind used by network devices to transmit less data. It is a much simpler task, technologically, to pull repeated bit patterns out of a TCP stream and send a key for rehydration than it is to replace a block on-disk and store that key for replacement… Well, potentially forever. The same types of things that reduce TCP chattiness will work with the application protocols too if needed; they just have to be developed separately. And security is listed last because it stands outside the others. All of the above can be done through an encrypted tunnel easily enough. As long as all of the above is done before the encryption takes place ;-).
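The wire-side flavor of dedupe can be sketched in a few lines: split the stream into chunks, ship each unique chunk once, and send a short key for every repeat, which the far side rehydrates from chunks it has already seen. This is a naive fixed-size-chunk illustration of the idea, not any vendor’s actual implementation:

```python
import hashlib

def dedupe_stream(data, chunk_size=4096):
    """Naive fixed-size-chunk dedupe: ship each unique chunk once, keys thereafter."""
    seen = {}   # chunk hash -> chunk index (the "key" the far side stores)
    wire = []   # what actually crosses the WAN
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            wire.append(("ref", seen[digest]))  # send the key, not the bytes
        else:
            seen[digest] = len(seen)
            wire.append(("data", chunk))
    return wire

def rehydrate(wire):
    """Rebuild the original stream from unique chunks plus reference keys."""
    chunks, out = [], []
    for kind, val in wire:
        if kind == "data":
            chunks.append(val)
            out.append(val)
        else:
            out.append(chunks[val])
    return b"".join(out)

stream = b"A" * 4096 * 8 + b"B" * 4096 * 4  # 12 chunks, only 2 distinct
wire = dedupe_stream(stream)
assert rehydrate(wire) == stream
unique = sum(1 for kind, _ in wire if kind == "data")
print(f"{len(wire)} chunks on the wire, {unique} carried data")
```

Because the network device only has to remember keys for the life of the stream – not forever, like on-disk dedupe – this side of the problem really is the technologically simpler one.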
You need a replication scheme that will provide timely backups, in a consistent manner, securely, with minimal impact to the machines being backed up. Whether you call your snapshots replicas or not, whether you still call it “nightly backup” or “replication”, the requirements are the same. Over the long haul solving the issues above will reduce your backup window, improve the integrity of your restore volumes, and generally allow you to sleep better at night.
While writing this blog, I read George Crump’s latest InformationWeek post, where he mentions shifting the focus back to the backups and away from the restores. I agree; he’s hit the nail on the head. Make a solid backup, then worry about restoring it, for you’ll have nothing to restore without a workable, reliable, and timely backup/replica/snapshot.