By Roger Strukhoff
February 14, 2014 06:30 PM EST
Is the pendulum swinging back toward local datacenters running private cloud? Certainly some decisions I'm getting involved in indicate it is in my little part of the world.
But first let me ask: did the pendulum ever really swing toward public cloud? A survey of SMBs and enterprises (more than 1,000 employees) conducted in September 2013 by CloudPassage found 21% deploying private cloud only and 13% deploying public cloud only, with 48% deploying both. The survey also found that smaller businesses used public cloud more frequently.
Getting hard numbers on public vs. private cloud adoption is nearly impossible. So, given the one metric above and a few years' experience writing and researching on the topic, I'll provide my take:
Initial enthusiasm held that public cloud would take over the world. The vision of Nicholas Carr's The Big Switch prevailed in cloud-related articles and speeches, in which computing resources were delivered and metered like water or electricity. This school of thought held that there was no such thing as private cloud: if it was on-premises, it wasn't cloud.
VMware's success in virtualizing a couple billion dollars' worth of datacenters per year refuted the Big Switch vision. Even though its proponents have always been careful to say that virtualization alone is not cloud, it sure feels that way when your local resources are suddenly much more productive and running much hotter.
The era of hybrid cloud ensued.
Thesis, antithesis, synthesis. Kant (but not Hegel) would be proud.
I've long thought it would be very helpful if Jeff Bezos released revenue figures for Amazon's public-cloud offerings. Surely he considers the secrecy of this information part of his competitive advantage.
More to the point, as a potential customer I'd like to know Amazon's revenues and expenses, difficult as they may be to determine. Because I'm getting the sneaking suspicion that not only should cost savings not be the reason to move to public cloud but, in fact, that no such savings exist.
Total upfront cost aside, I made the opex-versus-capex argument in favor of public cloud many times in the early days, a few years ago.
But now I'm tasked with building a cloud for a startup with ambitious goals. The firm has capital and, in fact, wants to spend it on capital expenditure because capex is tangible and easily funded. In running the numbers, I've found that we may be able to build and operate our datacenter locally for less money over three years than it would cost simply to buy public cloud resources.
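The shape of that build-vs-buy comparison can be sketched in a few lines. Every figure below is a hypothetical placeholder, not our actual numbers (which I'm not disclosing here): the point is only that a one-time capital expense plus modest operations can undercut a recurring public-cloud bill over a multi-year horizon.

```python
# Back-of-the-envelope 3-year TCO comparison.
# All dollar figures are hypothetical placeholders, not real quotes.

def private_cloud_tco(capex, annual_opex, years):
    """One-time capital expense plus recurring operations cost."""
    return capex + annual_opex * years

def public_cloud_tco(monthly_spend, years):
    """Pure pay-as-you-go: no upfront investment, all recurring."""
    return monthly_spend * 12 * years

YEARS = 3
private = private_cloud_tco(capex=250_000, annual_opex=60_000, years=YEARS)  # hypothetical
public = public_cloud_tco(monthly_spend=15_000, years=YEARS)                 # hypothetical

print(f"Private cloud over {YEARS} years: ${private:,}")
print(f"Public cloud over {YEARS} years:  ${public:,}")
print("Private is cheaper" if private < public else "Public is cheaper")
```

With these placeholder inputs the private build comes out ahead ($430,000 vs. $540,000), but the conclusion flips easily: stretch the horizon, change utilization, or add a hardware refresh in year three and public cloud can win. The exercise is worth running with your own numbers, not mine.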
We'll also have the additional benefits of stimulating a local economy (in rural Northern Illinois) that needs it badly, while tapping into a large fiber-optic network that was just laid down throughout the region as part of an $85 million government program. We have all the brick-and-mortar, construction expertise, and bandwidth we need here. We can provide jobs every step of the way, including once we're up and running.
I'm evaluating a whole bunch (for lack of a more elegant, precise term) of alternatives for PaaS (to create the software that will drive the datacenter), and for a private-cloud infrastructure that makes performance and economic sense.
We can do this with just a single rack of servers to start - I'm not talking about recreating an NSA or Google site. But we can blast enough cyberkinetic energy into the tubes of the Internet to serve a very ambitious business plan with our own private cloud. If we need more, we have plenty of room, bandwidth (as I said already), and electricity.
And if we run short of processing at any step of the way, I'll just give Jeff Bezos a call and see if he has some extra-large instances to sell to me on an occasional basis.