|By Jason Bloomberg||
|April 27, 2013 02:00 PM EDT||
In my role as a globetrotting Cloud consultant, I continue to be amazed at how many executives, both in IT and in the lines of business, still favor Private Clouds over Public. These managers are perfectly happy to pour money into newfangled data centers (sorry, “Private Clouds”), even though Amazon Web Services (AWS) and its brethren are reinventing the entire world of IT. Their reason? Sometimes they believe Private Clouds will save them money over the Public Cloud option. No such luck: Private Clouds are dreadfully expensive to build, staff, and manage, while Public Cloud services continue to fall in price. Others point to security as the problem. No again. OK, maybe Private Clouds will give us sufficient elasticity? Probably not. Go through all the arguments, however, and they’re still dead set on building that Private Cloud. What gives? The true reason for this stubbornness, of course, is the battle over control.
Thinking Like a Control Freak
IT executives in particular have always been control freaks. Our IT environments have been filled with fragile, flaky gear for so long that we figure the only way to run the IT shop is to control everything, grudgingly doling out bits of functionality and information to business users, but only when they ask nicely.
But this old mainframe reality has been fading for years now. The move to client/server to n-tier to the Internet and now to the Cloud are all exercises in increasingly distributed computing, with special emphasis on the distributed. As in distributed control.
The technology powers that be in the enterprise have been fighting this trend kicking and screaming, of course. But they’ve been fighting a losing battle. We saw the tide turn in the first-generation SOA days of the 2000s, when the IT establishment tried to implement SOA by buying ESBs, centralized pieces of middleware that purported to run the organization. But too many enterprises ended up with multiple ESBs and other pieces of middleware, since of course every manager in every department silo needs their own, because they all crave control. So the doomed SOA effort became a futile exercise in middleware-for-your-middleware, as the desired agility benefit sank beneath waves of rat’s-nest complexity.
What’s really going on here? Why do executives crave control so badly? Two reasons: risk mitigation and differentiation. If that piece of technology is outside your control, then perhaps bad things will happen: security breaches, regulatory compliance violations, or performance issues, to name the scariest. The problem is, maintaining control doesn’t necessarily reduce such risks. But if you’re responsible for managing the risks, then the natural reaction is to crave control.
Managers also believe that whatever it is they’re doing in their silo is special and different in some way. So there’s no way they can leverage that shared piece of middleware or shared SOA-based Services or multitenant Cloud. If they did, they wouldn’t be special any more. Having a differentiated offering is essential to any viable market strategy, after all. So clearly my technology has to be different from your technology!
Chaos vs. Control
The Cloud, as you might expect, shakes up both these considerations, because the Cloud separates responsibility from control in ways that we’ve never seen before. Every manager knows that these two priorities often go hand in hand, and under normal circumstances, we prefer them to go together, because the last thing we want is responsibility without control: the recipe for becoming the scapegoat, after all. With the Cloud, however, we can maintain control while delegating responsibility to the Cloud Service Provider (CSP). The CSP is responsible for ensuring the operational environment is working properly, including the automated management and user-driven provisioning and configuration that differentiate Cloud Computing from virtualized hosting. However, the CSP has delegated control over each customer environment to that customer.
By turning around this control vs. responsibility equation, we’ve placed the CSP into the scapegoat position. As long as we have an iron-clad Service-Level Agreement with our CSP, we can trust them to take responsibility for our operational environments, and if anything goes wrong, we can hold them responsible. But the control over those environments remains with us, the customer. Once enterprise executives realize this new world order, they will run as fast as they can away from building Private Clouds. After all, if you can maintain control while delegating responsibility, why would you ever want responsibility? Responsibility gets people fired, after all.
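To make the SLA point concrete: the uptime percentage a CSP commits to translates directly into a downtime budget, which is what you actually hold the provider to. The sketch below is purely illustrative arithmetic, not the terms of any particular CSP’s agreement.

```python
# Illustrative only: translate an SLA uptime percentage into the
# maximum downtime a CSP may accrue in a 30-day month before breaching it.

def allowed_downtime_minutes(sla_percent, days=30):
    """Minutes of downtime permitted per period at a given uptime SLA."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} min/month")
```

The gap between “three nines” and “four nines” is roughly 43 minutes versus 4 minutes a month, which is why the precise uptime figure in the SLA matters far more than the marketing phrase around it.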
Shifting responsibility to the CSP also helps to resolve the regulatory compliance roadblock that so many executives point to as the reason to select Private over Public Cloud. A properly responsible CSP combined with a sufficiently detailed SLA can go a long way toward indemnifying organizations against compliance breach risks. Remember, regulations rarely if ever specify how you must comply, only that you must. It’s up to you (and your lawyers) to decide on the how. As long as you’re diligent, conscientious, and follow established best practice, you’ve mitigated the bulk of your noncompliance risk. The CSPs are champing at the bit to take on this responsibility, so the smart risk mitigation strategy is shifting toward the Public Cloud.
The Price of Differentiation
The second threat to centralized control of IT is the business driver toward differentiation. Whatever our department or business is doing is special and different, and thus our infrastructure as well as our application environment must be unique as well. This principle is always true up to a point, which is why executives love to cling to it like a floating log in a vast sea of change. But just where that point falls continues to shift, and has shifted further than many people realize.
No enterprise would dream of calling a computer chip company and asking them to fabricate a custom processor for general business needs. What about a server? Unlikely, but perhaps. What about your core business applications, like finance, human resources, or customer relationship management (CRM)? Somewhat more likely. How about applications that provide capabilities that differentiate you in the marketplace? OK, now we’re talking.
In other words, virtually no enterprise has any rational motivation to specify custom infrastructure. Today’s Infrastructure-as-a-Service (IaaS) will do, especially considering how many configuration choices are available today: processor speed, operating system (as long as you want Windows or a flavor of Linux), memory, storage, and network are all user configurable and provisionable. Furthermore, there’s no reason to customize your dev, test, or deployment environments, so you might as well use a Platform-as-a-Service (PaaS) offering.
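Those user-configurable knobs amount to a simple provisioning request. The sketch below models such a request as plain data; the function and parameter names are hypothetical illustrations, not any real CSP’s API.

```python
# Hypothetical sketch of an IaaS provisioning request. The parameters
# mirror the user-configurable choices named above (processor, OS,
# memory, storage, network). No real cloud provider API is called.

ALLOWED_OS = {"windows", "ubuntu", "rhel", "amazon-linux"}  # illustrative list

def build_instance_request(cpu_cores, memory_gb, storage_gb, os_image, network="default"):
    """Validate and assemble a provisioning request as a plain dict."""
    if os_image not in ALLOWED_OS:
        raise ValueError(f"Unsupported OS image: {os_image}")
    if cpu_cores < 1 or memory_gb < 1 or storage_gb < 1:
        raise ValueError("All resource sizes must be at least 1")
    return {
        "cpu_cores": cpu_cores,
        "memory_gb": memory_gb,
        "storage_gb": storage_gb,
        "os_image": os_image,
        "network": network,
    }

request = build_instance_request(cpu_cores=4, memory_gb=16, storage_gb=100, os_image="ubuntu")
print(request)
```

The point of the sketch is how little there is to specify: everything an enterprise needs from infrastructure fits in a handful of commodity parameters, which is exactly why custom infrastructure has lost its rationale.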
But what about the applications? For non-strategic apps like CRM, you might as well use Software-as-a-Service (SaaS) like Salesforce. No executive in their right mind would say that their customer relationship needs are so unique that they should code their own CRM system. So, what about those strategic apps, the ones that offer our differentiated capabilities or information to our customers? If an existing SaaS app won’t do, well, that’s what PaaS and IaaS are for: building and hosting our custom apps for us, respectively.
Still not convinced? Consider the competitive risk: the risk of spending too much money on unnecessary capabilities. While your competition is leveraging the Cloud, focusing their efforts on their true strategic differentiation in the market and saving buckets of dough everywhere else, you’re busy pouring cash into building yet another widget that might as well be the same widget you can get much more cheaply in the Cloud. If doing something unique and different doesn’t help the bottom line, then you’re simply wasting money. The asteroid is almost here. Which would you rather be, a dinosaur or a mammal?
The ZapThink Take
Outsourcing commodity capabilities to the low-cost provider while focusing your strategic value-add on customized offerings is an oft-repeated pattern in the world of business, but it hadn’t really taken hold in the world of IT until the rise of Cloud Computing. The reason it’s taken so long for the techies is that we’ve never been able to separate control and responsibility in the past as well as we can today. Before the Cloud, if we wanted to outsource one, then the other went along for the ride. Any enterprise that outsourced their entire IT operation went down this road. Sure, your technology becomes somebody else’s responsibility, but you end up giving up control as well.
Perhaps the greatest challenge with maintaining such control with the Cloud is that it raises the stakes on governance, leading to what we call next-generation governance in our ZapThink 2020 Poster as well as my new book, The Agile Architecture Revolution. The Cloud’s automated self-service puts powerful tools in the hands of people across our organizations. Without a proactive, automated approach to governance, we risk running off the rails. Such issues are endemic in today’s technology environments: from Bring-Your-Own-Device (BYOD) challenges to SOA governance to rogue Clouds, we must learn how to maintain control without sacrificing the agility benefit such powerful technology dangles in front of us. But until we learn to delegate responsibility for the underlying technology to Public Cloud Providers, we’ll never be able to maintain that control cost-effectively while remaining competitive.
Image source: Diego David Garcia