By Michael Bushong
April 24, 2014 11:30 AM EDT
From a cost perspective, the networking dialogue is dominated by CapEx. Acquisition costs for new networking gear have historically been tied to hardware, and despite the relatively recent rise of bare metal switching, networking hardware remains a high-stakes business. But SDN is changing this dynamic in potentially significant ways.
The first point to clarify when talking about CapEx is that CapEx does not necessarily mean hardware (at least not in the way most people use the term). While CapEx has a strict financial definition, in the networking industry it has become shorthand for procurement costs. Because networking solutions have been predominantly monetized through hardware, we associate procurement costs with hardware, but that is changing.
The fact that the ’S’ in SDN stands for software is reason enough for people to look beyond the chassis. But the reality is that while vendors have monetized the hardware, the value has been shifting to the software side for more than a decade. So long as everyone was selling hardware, it didn’t much matter whether the cost was tied to the hardware or the software, so collectively we have been a bit lazy about settling on a deliberate pricing mix.
More recently, however, there have been additional solutions that are offered entirely through software. With virtual networking devices, for example, there is no physical hardware (unless you count the servers and the network that connects the servers). A common sales tactic for these types of solutions is to point out how expensive physical solutions are. Why pay for all that sheet metal when you can get the same functionality in a virtual form factor? Of course, you are not really paying for the sheet metal; your check also pays for the software and all the features that go into that sheet metal. But the argument is pretty compelling.
The point here is that the only thing that really matters is how much you pay for the whole solution. Whether the price is affixed to hardware or software is an accounting detail – important for some people, but not really the most important thing for the majority of buyers. Rather than calling it CapEx, we ought to be referring more broadly to procurement or acquisition costs. All in, Solution A costs X dollars to bring in house, and Solution B costs Y dollars.
This would certainly simplify the conversation some. But even then, it isn’t all about procurement costs anymore either.
Depending on the solution, the procurement costs account for roughly one-third of the total cost of ownership. The remaining two-thirds of the cost is ongoing operating expense (power, cooling, space, management, support, and so on). The models here for most solutions start to get pretty squishy. While we can fairly formulaically determine things like power, space, and support, when it comes to estimating the cost of managing a device, the models are so dependent on uncontrollable things that they border on useless. And even when the models are sound, most companies have not sufficiently instrumented their network operations to really know what they are spending.
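The one-third/two-thirds split above can be made concrete with a back-of-the-envelope model. The sketch below uses entirely made-up numbers (the solution names, prices, and opex figures are illustrative, not vendor data) to show how a solution with lower procurement cost can still carry a higher total cost of ownership once ongoing expenses are included:

```python
# Illustrative total-cost-of-ownership sketch. All figures are invented
# placeholders; the point is the shape of the math, not the numbers.

def total_cost_of_ownership(procurement, annual_opex, years):
    """Sum the up-front acquisition cost and ongoing operating expense."""
    return procurement + annual_opex * years

# Hypothetical solutions: A is hardware-heavy up front, B shifts cost
# into ongoing software/support fees.
solution_a = total_cost_of_ownership(procurement=300_000, annual_opex=40_000, years=5)
solution_b = total_cost_of_ownership(procurement=150_000, annual_opex=75_000, years=5)

print(f"Solution A 5-year TCO: ${solution_a:,}")  # $500,000
print(f"Solution B 5-year TCO: ${solution_b:,}")  # $525,000
```

Solution B looks half as expensive on the purchase order, yet costs more over five years, which is exactly why comparing only procurement cost is misleading.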
But just because it is difficult to model OpEx does not mean that network teams should ignore it.
If there is one thing that the gaming industry has taught us, it is that there are all kinds of creative ways to separate someone from their money. In the early days of video games, 100% of the cost was procurement cost. After you bought the install media, you had paid everything you were ever going to pay. Before long, some of the more popular games figured out that they could lower initial costs (make the barrier to entry lower) and then charge for ongoing use through subscriptions.
As the networking world adjusts the pricing mix – associating more of the cost with the software – we should expect that charge models will mirror what we have seen on the consumer side. It is not a big stretch (and in fact already happening) to see massive up-front hardware costs replaced with more palatable hardware pricing combined with either higher software or potentially support costs. This has the dual benefit of making it easier for customers to select a vendor, and creating annuities for said vendor.
But the evolution of game pricing models did not end with subscriptions.
Anyone who has gotten sucked into the hell that is Candy Crush is already well aware of in-app purchases. The initial game is free, but if you want a special advantage or to unlock a level, you can make an in-app purchase. These purchases are cleverly priced so that you feel like you are hardly spending anything: it’s less than a dollar, so why not go ahead and get that spotted donut thingy? Of course, by the time you add up all those “just a dollar” moments, you end up paying far more than you ever would have up front.
The magic of this type of pricing is that most of this is not really known up front. When you first get Candy Crush, you don’t really think you are going to buy the special extras. And Candy Crush doesn’t tell you that the levels get progressively harder to the point that they are nigh impossible without a little extra help.
Before you write this off as not applicable to networking, consider a few points.
First, despite the huge open source push, there are still a lot of companies pursuing commercial-grade versions of otherwise-free software. Sure, you might buy into the open source controller, but if you need the networking version of the spotted donut thingy, what do you do? This is essentially the networking equivalent of the in-app purchase. Call it the in-arch purchase. Once you buy into a particular architecture, the switching costs are prohibitively high. If you have to pay more for the commercial software, can you really say no?
Second, some of the tiered pricing models that are taking root make it more difficult to accurately model ongoing license costs. If you are not thinking about how the costs will scale with the number of ports, users, VMs, or whatever, you might find out down the road that your solution is contributing more ongoing costs than anticipated. For example, buying one VM from Amazon might seem easy enough, but what if you need thousands? It doesn’t stay cheap forever.
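To see why tiered pricing is hard to model, consider a sketch of a hypothetical per-VM license schedule (the tier sizes and prices are invented for illustration; no real vendor's price list is implied). The per-unit discount at higher tiers shrinks far more slowly than the volume grows, so the bill at scale can dwarf what the first few units suggested:

```python
# Hypothetical tiered per-VM monthly license pricing (invented numbers).
# Each tier is (units covered, price per unit); None means "unbounded".

TIERS = [
    (100, 10.00),   # first 100 VMs
    (900, 8.00),    # next 900 VMs
    (None, 6.50),   # every VM beyond 1,000
]

def monthly_license_cost(vms):
    """Walk the tiers, charging each block of VMs at its tier price."""
    cost, remaining = 0.0, vms
    for size, price in TIERS:
        n = remaining if size is None else min(remaining, size)
        cost += n * price
        remaining -= n
        if remaining == 0:
            break
    return cost

for vms in (10, 1_000, 10_000):
    print(f"{vms:>6} VMs -> ${monthly_license_cost(vms):,.2f}/month")
# 10 VMs cost $100/month; 10,000 VMs cost $66,700/month -- the
# per-unit price fell 35%, but the monthly bill grew 667x.
```

The first ten VMs look trivially cheap, which is precisely the point: unless you model the full scaling curve up front, the ongoing cost surfaces only after you are architecturally committed.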
Maybe the in-arch costs are just extra features or capabilities. Or ongoing support and services. Whatever the source, these types of costs contribute to the ongoing operating expenses. And because the primary purchasing criterion is CapEx (procurement costs), burying some of these costs a little later in the product lifecycle and making them a bit smaller in magnitude (but larger in volume) will be attractive.
The punch line here is that we are on the cusp of a change in monetization strategies. You might think that pricing and costs will be transparent, but has the networking community given us a real reason to believe that to date? If you think so, consider this: why do buyers celebrate 50% discounts? It’s because pricing is ridiculously obfuscated in this industry. Until we all start expecting more, I just don’t know why this would change.
Along those lines, my colleague Bill Koss posted some facts about Plexxi costs. In the interest of transparency, it’s worth taking a look here.
[Today’s fun fact: The wettest spot in the world is located on the island of Kauai. Mt. Waialeale consistently records rainfall at the rate of nearly 500 inches per year. That’s enough to drown seven 6-foot-tall men standing on each other’s heads.]