Commodity Network Fabrics

What role does the concept of a “network fabric” play in the march towards commoditization of networking?  Well, let’s discuss!

The Whole Shebang

There can be no doubt that an organization’s relationship to networking is with the aggregate thing they call “the network.”  When there are issues, non-network folks say wonderfully vague things like “The network is dropping packets!” or “I can’t log in… must be the network.”  This intuition, to think about the network as a whole rather than as a collection of systems, is right:  Collectively, the network is supposed to produce desirable aggregate behavior.

This is an important clue as to how networking will evolve in the future.  SDN is a step in this direction.  Intelligent software will undoubtedly coordinate the actions of the underlying constituent systems, on behalf of an operator or an application, to achieve some policy goals.  This software need not exist solely in the form of a network controller.  Indeed, here at Plexxi, our switches can coordinate on their own to achieve aggregate behavior.  This is why you can stand up a Plexxi network and pass traffic without the need for a centralized controller.

A network fabric should have the goal of managing network workloads according to a higher-level policy.  However, many fabrics do not do this.  They may have some desirable fabric features, but for edge policies, operators must still log into individual devices to achieve their goals.  This, of course, is the fundamental problem of networking that SDN hopes to solve:  Let intelligent software perform these menial tasks, and let the organization, or the operator, express network-wide policy to the software.
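To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical device and policy names) of what "express network-wide policy to the software" means: the operator states one intent, and software expands it into per-device configuration, instead of someone logging into every box.

```python
# A minimal sketch, not a real product API: one network-wide policy is
# expanded into per-device intents by software. All names are hypothetical.

POLICY = {
    "workload": "video",         # the traffic class the operator cares about
    "treatment": "low-latency",  # the desired aggregate behavior
}

DEVICES = ["leaf1", "leaf2", "leaf3", "spine1", "spine2"]

def compile_policy(policy, devices):
    """Expand one high-level policy into a per-device configuration intent."""
    return {
        dev: f"classify {policy['workload']} -> queue {policy['treatment']}"
        for dev in devices
    }

if __name__ == "__main__":
    for device, intent in compile_policy(POLICY, DEVICES).items():
        print(f"{device}: {intent}")
```

The operator touches the policy once; the fan-out to individual devices is the software's problem.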

The Value of the Network

What is the value of the network?  Fundamentally, the network has one feature that matters: paths.  The job of the network, first and foremost, is to facilitate the movement of data between its edges.  The more paths a network has, the better.  We even see this in leaf-and-spine designs.
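Here is a back-of-the-envelope illustration of the point. In a two-tier leaf-and-spine fabric, every leaf attaches to every spine, so each spine you add contributes exactly one more leaf-to-spine-to-leaf path between any pair of leaves. The sketch below (toy Python, hypothetical names) just enumerates them:

```python
# A toy illustration, not a design tool: in a two-tier leaf-and-spine
# fabric, every leaf connects to every spine, so each spine contributes
# exactly one leaf -> spine -> leaf path between any pair of leaves.

def leaf_to_leaf_paths(spines, src="leaf1", dst="leaf2"):
    """Enumerate the two-hop paths between a pair of leaves."""
    return [(src, spine, dst) for spine in spines]

for n in (2, 4, 8):
    spines = [f"spine{i}" for i in range(1, n + 1)]
    paths = leaf_to_leaf_paths(spines)
    print(f"{n} spines -> {len(paths)} usable paths between any leaf pair")
```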

Administrative, control, voice, video, bulk, and garbage are just some of the workload types requiring different treatment in the network.  When you have fewer paths in the network, it becomes increasingly difficult to manage the workload conflict that arises when multiple types of traffic converge on an egress interface.  Quality of Service (QoS) has always represented a sort of white flag of surrender before conflict even occurs, and let’s be honest, it’s been an absolute nightmare to manage on the ground.  Aggregate flow characteristics change throughout the day (burstiness, packet size distribution, differing workload types), making static policies difficult to implement.  The best you can hope for is a policy that represents the lowest-common-denominator compromise.
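A rough sketch of that compromise, with numbers invented purely for illustration: a static split of egress bandwidth serves one traffic mix tolerably and squeezes some class in every other mix as demand shifts through the day.

```python
# A back-of-the-envelope sketch of why a static QoS split is a
# lowest-common-denominator compromise. All numbers are made up.

STATIC_SPLIT = {"voice": 0.2, "video": 0.3, "bulk": 0.5}  # fixed queue weights

DEMAND_BY_HOUR = {
    "09:00": {"voice": 0.4, "video": 0.4, "bulk": 0.2},    # morning meetings
    "13:00": {"voice": 0.1, "video": 0.2, "bulk": 0.7},    # backups, transfers
    "21:00": {"voice": 0.05, "video": 0.6, "bulk": 0.35},  # evening streaming
}

for hour, demand in DEMAND_BY_HOUR.items():
    # a class is squeezed whenever its demand exceeds its fixed share
    squeezed = [cls for cls, share in demand.items() if share > STATIC_SPLIT[cls]]
    print(f"{hour}: classes squeezed by the static split: {squeezed or 'none'}")
```

No single set of weights satisfies all three mixes; you pick the compromise and live with it.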

Even when you have multiple paths in the network, it’s virtually impossible to manage and move differing workload types.  How frustrating it has been that spanning tree drastically cut the usable bandwidth in the data center.  And even if we could use all of that bandwidth, how would we move only some workloads onto it?  Imagine doing this when you have multiple types of workloads just within HTTP!  File transfers, web traffic, API calls for automation systems… all in the same encapsulation.
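A toy example of that last point: once everything rides HTTPS, a port-based classifier sees one workload where there are really three. The flows and thresholds below are made up, but they show why application-level hints are needed to tell these apart:

```python
# Hypothetical flows: three very different workloads, all on port 443.
FLOWS = [
    {"port": 443, "content_type": "application/json", "bytes": 2_000},              # API call
    {"port": 443, "content_type": "text/html", "bytes": 50_000},                    # web page
    {"port": 443, "content_type": "application/octet-stream", "bytes": 2 * 10**9},  # file transfer
]

def classify_by_port(flow):
    # the legacy view: every flow on 443 looks the same
    return "https" if flow["port"] == 443 else "other"

def classify_by_content(flow):
    # a (hypothetical) application-aware view
    if flow["content_type"] == "application/json":
        return "api"
    if flow["bytes"] > 100_000_000:
        return "bulk-transfer"
    return "web"

for flow in FLOWS:
    print(f"{classify_by_port(flow)} -> {classify_by_content(flow)}")
```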

QoS is obviously the product of legacy network thinking:  Fewer paths and indiscriminate workload placement, resulting from the erroneous belief that universal reachability for packets is the primary goal of the network.  Build just enough paths to be redundant, put the routes in… and hope for the best.  Are we done being amazed that we can make packets go yet?  Can’t we do better than making a sequel to “The Hangover” because we can ping?  Aren’t we tired of failing to deal with the complexity of networking as a whole?  Then let’s stop using legacy stuff to accomplish our goals.

Network Commodifabricization

The value of the network goes up as more paths are added.  However, the old way of workload placement in the network, as well as the old way of handling workload conflict, just isn’t going to be manageable by hand.  Adding value to the network should be as simple as adding paths, and adding paths should actually be simple both physically and logically.  A commodity network means lots of paths, which are the primary value of the network to begin with.  It also means intelligent software that manages the many types of workloads on the network by distributing them across those paths.  That same software will present an intuitive policy interface to humans who just want “the network” to work.
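What might that placement logic look like? Here is a deliberately tiny sketch, with hypothetical path names and a naive rule, of the kind of class-aware distribution such software could perform:

```python
# A minimal sketch of class-aware workload placement across many paths.
# Path names and the placement rule are hypothetical; real software would
# also react to load and topology changes.

PATHS = [f"path-{i}" for i in range(8)]
BULK_PATHS = PATHS[:2]         # isolate elephant flows here
INTERACTIVE_PATHS = PATHS[2:]  # keep these lightly loaded

def place(flow_id: int, workload: str) -> str:
    pool = BULK_PATHS if workload == "bulk" else INTERACTIVE_PATHS
    return pool[flow_id % len(pool)]  # simple deterministic spread

for fid, wl in [(1, "bulk"), (2, "voice"), (3, "video"), (4, "bulk")]:
    print(f"flow {fid} ({wl}) -> {place(fid, wl)}")
```

The point is not the rule itself; it is that no human assigns workloads to paths by hand.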

Where does that leave the current trend of some companies seeking to commoditize legacy networking?  Well, like cloud, it would seem that many folks are banking on the idea that IT is done evolving.  Including networking!  Obviously, this is not the case.  What we are experiencing right now is the “big crunch” of IT.  If the mainframe represented some primordial IT state that exploded into the constituent pieces of the IT universe, like the big bang of tech, then the data center of the future represents the big crunch of those pieces.  Lots of intermediate layers will disappear, from the guest OS of a VM to maybe even the IP protocol!  Will Linux-based switches and routers with a subset of legacy network features really have a role here?  Perhaps in the short term, but not for long.

Intuitive network fabrics are the true start down the path of commoditization, making the real value of the network directly and easily manageable.

[Fun fact:  One time, I drove a bulldozer into a pond.  People get really mad when you do that.  Also, it makes the bulldozer inoperable.  Hmmm... if only there had been a "path" around the pond.]

