Beyond SDN: Creating Focused and Usable Solutions

The real customers and end users want practical and usable solutions, not definitions

Software Defined Networking (SDN) has become a prominent paradigm and the bandwagon of the networking industry today. SDN is primarily considered a methodology or approach for solving some of the well-known problems in the enterprise and service provider networking space, as well as a tool for creating exciting new features. The term "Software Defined Networking" gives vendors a green-field opportunity to define, promote and customize it in their own way. End users don't care so much about the definition; they are more concerned with its contribution to optimizing and solving real problems.

The protocol generally considered the precursor to SDN is OpenFlow. The Open Networking Foundation (ONF) defines SDN as a new approach to networking in which network control is decoupled from the data-forwarding function and is directly programmable. OpenFlow allows traditional layer 2 switches to examine headers in the packet/frame and make forwarding decisions. OpenFlow-enabled switches examine packet headers up through the transport layer and can match more than a dozen fields that span layer 2 to layer 4.
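To make that concrete, here is a purely illustrative sketch, in plain Python rather than any particular controller library, of the kind of flow entry such a switch holds. The field names are descriptive placeholders, not taken from the OpenFlow specification.

```python
# Illustrative only: a schematic flow entry showing the kind of layer 2 - layer 4
# fields an OpenFlow switch can match on. Field names are descriptive, not taken
# from any specific OpenFlow library or version.

flow_entry = {
    "match": {
        "in_port": 3,                    # ingress switch port
        "eth_src": "00:1b:21:3a:4f:10",  # layer 2: source MAC
        "eth_dst": "00:1b:21:3a:4f:22",  # layer 2: destination MAC
        "vlan_id": 100,
        "eth_type": 0x0800,              # IPv4
        "ip_src": "10.0.1.5",            # layer 3: source address
        "ip_dst": "10.0.2.15",           # layer 3: destination address
        "ip_proto": 6,                   # TCP
        "tcp_src": None,                 # wildcarded
        "tcp_dst": 443,                  # layer 4: destination port
    },
    "priority": 200,
    "actions": [("output", 7)],          # forward matching traffic out port 7
}

def matches(packet_fields: dict, match: dict) -> bool:
    """A packet matches when every non-wildcard field agrees (prefix matching omitted)."""
    return all(v is None or packet_fields.get(k) == v for k, v in match.items())
```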

How Exactly Is It Going to Be Useful?
There are some interesting use cases, defined by various vendors, that use IP and TCP header lookups to make forwarding decisions. Even though these use cases are not fully established, they can enable traffic redirection and traffic engineering using switches alone. One practical use of this kind of traffic engineering is isolating malicious traffic at the switch level for further analysis and containment. Another is the ability to divert traffic across multiple ISP connections based on the application and the specific computer (user), as sketched below. Many vendors are focusing on establishing these use cases by building controllers and switches: controllers push the rules onto the switches, and switches perform the packet processing and rule lookup and make the forwarding decisions. Many vendors consider OpenFlow controllers and switches to be the two main pieces of SDN. Other software, such as orchestration/automation software, is also being developed and promoted under the SDN umbrella.
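A minimal sketch of the controller side of that ISP-diversion use case follows. The REST endpoint, URL, and payload format are hypothetical; real controllers each expose their own northbound interface.

```python
# A minimal sketch, under assumed names: push a flow rule to a hypothetical
# controller REST endpoint so that one user's traffic for one application is
# steered out a chosen ISP uplink port.
import json
import urllib.request

CONTROLLER = "http://controller.example.net:8181/flows"   # assumed endpoint

def divert_app_traffic(user_ip: str, tcp_port: int, uplink_port: int) -> None:
    """Install a flow that sends one user's traffic for one application out a chosen uplink."""
    rule = {
        "match": {"eth_type": 0x0800, "ip_src": user_ip, "ip_proto": 6, "tcp_dst": tcp_port},
        "actions": [{"type": "OUTPUT", "port": uplink_port}],
        "priority": 300,
    }
    req = urllib.request.Request(
        CONTROLLER,
        data=json.dumps(rule).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # the controller then pushes the rule down to the switch

# Example: send one workstation's HTTPS traffic (TCP 443) out the secondary ISP
# link attached to switch port 12.
# divert_app_traffic("10.0.1.5", 443, uplink_port=12)
```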

Why Do We Need Orchestration/Automation Software?
Orchestration/automation software is primarily considered a component that sits on top of the controller and uses the controller's northbound APIs to execute sets of tasks, in sequence, based on events and monitoring. These tasks are usually performed by scripts that run in either a time-driven or situation-driven way, put in place manually by system administrators; examples include a weekend configuration script or a flash-crowd-specific network and server configuration script. This provides the ability to perform scenario-specific, time-specific, or business-policy-specific infrastructure setup and configuration. Orchestration software brings these scripts under the single umbrella of SDN and hides the error-prone programming work from system administrators behind a user-friendly, easy-to-configure, easy-to-monitor graphical user interface.
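The following toy sketch shows the shape of that orchestration loop: one task fires on a schedule (the weekend configuration) and one fires on a monitored condition (the flash crowd). The functions apply_profile() and current_requests_per_second() are hypothetical stand-ins for calls to a controller's northbound API and a monitoring system.

```python
# Toy sketch of an orchestration loop above the controller: run configuration
# tasks either time-driven ("weekend config") or situation-driven ("flash-crowd
# config"). The two helper functions below are hypothetical placeholders.
import datetime
import time

def apply_profile(name: str) -> None:
    # In reality this would call the controller's northbound API.
    print(f"pushing '{name}' network/server profile via the controller's northbound API")

def current_requests_per_second() -> int:
    # In reality this would come from a monitoring system.
    return 120

FLASH_CROWD_THRESHOLD = 5000

def orchestrate() -> None:
    while True:
        now = datetime.datetime.now()
        if now.weekday() >= 5:                                       # time-driven task
            apply_profile("weekend-configuration")
        if current_requests_per_second() > FLASH_CROWD_THRESHOLD:    # situation-driven task
            apply_profile("flash-crowd-configuration")
        time.sleep(60)
```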

One of the most important uses of orchestration/automation software is in cloud computing. The cloud is in essence a data center that runs services on physical servers directly, or on virtual machines that share a single physical server, and provides a user-friendly interface to manage the services, the virtual machines (VMs), the servers and the whole infrastructure. The main idea behind consolidating VMs on a single physical server is to maximize the utilization of the hardware investment and minimize operational expenses (OPEX), such as energy costs, by running the fewest possible physical servers for a given load. As the load increases, more VMs need to be brought up to balance the load and keep service optimal. Hardware virtualization software (hypervisors) makes preserving a running operating system as a snapshot or image easy and automatic. When such a snapshot is brought up as a new virtual machine, the underlying network also needs to be reconfigured automatically. This is where OpenFlow comes into play to enable network virtualization.

Here's how it works. When the VM boots up and sends its first Ethernet frame outbound, the switch captures it, sends the layer 2 through layer 4 header information to the controller, and asks where to forward the packets. The controller creates a dynamic, VLAN-like port grouping based on predefined policies that use MAC or IP addresses. Without any administrative intervention, the newly created VM is already part of the existing network and of the pre-configured load balancer server pool. This is a practical and exciting use of SDN. The automation is generally done through the hypervisor or through management software that runs above the hypervisor. While this automation seems magical, there are some important points to consider.
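As a schematic of that "first frame" flow, the sketch below shows a controller-side handler that places a new VM into a pre-configured group based on its MAC prefix. The policy table, helper functions, and MAC prefix are illustrative assumptions; a real controller would receive an OpenFlow packet-in message and answer with flow-mod messages.

```python
# Schematic only: when a switch punts an unknown first frame to the controller,
# place the new VM into a VLAN-like port group and a load balancer pool based on
# predefined policy. All names and values here are illustrative.

POLICIES = {
    # Assumed MAC prefix (OUI) of the web-tier VM template -> logical group
    "52:54:00": {"group": "web-pool", "lb_vip": "10.0.0.100"},
}

def on_packet_in(switch_id: str, in_port: int, eth_src: str, ip_src: str) -> None:
    """Called when a switch sends an unknown frame's headers to the controller."""
    policy = POLICIES.get(eth_src[:8].lower())
    if policy is None:
        return                                              # unknown VM: leave it alone
    add_to_port_group(switch_id, in_port, policy["group"])  # the "VLAN-like" grouping
    add_to_lb_pool(policy["lb_vip"], ip_src)                 # join the pre-configured pool

def add_to_port_group(switch_id: str, port: int, group: str) -> None:
    print(f"{switch_id}:{port} -> group {group}")

def add_to_lb_pool(vip: str, member_ip: str) -> None:
    print(f"load balancer {vip} += {member_ip}")
```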

What's the Catch?
Like expert magicians, SDN vendors misdirect users with the features and opportunities of control and data plane separation while not revealing some important facts. With so much promotional and inaccurate information about SDN in the market, we should learn to look behind the curtain to fully understand the price paid for the new features. Look closely and the price of enabling OpenFlow is obvious: performance. Traditional switches are built to look up fixed-length layer 2 headers; OpenFlow switches, by contrast, look up variable-length headers such as IP and TCP. While the extra effort of variable-length lookup and parsing is intuitive, there is also good reading material that details the performance penalty of handling variable-length headers compared to fixed-length ones.
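The difference is easy to see in code. In the untagged-Ethernet sketch below, the layer 2 fields sit at fixed offsets and can be sliced directly, but the TCP ports can only be located after first parsing the IPv4 IHL field to learn how long the IP header is (and TCP options vary in length as well).

```python
# Fixed-length vs. variable-length header lookup, illustrated on a raw frame
# (untagged Ethernet + IPv4 + TCP assumed, purely for demonstration).
import struct

def l2_fields(frame: bytes):
    # Fixed-length Ethernet header: dst(6) + src(6) + ethertype(2) - one slice each.
    return frame[0:6], frame[6:12], struct.unpack("!H", frame[12:14])[0]

def l4_ports(frame: bytes):
    # Variable-length path: the IPv4 header length depends on the IHL field,
    # so the TCP port offset is only known after parsing the IP header.
    ihl_words = frame[14] & 0x0F            # low nibble of the first IP byte
    ip_header_len = ihl_words * 4           # 20-60 bytes
    tcp_offset = 14 + ip_header_len
    return struct.unpack("!HH", frame[tcp_offset:tcp_offset + 4])  # src port, dst port
```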

Although OpenFlow switches open up an exciting new approach and bring huge momentum to the networking industry, the illusion that they will replace all layer 2 switches does not hold up well when you actually put them to the test and compare the results. OpenFlow should complement the existing infrastructure, not attempt to replace traditional switches, because OpenFlow switches solve a different set of problems. The price we pay to automatically detect a newly created VM or application session is a substantial hit to packet/frame forwarding performance. While OpenFlow is still useful as a traffic engineering and flow management tool, it should not be considered a replacement for a layer 2 switch. This is not just a matter of the OpenFlow protocol's maturity at this point; it follows from the design itself.

Hidden Gem
One important aspect of SDN that gets little attention in the specifics is the northbound API. While promotions for "application-oriented" and "application-defined" software and networking products have been swamping the industry, most of what they offer is engineering application traffic based on TCP port numbers. Correctly implemented northbound APIs, however, can bridge the gap between the application and networking worlds. Industry brilliance should be applied to the real, age-old problem: TCP. Applications use TCP, and application developers treat the network as one big pipe with unlimited bandwidth and speed-of-light connectivity. Applications have limited visibility into the underlying networking or server infrastructure. In the SDN world, controller vendors are pondering and developing northbound APIs, but most controller developers treat these APIs merely as a CLI replacement, or view them as a southbound interface to some other network automation or management product.

Let the Application Be the Controller
Think of a gravity hydro dam. When counties around the state request more water for irrigation, what happens if the dam's controller decides to honor every request in full? Should it open the water gate completely to deliver the entire requested quantity without considering how much the distribution pipes can handle? Although most people would never think of doing this, it is exactly what happens in the software world today.

When an application receives incoming requests, it assumes the network has unlimited capacity and light-speed connectivity to the requester. The application starts creating packets, spending CPU, memory and disk resources. Later, a network optimization or QoS device finds that the links are oversubscribed and drops packets to tell the application to slow down. Not only are all of the resources already consumed wasted; the discarded traffic has also created more congestion on the network. Instead of using ancient smoke-signal approaches like packet drops to tell applications about network congestion, SDN vendors should build more robust northbound APIs that give applications real visibility into the network. That would be a paradigm shift in the way applications are developed, and it would address the problem at its source. The promise rests on the simplicity and standardization of the northbound APIs.
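A sketch of that shift, under assumed names: before spending CPU, memory and disk building a response, the application asks the network, through a hypothetical northbound endpoint, whether the path to the client can actually carry the transfer.

```python
# Illustrative sketch of an application consulting a hypothetical northbound API
# for path headroom before committing resources. Endpoint and response schema
# are assumptions, not any real controller's interface.
import json
import urllib.request

NB_API = "http://controller.example.net:8181/paths"   # assumed northbound endpoint

def available_bandwidth_mbps(src_ip: str, dst_ip: str) -> float:
    """Ask the controller how much headroom exists on the path between two hosts."""
    url = f"{NB_API}?src={src_ip}&dst={dst_ip}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["available_mbps"]       # assumed response field

def handle_download(client_ip: str, server_ip: str, size_mb: int) -> str:
    headroom = available_bandwidth_mbps(server_ip, client_ip)
    if headroom < 10:          # path already saturated: back off at the source,
        return "defer"         # no wasted packets, no extra congestion
    return "send"              # commit resources knowing the path can carry it
```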

Although northbound APIs are not yet well defined and are left for vendors to implement with their own sets of rules, the power to make SDN succeed lies in the northbound APIs. They are the real disruption in the industry, not the separation of the data and control planes.

Northbound APIs for Policy Plane
Just as the controller exposes a northbound API above the underlying infrastructure, the need for northbound APIs at the policy plane is also growing. Policies change all the time to stay aligned with business goals, and they drive the infrastructure both directly and indirectly. When the policy plane also exposes APIs through which applications can consume priorities and service level agreements (SLAs), the same decoupling that exists today between the forwarding plane and the control plane on the networking side can happen at the policy level as well.

Northbound APIs should allow the application to query the system, network and server infrastructure so the network can be optimized globally. The application should also be able to interact with the policy layer to get the applicable priorities and SLAs before committing any resources, as sketched below. This gets more out of the end user's investment in both applications and networking infrastructure: instead of shifting problems back and forth, the two truly begin to collaborate and complement one another.
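The policy-plane half of that interaction might look like the sketch below, where an application asks a hypothetical policy endpoint which SLA tier applies to a tenant and sizes its request accordingly. The endpoint, fields, and tenant names are assumptions for illustration only.

```python
# Illustrative sketch: consult a hypothetical policy-plane API for the tenant's
# priority and SLA before committing bandwidth or compute to a transfer.
import json
import urllib.request

POLICY_API = "http://policy.example.net:8443/sla"      # assumed policy-plane endpoint

def sla_for(tenant: str, application: str) -> dict:
    url = f"{POLICY_API}?tenant={tenant}&app={application}"
    with urllib.request.urlopen(url) as resp:
        # Assumed response shape, e.g. {"priority": "gold", "max_mbps": 500}
        return json.load(resp)

def plan_transfer(tenant: str, application: str, desired_mbps: int) -> int:
    sla = sla_for(tenant, application)
    return min(desired_mbps, sla["max_mbps"])   # never commit beyond what policy allows
```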

The real customers and end users want practical and usable solutions, not definitions. We should think beyond defining the jargon and start creating focused and usable solutions.

References

  1. http://www.cs.cmu.edu/~srini/15-744/F02/readings/McK97.html#3needswitch
  2. https://www.opennetworking.org/about/onf-overview

More Stories By Karthikeyan Subramaniam

Karthikeyan Subramaniam serves as the Chief Software Architect and the architect of the company's Software Defined Networking platform. He led the development of the company's SDN and cloud computing platform work for Verizon, alongside Hewlett Packard and Intel Corp.; this industry-leading SDN platform was unveiled at the Open Networking Summit, the world's largest SDN summit. He has created and developed the company's platforms in Software Defined Networking and interoperability. Previously he worked at Intel Technology India on Intel Server Systems and Intelligent Platform Management, and at Cisco's offshore development center in the Enterprise Management Business Unit (EMBU) on Cisco's voice systems, voice gateways and gatekeepers.
