
Beyond SDN: Creating Focused and Usable Solutions

Real customers and end users want practical, usable solutions, not definitions

Software Defined Networking (SDN) has become a prominent paradigm, and the bandwagon, of the networking industry today. SDN is primarily considered a methodology, or approach, for solving some of the widely known problems in the enterprise and service provider networking space; it's also a tool for creating exciting new features. The term "Software Defined Networking" gives vendors a green-field opportunity to define, promote and customize it in their own way. End users don't care much about the definition; they are more concerned with its contribution to optimizing and solving real problems.

The protocol considered the precursor to SDN is "OpenFlow." The Open Networking Foundation (ONF) defines SDN as a new approach to networking whereby network control is decoupled from the data-forwarding function and is directly programmable. OpenFlow allows traditional layer 2 switches to examine headers deeper in the packet/frame and make forwarding decisions on them. OpenFlow-enabled switches examine packet headers up through the transport layer and can match a dozen or more fields spanning layer 2 to layer 4.
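
To make the multi-field match concrete, here is a minimal sketch in plain Python, not tied to any particular switch or controller, of the kind of layer 2 through layer 4 flow entry OpenFlow expresses. The field names mirror the classic OpenFlow match tuple; the wildcard semantics are simplified for illustration.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FlowMatch:
        """One OpenFlow-style match spanning layer 2 to layer 4.
        A None value acts as a wildcard for that field."""
        in_port:  Optional[int] = None   # switch ingress port
        eth_src:  Optional[str] = None   # layer 2: source MAC
        eth_dst:  Optional[str] = None   # layer 2: destination MAC
        vlan_id:  Optional[int] = None
        eth_type: Optional[int] = None   # e.g., 0x0800 for IPv4
        ip_src:   Optional[str] = None   # layer 3 addresses
        ip_dst:   Optional[str] = None
        ip_proto: Optional[int] = None   # e.g., 6 for TCP
        tp_src:   Optional[int] = None   # layer 4: TCP/UDP source port
        tp_dst:   Optional[int] = None   # layer 4: TCP/UDP destination port

    # Match all HTTP traffic from one host, regardless of MAC or VLAN:
    http_from_host = FlowMatch(eth_type=0x0800, ip_src="10.0.0.5",
                               ip_proto=6, tp_dst=80)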

How Exactly Is It Going to Be Useful?
There are some interesting use cases, defined by various vendors, that use IP and TCP header lookups to make forwarding decisions. Even though these use cases are not yet fully established, they can support traffic redirection and traffic engineering using switches alone. One practical use of such traffic engineering is isolating malicious traffic at the switch level for further analysis and containment. Another is diverting traffic across multiple ISP connections based on the application and the specific computers (users) involved. Many vendors are focusing on establishing these use cases by creating controllers and switches: controllers push the rules onto the switches, while switches perform the packet processing and rule lookups and make the forwarding decisions. Many vendors consider OpenFlow controllers and switches to be the two main pieces of SDN. Other software, such as orchestration/automation software, is also being developed and promoted under the SDN umbrella.
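
As one hedged illustration of a controller pushing such a rule, the sketch below uses the open-source Ryu controller's OpenFlow 1.3 API to divert a suspect host's IPv4 traffic to an analysis port instead of forwarding it normally. The datapath handle, the priority value and the notion of a dedicated analysis port are assumptions made for the example.

    def quarantine_host(datapath, suspect_ip, analysis_port):
        """Push a high-priority flow that redirects one host's IPv4
        traffic to an analysis port (Ryu, OpenFlow 1.3)."""
        ofp = datapath.ofproto
        parser = datapath.ofproto_parser
        match = parser.OFPMatch(eth_type=0x0800, ipv4_src=suspect_ip)
        actions = [parser.OFPActionOutput(analysis_port)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=100,
                                match=match, instructions=inst)
        datapath.send_msg(mod)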

Why Do We Need Orchestration/Automation Software?
Orchestration/automation software is generally a component that sits on top of the controller and uses the controller's northbound APIs to execute sets of tasks in sequence, based on events and monitoring. Usually these tasks are performed by scripts that run in either a time-bound or situation-bound way and are put in place manually by system administrators; examples include a weekend configuration script or a flash-crowd-specific network and server configuration script. Orchestration software provides the ability to perform scenario-specific, time-specific or business-policy-specific infrastructure setup and configuration. It brings these scripts under the single umbrella of SDN and hides the error-prone programming from system administrators behind a user-friendly, easy-to-configure, easy-to-monitor graphical user interface.
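
A minimal sketch of such a time-bound task follows; the controller URL, the REST endpoint and the payload format are all hypothetical, since each controller defines its own northbound API.

    import json
    import urllib.request

    CONTROLLER = "http://controller.example.net:8080"  # hypothetical address

    def apply_policy(policy_name, rules):
        """Push one named set of flow rules through the controller's
        (assumed) northbound REST API."""
        body = json.dumps({"policy": policy_name, "rules": rules}).encode()
        req = urllib.request.Request(
            CONTROLLER + "/northbound/policies",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # Time-bound task: relax bandwidth limits for the weekend batch window.
    apply_policy("weekend-batch",
                 [{"match": {"tp_dst": 873},           # rsync backup traffic
                   "action": {"set_queue": "bulk"}}])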

One of the most important uses of orchestration/automation software is in cloud computing. The cloud is, in essence, a data center that runs services on physical servers directly or on virtual machines (VMs) that share a single physical server, and that provides a user-friendly interface to manage the services, the VMs, the servers and the whole infrastructure. The main idea behind consolidating VMs on a single physical server is to maximize utilization of the hardware investment and minimize operational expenses (OPEX), such as energy costs, by running the fewest possible physical servers for a given load. As load increases, more VMs must be brought online to balance the load and provide optimal service. Hardware virtualization software (hypervisors) makes preserving a running operating system as a snapshot or image easy and automatic. When a new VM is instantiated from such a snapshot, the underlying networking must also be reconfigured automatically. This is where OpenFlow comes into play to enable network virtualization.

Here's how it works: when the VM boots and sends its first Ethernet frame outbound, the switch captures it, sends the layer 2 through layer 4 header information to the controller, and asks where to forward the packets. The controller creates a dynamic, VLAN-like port grouping based on predefined policies keyed on MAC or IP addresses. Without any administrative intervention, the newly created VM is already part of the existing network and of the pre-configured load-balancer server pool. This practical and exciting approach makes good use of SDN. The automation is generally done through the hypervisor or through management software that runs above it. While this automation seems magical, there are some important points to consider.
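
A hedged sketch of the packet-in half of this workflow, written against the open-source Ryu controller's OpenFlow 1.3 API, is shown below. The policy table mapping MAC prefixes to groups is an invented stand-in for whatever policy store a real deployment would use, and plain NORMAL forwarding stands in for real load-balancer pool membership.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3
    from ryu.lib.packet import packet, ethernet

    # Invented policy store: MAC prefix -> logical group (stand-in only).
    POLICY = {"00:16:3e": "web-pool", "52:54:00": "db-pool"}

    class VmOnboarder(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def on_packet_in(self, ev):
            msg = ev.msg
            dp = msg.datapath
            parser = dp.ofproto_parser
            ofp = dp.ofproto

            eth = packet.Packet(msg.data).get_protocols(ethernet.ethernet)[0]
            group = POLICY.get(eth.src[:8], "default")
            self.logger.info("new VM %s joins group %s", eth.src, group)

            # Install a flow so this VM's traffic stays in the data plane.
            match = parser.OFPMatch(eth_src=eth.src)
            actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                          match=match, instructions=inst))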

What's the Catch?
Like expert magicians, SDN vendors direct users' attention to the features and opportunities of control- and data-plane separation while keeping some important facts off stage. With so much promotional and inaccurate information about SDN in the market, we should learn to look behind the curtain to understand the price paid for the new features. When we look closely, the price of enabling OpenFlow is obvious: performance. Traditional switches only need to look up fixed-length layer 2 headers. OpenFlow switches, by contrast, must look up variable-length headers such as IP and TCP. While the extra effort of length-delimited lookup and parsing is evident on its face, there are also good readings that detail the performance penalties of handling variable-length headers compared to fixed-length ones.
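
A small illustration of where the extra cost comes from: in an Ethernet frame the MAC addresses sit at fixed offsets, but locating the TCP ports requires first parsing the IPv4 header-length field, so each lookup depends on bytes parsed earlier in the packet. The sketch below (plain Python, software-only; hardware pipelines differ, but the dependency chain is the same) makes the contrast visible.

    import struct

    def l2_key(frame: bytes):
        """Fixed-length lookup: dst and src MAC are always bytes 0-11."""
        return frame[0:6], frame[6:12]

    def l4_key(frame: bytes):
        """Variable-length lookup: the TCP header's position depends on
        the IPv4 header length (IHL), which must be parsed first."""
        eth_type = struct.unpack_from("!H", frame, 12)[0]
        if eth_type != 0x0800:          # not IPv4 (VLAN tags ignored here)
            return None
        ihl = (frame[14] & 0x0F) * 4    # IPv4 header length in bytes
        if frame[14 + 9] != 6:          # IP protocol field: 6 means TCP
            return None
        tcp_off = 14 + ihl              # offset known only after parsing IHL
        return struct.unpack_from("!HH", frame, tcp_off)  # src, dst ports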

Although OpenFlow switches open up an exciting new approach and bring huge momentum to the networking industry, the illusion that they will replace all layer 2 switches will not hold up once you actually put them to the test and compare the results. OpenFlow should complement the existing infrastructure rather than attempt to replace traditional switches, because OpenFlow switches solve a different set of problems. The price we pay to automatically detect a newly created VM or a newly created application session is a substantial hit to packet/frame forwarding performance. While OpenFlow is useful as a traffic engineering and flow management tool, it should not be considered a replacement for a layer 2 switch. At this point that is not just a matter of OpenFlow protocol maturity; it follows from the design itself.

Hidden Gem
One important aspect of SDN that gets little specific attention is the northbound API. While "application-oriented" and "application-defined" product promotions have been swamping the industry, most of what they describe is merely engineering application traffic based on TCP port numbers. Correctly implemented northbound APIs, however, can bridge the gap between the application and networking worlds. Industry brilliance should be applied to the real age-old problem: TCP. Applications use TCP, yet application developers treat the network as one big pipe with unlimited bandwidth and speed-of-light connectivity, and applications have limited visibility into the underlying networking or server infrastructure. In the SDN world, controller vendors are pondering and developing northbound APIs, but most treat these APIs merely as a CLI replacement, or view them as a southbound interface to yet another network automation or management product.

Let the Application Be the Controller
Think of a gravity hydro-dam. When counties around the state request more water for irrigation, what happens if the dam's controller decides to honor every request in full? Should it open the water gate completely to deliver the entire requested quantity without considering how much the distribution pipes can handle? Hardly anyone would think of doing this, yet it is exactly what happens in the software world today.

When an application receives incoming requests, it assumes the network has unlimited capacity and light-speed connectivity to the requester, and it starts creating packets, spending CPU, memory and disk resources. Later, a network optimization or QoS device discovers that the links are oversubscribed and drops packets to tell the application to slow down. All of the resources already consumed are not merely wasted; the excess traffic creates further congestion on the network. Instead of using ancient smoke-signal approaches like packet drops to inform applications of network congestion, SDN vendors should build more robust northbound APIs that give applications real visibility into the network. That would be a paradigm shift in how applications are developed, and it would address the problem at its source. The promise rests on the simplicity and standardization of the northbound APIs.
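
As a sketch of what such an API could enable, the admission check below asks the network for spare capacity before the application commits resources. Everything here, the endpoint, the payload and the headroom metric, is hypothetical; no standard northbound API of this kind exists yet, which is precisely the point.

    import json
    import urllib.request

    CONTROLLER = "http://controller.example.net:8080"   # hypothetical

    def path_headroom(src, dst):
        """Ask the (assumed) northbound API how much spare capacity the
        network currently has between two endpoints, in Mbit/s."""
        url = "%s/northbound/paths/%s/%s/headroom" % (CONTROLLER, src, dst)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["headroom_mbps"]

    def admit_transfer(src, dst, required_mbps):
        """Admission control at the source: only start building packets
        if the network says the path can carry them."""
        if path_headroom(src, dst) < required_mbps:
            return False    # defer or degrade instead of feeding congestion
        return True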

Although northbound APIs are not yet well defined and are left for vendors to implement with their own sets of rules, the power to make SDN succeed lies in them. The northbound API is the real disruption in this industry, not the separation of the data and control planes.

Northbound APIs for Policy Plane
Just as the controller's northbound API abstracts the underlying infrastructure, the need for northbound APIs on the policy plane is growing. Policies change all the time to align with business goals, and they drive the infrastructure both directly and indirectly. When the policy plane also exposes APIs through which applications can consume priorities and service level agreements (SLAs), the same decoupling that exists today between the forwarding plane and the control plane on the networking side is repeated a layer above.

Northbound APIs should allow the application to query the system, network and server infrastructure so that the network can be optimized globally. The application should also be able to interact with the policy layer to obtain priorities and SLAs before committing any resources. This will maximize the value of the end user's investment in applications and networking infrastructure; instead of shifting problems onto each other, the two sides can truly begin to collaborate and complement one another.
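
A hedged sketch of that interaction follows: the policy-plane service, its endpoint and the SLA record format are all hypothetical, invented to show the shape of the query rather than any existing product's API.

    import json
    import urllib.request

    POLICY_PLANE = "http://policy.example.net:8443"    # hypothetical service

    def sla_for(app_name):
        """Fetch the (assumed) priority and SLA record for an application
        from a hypothetical policy-plane northbound API."""
        url = POLICY_PLANE + "/northbound/slas/" + app_name
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)  # e.g., {"priority": "gold", "min_mbps": 200}

    def reserve_if_allowed(app_name, mbps):
        """Consult policy before committing resources: reserve bandwidth
        above the SLA floor only for gold-priority applications."""
        sla = sla_for(app_name)
        if mbps > sla["min_mbps"] and sla["priority"] != "gold":
            return False
        return True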

Real customers and end users want practical, usable solutions, not definitions. We should think beyond defining the jargon and start creating focused and usable solutions.

References

  1. http://www.cs.cmu.edu/~srini/15-744/F02/readings/McK97.html#3needswitch
  2. https://www.opennetworking.org/about/onf-overview

More Stories By Karthikeyan Subramaniam

Karthikeyan Subramaniam serves as the company's Chief Software Architect and the architect of its Software Defined Networking platform. He led the development of the company's SDN and cloud computing platform work for Verizon, alongside Hewlett-Packard and Intel, an industry-leading SDN platform unveiled at the Open Networking Summit, the world's largest SDN event. He has created and developed the company's platforms in software defined networking and interoperability. Earlier he worked at Intel Technology India on Intel Server Systems and Intelligent Platform Management, and at Cisco's offshore development center in the Enterprise Management Business Unit (EMBU) on Cisco's voice systems, voice gateways and gatekeepers.
