Beyond SDN: Creating Focused and Usable Solutions

Real customers and end users want practical, usable solutions, not definitions

Software Defined Networking (SDN) has become a prominent paradigm, and something of a bandwagon, in the networking industry today. SDN is primarily considered a methodology or approach to solving some of the well-known problems in the enterprise and service provider networking space; it is also a tool for creating exciting new features. The term "Software Defined Networking" gives vendors a green-field opportunity to define, promote and customize it in their own way. End users don't care much about the definition; they are more concerned with its contribution to optimizing and solving real problems.

The protocol generally considered the precursor to SDN is OpenFlow. The Open Networking Foundation (ONF) defines SDN as a new approach to networking in which network control is decoupled from the data-forwarding function and is directly programmable [2]. OpenFlow allows traditional layer 2 switches to examine additional headers in a packet/frame and make forwarding decisions on them: OpenFlow-capable switches inspect headers up through the transport layer and can match a dozen or more fields spanning layer 2 to layer 4.
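To make the matching concrete, here is a minimal sketch of a flow entry expressed as plain Python data. The field names loosely follow OpenFlow 1.0-style matching and are illustrative only; real entries are binary structures pushed by a controller, and the exact fields vary by OpenFlow version and switch.

```python
# A hypothetical flow entry, sketched as plain Python data. Field names
# loosely follow the OpenFlow 1.0 match tuple and are illustrative only.
flow_entry = {
    "match": {
        "in_port": 1,                   # switch ingress port
        "dl_src": "00:16:3e:aa:bb:cc",  # layer 2: source MAC
        "dl_dst": "00:16:3e:dd:ee:ff",  # layer 2: destination MAC
        "dl_type": 0x0800,              # EtherType: IPv4
        "nw_src": "10.0.0.5",           # layer 3: source IP
        "nw_dst": "10.0.1.7",           # layer 3: destination IP
        "nw_proto": 6,                  # layer 4: TCP
        "tp_dst": 80,                   # layer 4: destination port (HTTP)
    },
    "actions": [("output", 2)],         # forward matching packets to port 2
    "priority": 100,
}
```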

How Exactly Is It Going to Be Useful?
There are some interesting use cases, defined by various vendors, that use IP and TCP header lookups to make forwarding decisions. Even though these use cases are not yet fully established, they can enable traffic redirection and traffic engineering using switches alone. One practical use of such traffic engineering is isolating malicious traffic at the switch level for further analysis and containment; another is diverting traffic across multiple ISP connections based on the application and on specific computers (users). Many vendors are working to establish these use cases by building controllers and switches: controllers push rules down to the switches, and switches perform the packet processing and rule lookup and make the forwarding decisions. Many vendors consider OpenFlow controllers and switches to be the two main pieces of SDN. Other software, such as orchestration/automation software, is also being developed and promoted under the SDN umbrella.
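As a sketch of both traffic-engineering examples, the fragment below shows how a controller might push such rules. The `Controller` class and its `install_flow` method are hypothetical stand-ins for a real controller's API, not any particular product's interface.

```python
# Hypothetical controller-side logic: divert one application's traffic to a
# secondary ISP uplink, and quarantine a suspicious host for analysis.
ISP_B_PORT = 2    # secondary uplink
MIRROR_PORT = 3   # analysis/containment port

class Controller:
    def install_flow(self, switch, match, actions, priority):
        # Stand-in for a real flow-programming call (e.g., an OpenFlow FlowMod).
        print(f"{switch}: match={match} actions={actions} prio={priority}")

ctl = Controller()

# Send all HTTPS traffic from the engineering subnet out ISP B.
ctl.install_flow(
    switch="edge-sw1",
    match={"dl_type": 0x0800, "nw_src": "10.1.0.0/24", "nw_proto": 6, "tp_dst": 443},
    actions=[("output", ISP_B_PORT)],
    priority=200,
)

# Redirect everything from a suspected-malicious host to the analysis port.
ctl.install_flow(
    switch="edge-sw1",
    match={"dl_type": 0x0800, "nw_src": "10.1.0.66/32"},
    actions=[("output", MIRROR_PORT)],
    priority=500,  # higher priority so quarantine wins over the app-routing rule
)
```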

Why Do We Need Orchestration/Automation Software?
Orchestration/automation software is primarily a component that sits on top of the controller and uses the controller's northbound APIs to execute sets of tasks in sequence, driven by events and monitoring. Usually these tasks are performed by scripts that run in either a time-bound or situation-bound way, set in place manually by system administrators; examples include a weekend-specific configuration script, or a network and server configuration script for a flash crowd. Orchestration provides the ability to perform scenario-specific, time-specific or business-policy-specific infrastructure setup and configuration. It brings these scripts under the single umbrella of SDN and hides the error-prone programming from system administrators behind a user-friendly, easy-to-configure, easy-to-monitor graphical interface.
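A minimal sketch of that idea, assuming a hypothetical northbound wrapper: an orchestration loop that fires a time-bound task (a weekend profile) and a situation-bound task (a flash-crowd profile) based on monitoring. The `NorthboundAPI` class and both profile scripts are made up for illustration.

```python
# Toy orchestration loop: run configuration tasks on a schedule or in
# response to monitored events, through a controller's northbound API.
import time
from datetime import datetime

class NorthboundAPI:                          # assumed REST/RPC wrapper
    def push(self, change):
        print("pushing:", change)
    def link_utilization(self):
        return 0.95                           # stubbed monitoring value

def apply_weekend_config(api):
    api.push("weekend profile: fewer active uplinks, relaxed QoS")

def apply_flash_crowd_config(api):
    api.push("flash-crowd profile: enable extra load-balancer pool members")

api = NorthboundAPI()
while True:                                   # runs as a daemon in practice
    if datetime.now().weekday() >= 5:         # time-bound task (Sat/Sun)
        apply_weekend_config(api)
    if api.link_utilization() > 0.9:          # situation-bound task
        apply_flash_crowd_config(api)
    time.sleep(300)                           # re-evaluate every 5 minutes
```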

One of the most important uses of orchestration/automation software is in cloud computing. The cloud is in essence a data center that runs services either directly on physical servers or on virtual machines sharing a physical server, and that provides a user-friendly interface to manage the services, the virtual machines (VMs), the servers and the whole infrastructure. The main idea behind consolidating VMs on a single physical server is to maximize the utilization of the hardware investment and minimize operational expenses (OPEX) such as energy costs by running the fewest possible physical servers for a given load. As load increases, more VMs must be brought online to balance it and keep service at its optimum. Hardware virtualization software (hypervisors) makes preserving a running operating system as a snapshot or image easy and automatic, and when a VM is instantiated from such a snapshot, the underlying network must be reconfigured automatically as well. This is where OpenFlow comes into play to enable network virtualization.

Here's how it works. When the VM boots and sends its first Ethernet frame, the switch captures it, sends the layer 2 through layer 4 header information to the controller, and asks where to forward the packets. The controller creates a dynamic, "VLAN-like" port grouping based on predefined policies keyed on MAC or IP addresses. Without any administrative intervention, the newly created VM is already part of the existing network and of the pre-configured load balancer server pool. This is a practical and exciting use of SDN. The automation is generally done through the hypervisor, or through management software that runs above it. While this automation seems magical, there are some important points to consider.
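Here is a rough sketch of that controller-side logic. The `on_packet_in` callback, the policy table and the `install_flow` helper are all hypothetical; real controller frameworks (POX, Ryu and others) differ in their details.

```python
# Sketch of the "first frame" handling described above. on_packet_in() would
# be invoked by a controller framework when a switch reports an unknown frame.

# Predefined policy: which MAC prefixes belong to which "VLAN-like" group.
POLICY = {
    "00:16:3e": "web-pool",   # VMs from this hypervisor join the web pool
}

GROUP_MEMBERS = {"web-pool": set()}

def install_flow(switch, match, out_port):
    # Stand-in for an OpenFlow flow-programming call.
    print(f"{switch}: {match} -> port {out_port}")

def on_packet_in(switch, in_port, eth_src):
    group = POLICY.get(eth_src[:8])           # match on the MAC OUI prefix
    if group is None:
        return                                # no policy: default forwarding
    GROUP_MEMBERS[group].add((switch, in_port, eth_src))
    # Install flows so the new VM is reachable within its group (and, by
    # policy, within the load balancer pool) with no manual steps.
    for _sw, port, mac in GROUP_MEMBERS[group]:
        install_flow(switch, match={"dl_dst": mac}, out_port=port)

# Example: a freshly booted VM sends its first frame on port 12.
on_packet_in("tor-sw3", in_port=12, eth_src="00:16:3e:aa:bb:cc")
```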

What's the Catch?
Like expert magicians, SDN vendors direct the audience's attention to the features and opportunities of control and data plane separation while keeping some important facts out of view. With so much promotional and inaccurate information about SDN in the market, we should learn to look behind the curtain and understand the price paid for the new features. Looked at closely, the price of enabling OpenFlow is obvious: performance. Traditional switches are built to look up fixed-length layer 2 headers; OpenFlow switches must also parse variable-length headers such as IP and TCP. The extra work of parsing and matching length-delimited fields is real, and there are good readings that detail the performance penalties of handling variable-length headers compared to fixed-length ones [1].
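A small illustration of the difference, in Python rather than silicon: a layer 2 lookup reads fields at fixed offsets in one step, while a layer 4 lookup must first parse lengths (the IP header length, for instance) before it even knows where the TCP ports sit. Hardware pipelines and TCAMs change the constants, not the data dependency.

```python
# Fixed-offset layer 2 lookup vs. variable-length layer 3/4 parsing.
import struct

def l2_lookup(frame: bytes):
    # Layer 2: every field is at a fixed offset; constant-time slicing.
    return frame[0:6], frame[6:12]            # dst MAC, src MAC

def l4_lookup(frame: bytes):
    # Layer 3/4: each step depends on a length parsed from the previous one.
    ethertype = struct.unpack("!H", frame[12:14])[0]
    assert ethertype == 0x0800                # IPv4 only, in this sketch
    ihl = (frame[14] & 0x0F) * 4              # IP header length varies (options!)
    proto = frame[14 + 9]                     # protocol field inside IP header
    l4 = 14 + ihl                             # TCP offset depends on IHL
    sport, dport = struct.unpack("!HH", frame[l4:l4 + 4])
    return proto, sport, dport

# A minimal hand-built frame: Ethernet + 20-byte IPv4 + TCP ports 49316 -> 80.
frame = bytes.fromhex(
    "001122334455" "66778899aabb" "0800"                      # Ethernet
    "4500002800000000400600000a0000010a000002"                # IPv4, proto=TCP
    "c0a40050"                                                # TCP src/dst ports
)
print(l4_lookup(frame))   # (6, 49316, 80)
```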

Although OpenFlow switches open up an exciting new approach and have brought huge momentum to the networking industry, the illusion that they will replace all layer 2 switches does not hold up when you actually put them to the test and compare the results. OpenFlow should complement the existing infrastructure, not attempt to replace traditional switches, because OpenFlow switches solve a different set of problems. The price we pay for automatically detecting a newly created VM or application session is a substantial hit to packet/frame forwarding performance. OpenFlow remains useful as a traffic engineering and flow management tool, but it should not be considered a replacement for a layer 2 switch. This is not merely a matter of the protocol's maturity at this point; it follows from the design itself.

Hidden Gem
One important aspect of SDN that gets little specific attention is the northbound API. While 'application-oriented' and 'application-defined' software and networking product promotions have been swamping the industry, much of that amounts to engineering application traffic based on TCP port numbers. Correctly implemented northbound APIs, however, can bridge the gap between the application and networking worlds. Industry brilliance should be applied to the real, age-old problem: TCP. Applications use TCP, and application developers treat the network as one big pipe with unlimited bandwidth and speed-of-light connectivity; applications have almost no visibility into the underlying network or server infrastructure. In the SDN world, controller vendors are still pondering and developing northbound APIs, and most treat them merely as a CLI replacement, or as a southbound interface for yet another network automation or management product.

Let the Application Be the Controller
Think of a gravity hydro-dam. When counties around the state request more water for irrigation, what happens if the dam's controller decides to honor every request in full? Should it open the water gate all the way without considering how much the distribution pipes can handle? Most people would never consider doing this, yet it is exactly what happens in the software world today.

When an application receives incoming requests, it assumes the network has unlimited capacity and light-speed connectivity to the requester, and it starts creating packets, spending CPU, memory and disk resources. Only later does a network optimization or QoS device discover that the links are oversubscribed and drop packets to tell the application to slow down. The resources already consumed are not merely wasted; the retransmissions create still more congestion on the network. Instead of ancient smoke-signal techniques like packet drops to inform applications of network congestion, SDN vendors should build robust northbound APIs that give applications real visibility into the network. That would be a paradigm shift in the way applications are developed, and it would address the problem at its source. The promise rests on the simplicity and standardization of the northbound APIs.
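A minimal sketch of what that could look like, assuming a hypothetical `/paths` REST endpoint on the controller and its JSON shape; no standard northbound API defines these today.

```python
# Sketch: an application asks the controller about path capacity *before*
# spending CPU and memory generating traffic, instead of learning about
# congestion from packet drops. The endpoint and JSON fields are assumptions.
import json
from urllib.request import urlopen

CONTROLLER = "http://sdn-controller.example.com:8080"   # hypothetical

def available_bandwidth(src, dst):
    with urlopen(f"{CONTROLLER}/paths?src={src}&dst={dst}") as resp:
        return json.load(resp)["available_mbps"]

def serve_request(src, dst, needed_mbps):
    if available_bandwidth(src, dst) < needed_mbps:
        return "deferred"    # back off at the source, before wasting work
    return "accepted"        # safe to build and send the response
```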

Although northbound APIs are not yet well defined and are left to vendors to implement under their own sets of rules, the power to make SDN succeed lies in them. They are the real disruption in the industry, not the separation of the data and control planes.

Northbound APIs for Policy Plane
Just as the controller's northbound API sits above the underlying infrastructure, the need for northbound APIs on the policy plane is also growing. Policies change all the time to align with business goals, and they drive the infrastructure both directly and indirectly. When the policy plane likewise exposes APIs through which applications can consume priorities and service level agreements (SLAs), the same decoupling that exists today between the forwarding plane and the control plane can take hold at the policy level.

Northbound APIs should allow an application to query the system, network and server infrastructure so that the network can be optimized globally, and to consult the policy layer for priorities and SLAs before committing any resources. This maximizes the return on the end user's investment in both applications and networking infrastructure: instead of shifting problems onto each other, the two layers can truly begin to collaborate and complement one another.
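To sketch the idea, with a made-up policy-plane API and service tiers: the application caps its demand at the contracted rate from the policy plane, then commits resources only when the network can actually honor the SLA.

```python
# Hypothetical policy-plane consultation before committing resources.
class PolicyPlane:
    # Illustrative business policy: per-application service tiers.
    SLAS = {
        "gold":   {"contracted_mbps": 100, "max_latency_ms": 20},
        "bronze": {"contracted_mbps": 10,  "max_latency_ms": 200},
    }
    def sla_for(self, app):
        return self.SLAS["gold" if app == "trading" else "bronze"]

def admit(app, needed_mbps, path_mbps, path_latency_ms):
    sla = PolicyPlane().sla_for(app)
    rate = min(needed_mbps, sla["contracted_mbps"])  # cap at contracted rate
    # Commit only if the network can deliver what the policy promises.
    return path_mbps >= rate and path_latency_ms <= sla["max_latency_ms"]

print(admit("trading", 250, path_mbps=400, path_latency_ms=8))    # True
print(admit("backup",  50,  path_mbps=400, path_latency_ms=300))  # False
```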

Real customers and end users want practical, usable solutions, not definitions. We should think beyond defining the jargon and start creating focused and usable solutions.

References

  1. http://www.cs.cmu.edu/~srini/15-744/F02/readings/McK97.html#3needswitch
  2. https://www.opennetworking.org/about/onf-overview

About the Author

Karthikeyan Subramaniam serves as the company's Chief Software Architect and the architect of its Software Defined Networking platform. He led the development of the company's SDN and cloud computing platform work for Verizon, alongside Hewlett Packard and Intel Corp; the platform was unveiled at the Open Networking Summit, the world's largest SDN summit. He has created and developed the company's platforms in software defined networking and interoperability. Previously he worked at Intel Technology India on Intel Server Systems and Intelligent Platform Management, and at Cisco's offshore development center in the Enterprise Management Business Unit (EMBU) on Cisco's voice systems, voice gateways and gatekeepers.
