Beyond SDN: Creating Focused and Usable Solutions

The real customers and end users want practical and usable solutions, not definitions

Software Defined Networking (SDN) has become a prominent paradigm, and the bandwagon, of today's networking industry. SDN is primarily considered a methodology or approach for solving some of the well-known problems in the enterprise and service provider networking space, and a tool for creating exciting new features. The term "Software Defined Networking" offers vendors a green-field opportunity to define, promote and customize it in their own way. End users don't care much about the definition; they are more concerned with whether it optimizes their infrastructure and solves real problems.

The protocol considered to be the precursor to SDN is "OpenFlow." The Open Networking Foundation (ONF) defines SDN as a new approach to networking in which network control is decoupled from the data-forwarding function and is directly programmable. OpenFlow allows traditional layer 2 switches to examine headers deeper in the packet/frame and make forwarding decisions on them. OpenFlow-capable switches examine packet headers up through the transport layer and can match more than 13 fields spanning layer 2 to layer 4.
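
To make the idea concrete, here is a minimal, illustrative Python sketch of what such a multi-field match might look like. It mimics the spirit of an OpenFlow match (fields left unset act as wildcards) but is not the OpenFlow wire format or any vendor's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FlowMatch:
    """Illustrative subset of OpenFlow-style match fields (layers 2-4)."""
    in_port:  Optional[int] = None   # switch ingress port
    eth_src:  Optional[str] = None   # layer 2: source MAC
    eth_dst:  Optional[str] = None   # layer 2: destination MAC
    eth_type: Optional[int] = None   # e.g., 0x0800 for IPv4
    vlan_id:  Optional[int] = None   # layer 2: VLAN tag
    ip_src:   Optional[str] = None   # layer 3: source IP
    ip_dst:   Optional[str] = None   # layer 3: destination IP
    ip_proto: Optional[int] = None   # e.g., 6 for TCP
    tp_src:   Optional[int] = None   # layer 4: source port
    tp_dst:   Optional[int] = None   # layer 4: destination port

    def matches(self, pkt: dict) -> bool:
        """A field set to None is a wildcard; otherwise it must match exactly."""
        return all(v is None or pkt.get(k) == v
                   for k, v in vars(self).items())

# Match all HTTP traffic from one host, regardless of layer 2 details:
http_from_host = FlowMatch(eth_type=0x0800, ip_src="10.0.0.5",
                           ip_proto=6, tp_dst=80)
pkt = {"eth_type": 0x0800, "ip_src": "10.0.0.5", "ip_proto": 6,
       "tp_dst": 80, "eth_src": "aa:bb:cc:dd:ee:ff"}
print(http_from_host.matches(pkt))  # True
```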

How Exactly Is It Going to Be Useful?
There are some interesting use cases defined by various vendors that use IP and TCP header lookups to make forwarding decisions. Even though these use cases are not yet fully established, they make it possible to perform traffic redirection and traffic engineering using switches alone. One practical use of such traffic engineering is isolating malicious traffic at the switch level for further analysis and containment. Another is diverting traffic across multiple ISP connections based on applications and specific computers (users). Many vendors are working to establish these use cases by building controllers and switches: controllers push rules onto the switches, and switches perform the packet processing and rule lookup and make the forwarding decisions. Many vendors consider OpenFlow controllers and switches to be the two main pieces of SDN. Other software, such as orchestration/automation software, is also being developed and promoted under the SDN umbrella.
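
As a rough sketch of that division of labor, the following illustrative Python models a controller pushing prioritized rules and a switch selecting the highest-priority match; the rule format, actions and addresses are invented for illustration:

```python
# Illustrative sketch of controller-pushed rules and switch-side lookup.
# Rule and packet fields are simplified; this is not an OpenFlow implementation.

rules = []  # the switch's flow table, populated by the controller

def push_rule(priority, match, action):
    """Controller side: install a rule; keep the table sorted by priority."""
    rules.append({"priority": priority, "match": match, "action": action})
    rules.sort(key=lambda r: r["priority"], reverse=True)

def forward(packet):
    """Switch side: the first (highest-priority) matching rule wins."""
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "send_to_controller"   # table miss: ask the controller

# Traffic engineering examples from the text: steer traffic by app and user.
push_rule(200, {"ip_src": "10.0.0.7"},         "divert_to_analyzer")  # suspect host
push_rule(100, {"ip_proto": 6, "tp_dst": 443}, "out_isp_a")           # HTTPS via ISP A
push_rule(50,  {},                             "out_isp_b")           # default via ISP B

print(forward({"ip_src": "10.0.0.7", "ip_proto": 6, "tp_dst": 443}))
# -> divert_to_analyzer (the containment rule outranks the ISP steering rule)
```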

Why Do We Need Orchestration/Automation Software?
Orchestration/automation software is primarily considered a component that sits on top of the controller and uses the controller's northbound APIs to execute sets of tasks in sequence based on events and monitoring. Usually these tasks are performed by scripts that run in either a time-bound or situation-bound way, manually set in place by system administrators; examples include a weekend configuration script or a flash-crowd-specific network and server configuration script. Orchestration software provides the ability to perform scenario-specific, time-specific or business-policy-specific infrastructure setup and configuration. It brings these scripts under the single umbrella of SDN and hides the error-prone programming from system administrators behind a user-friendly, easy-to-configure and easy-to-monitor graphical user interface.
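
A hedged sketch of what such orchestration tasks might look like, assuming a hypothetical REST-style northbound endpoint (the URL, payload shape and profiles are invented; only the requests library calls are real):

```python
# Sketch of orchestration tasks driving a controller's northbound API.
# The endpoint and payloads are hypothetical illustrations.
import time
import requests

CONTROLLER = "http://controller.example.com:8080"  # hypothetical northbound endpoint

def install_flow(match, action):
    """Push one rule through the controller's (hypothetical) REST API."""
    resp = requests.post(f"{CONTROLLER}/flows",
                         json={"match": match, "action": action}, timeout=5)
    resp.raise_for_status()

# Time-bound task: a weekend configuration profile.
if time.localtime().tm_wday >= 5:           # Saturday=5, Sunday=6
    install_flow({"tp_dst": 22}, "drop")    # e.g., block SSH over the weekend

# Situation-bound task: a flash-crowd profile triggered by monitoring.
def on_flash_crowd_alarm():
    install_flow({"tp_dst": 80}, "group:web_pool_large")  # widen the web pool
```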

One of the most important uses of orchestration/automation software is in cloud computing. The cloud is in essence a data center that runs services either directly on physical servers or on virtual machines (VMs) that share a single physical server, with a user-friendly interface for managing the services, the VMs, the servers and the whole infrastructure. The main idea behind consolidating VMs on a single physical server is to maximize utilization of the hardware investment and minimize operational expenses (OPEX) such as energy costs by running the fewest possible physical servers for a given load. As loads increase, more VMs must be brought up to balance the load and maintain service quality. Hardware virtualization software (hypervisors) makes preserving a running operating system as a snapshot or image easy and automatic. When a virtual machine is created from a snapshot, the underlying networking must also be reconfigured automatically. This is where OpenFlow comes into play to enable network virtualization.
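
The consolidation goal, running the fewest physical servers for a given load, is essentially a bin-packing problem. A minimal first-fit-decreasing sketch, with made-up capacities and loads:

```python
# First-fit-decreasing sketch of VM consolidation: pack VM loads onto as
# few physical servers as possible. Capacities and loads are made up.

SERVER_CAPACITY = 100  # abstract units (e.g., % of one server's CPU)

def consolidate(vm_loads):
    servers = []  # each entry: remaining capacity of one powered-on server
    for load in sorted(vm_loads, reverse=True):  # place the big VMs first
        for i, free in enumerate(servers):
            if load <= free:
                servers[i] -= load               # reuse a running server
                break
        else:
            servers.append(SERVER_CAPACITY - load)  # power on a new server
    return len(servers)

print(consolidate([60, 40, 30, 30, 20, 10]))  # -> 2 servers instead of 6
```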

Here's how it works. When the VM boots up and sends its first Ethernet frame outbound, the switch captures it, sends the layer 2 through layer 4 header information to the controller, and asks where to forward the packet. The controller creates a dynamic "VLAN-like" port grouping based on predefined policies using MAC or IP addresses. Without any administrative intervention, the newly created VM is already part of the existing network and of the pre-configured load-balancer server pool. This practical and exciting approach makes good use of SDN. The automation is generally done through the hypervisor or management software that runs above the hypervisor. While this automation seems magical, there are some important points to consider.
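
A simplified sketch of the controller side of this packet-in flow; the policy table, group names and packet format are invented for illustration:

```python
# Sketch of the controller-side packet-in handling described above.
# Policy table, group names and the "packet" representation are illustrative.

POLICIES = {
    # MAC prefix (OUI) -> logical port group
    "52:54:00": "web_tier",      # e.g., VMs cloned from the web-server image
    "52:54:01": "db_tier",
}

GROUPS = {"web_tier": set(), "db_tier": set(), "quarantine": set()}
LB_POOLS = {"web_tier": "http_pool"}   # groups wired into a load balancer

def on_packet_in(switch_port, eth_src):
    """Called when a switch sees a frame with no matching flow entry."""
    group = POLICIES.get(eth_src[:8], "quarantine")
    GROUPS[group].add(switch_port)               # VLAN-like port grouping
    pool = LB_POOLS.get(group)
    if pool:
        print(f"{eth_src} on port {switch_port} joined {group} "
              f"and LB pool {pool}")             # no admin intervention needed

# First outbound frame from a freshly booted web VM:
on_packet_in(switch_port=7, eth_src="52:54:00:ab:cd:ef")
```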

What's the Catch?
Like expert magicians, SDN vendors misdirect users with the features and opportunities of control and data plane separation while not revealing some important facts. With so much promotional and inaccurate information about SDN prevailing in the market, we should learn to look behind the curtain to fully understand the price paid for the new features. Looked at closely, the price of enabling OpenFlow is obvious: performance. Traditional switches only need to look up fixed-length layer 2 headers; OpenFlow switches must look up variable-length headers such as IP and TCP. While the extra effort of length-delimited lookup and parsing is obvious, there are also some good readings that detail the performance penalties of handling variable-length headers compared to fixed-length headers.
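
The cost difference is easy to see at the byte level. In the simplified sketch below (assuming an untagged Ethernet II frame carrying IPv4/TCP), every layer 2 field sits at a constant offset, while locating the TCP ports requires first reading the IP header length (IHL) field:

```python
# Why variable-length headers cost more to parse: a simplified byte-level view.
# Assumes an untagged Ethernet II frame carrying IPv4/TCP; no VLAN tags.

def l2_lookup(frame: bytes):
    """Fixed-length layer 2: every field is at a constant offset."""
    dst_mac = frame[0:6]
    src_mac = frame[6:12]
    ethertype = int.from_bytes(frame[12:14], "big")
    return dst_mac, src_mac, ethertype

def l4_lookup(frame: bytes):
    """Variable-length layers 3-4: offsets depend on earlier fields."""
    ip = frame[14:]                     # IPv4 header starts after Ethernet
    ihl = (ip[0] & 0x0F) * 4            # header length is data-dependent
    tcp = ip[ihl:]                      # TCP offset known only after reading IHL
    src_port = int.from_bytes(tcp[0:2], "big")
    dst_port = int.from_bytes(tcp[2:4], "big")
    return src_port, dst_port

# A toy frame: zeroed MACs, EtherType 0x0800, 20-byte IPv4 header, TCP ports.
frame = (bytes(6) + bytes(6) + (0x0800).to_bytes(2, "big")   # Ethernet
         + bytes([0x45]) + bytes(19)                          # IPv4, IHL=5
         + (443).to_bytes(2, "big") + (55000).to_bytes(2, "big") + bytes(16))
print(l4_lookup(frame))  # (443, 55000)
```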

Although OpenFlow switches open up an exciting new approach and have brought huge momentum to the networking industry, the illusion that they will replace all layer 2 switches will not hold up when you actually put them to the test and compare the results. OpenFlow should complement the existing infrastructure, not attempt to replace traditional switches, because OpenFlow switches solve a different set of problems. The price we pay for automatically detecting a newly created VM or application session is a substantial hit to packet/frame forwarding performance. While OpenFlow is useful as a traffic engineering and flow management tool, it should not be considered a replacement for a layer 2 switch. This is not just a matter of the OpenFlow protocol's maturity at this point; it is inherent in the design itself.

Hidden Gem
One important aspect of SDN that gets little attention in its specifics is the northbound API. While promotions for "application-oriented" and "application-defined" software and networking products have been swamping the industry, those mostly amount to engineering application traffic based on TCP port numbers. Correctly implemented northbound APIs, however, can bridge the gap between the application and networking worlds. Industry brilliance should be applied to solving the real age-old problem: TCP. Applications utilize TCP, yet application developers treat the network as one big pipe with unlimited bandwidth and speed-of-light connectivity, and applications have limited visibility into the underlying networking or server infrastructure. In the SDN world, controller vendors are pondering and developing northbound APIs, but most consider these APIs merely a CLI replacement, or view them as a southbound interface for other network automation or management software.

Let the Application Be the Controller
Think of a gravity hydro-dam. When counties around the state request more water for irrigation, what happens if the dam's controller decides to honor every request in full? Should it open the water gate fully to deliver the requested quantity without considering how much the distribution pipes can handle? Although most people would never do this, it is exactly what happens in the software world today.

When an application receives incoming requests, it assumes the network has unlimited capacity and light-speed connectivity to the requester. It starts creating packets, spending CPU, memory and disk resources. Only later does a network optimization or QoS device discover that the links are overloaded and drop packets to tell the application to slow down. Not only are all of the consumed resources wasted; the attempt creates even more congestion on the network. Instead of relying on ancient smoke-signaling approaches like packet drops to inform applications about network congestion, SDN vendors should build more robust northbound APIs that give applications real visibility into the network. It would be a paradigm shift in the way applications are developed, and it would address the problem at its source. The promise relies on the simplicity and standardization of the northbound APIs.
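
A hedged sketch of that shift: before committing resources, the application first asks a hypothetical northbound API for headroom on the path (the endpoint and response shape are invented; only the requests library calls are real):

```python
# Sketch of an application consulting a (hypothetical) northbound API
# before committing CPU, memory and disk to building a response.
import requests

NB_API = "http://controller.example.com:8080"   # hypothetical endpoint

def path_capacity_mbps(src, dst):
    """Ask the controller how much headroom the path currently has."""
    resp = requests.get(f"{NB_API}/paths/{src}/{dst}/capacity", timeout=2)
    resp.raise_for_status()
    return resp.json()["available_mbps"]        # assumed response shape

def handle_request(src, dst, required_mbps, build_response):
    if path_capacity_mbps(src, dst) < required_mbps:
        return "503 Retry-After: 5"   # back off before wasting resources
    return build_response()           # only now spend CPU/memory/disk
```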

Although northbound APIs are not yet well defined and are left for vendors to implement with their own sets of rules, the power to make SDN succeed lies in the northbound APIs. They are the real disruption in the industry, not the separation of the data and control planes.

Northbound APIs for Policy Plane
Just as the controller's northbound API abstracts the underlying infrastructure, the need for northbound APIs to the policy plane is also growing. Policies change all the time to align with business goals, and they drive the infrastructure both directly and indirectly. When the policy plane also exposes APIs through which applications can consume priorities and service level agreements (SLAs), the same decoupling that exists today between the forwarding plane and the control plane on the networking side will be achieved at the policy level.

Northbound APIs should allow the application to query the system, network and server infrastructure so the network can be optimized globally. The application should also be able to interact with the policy layer to obtain priorities and SLAs before committing any resources. This will maximize end users' return on their investment in applications and networking infrastructure; instead of shifting problems onto each other, the two sides can truly begin to collaborate and complement one another.
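
A sketch of this query-before-commit pattern, with both endpoints and all field names invented for illustration:

```python
# Sketch of the query-before-commit pattern described above.
# Both endpoints and all field names are invented for illustration.
import requests

CONTROLLER = "http://controller.example.com:8080"   # infrastructure view
POLICY_PLANE = "http://policy.example.com:9090"     # business priorities/SLAs

def commit_transfer(app, src, dst, mbps):
    # 1. Query the network/server infrastructure for current headroom.
    infra = requests.get(f"{CONTROLLER}/paths/{src}/{dst}", timeout=2).json()
    # 2. Query the policy plane for this application's priority and SLA.
    sla = requests.get(f"{POLICY_PLANE}/sla/{app}", timeout=2).json()
    # 3. Commit resources only if both layers agree.
    if infra["available_mbps"] >= mbps and sla["max_mbps"] >= mbps:
        requests.post(f"{CONTROLLER}/reservations",
                      json={"app": app, "src": src, "dst": dst, "mbps": mbps},
                      timeout=2).raise_for_status()
        return True
    return False
```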

The real customers and end users want practical and usable solutions, not definitions. We should think beyond defining the jargon and start creating focused and usable solutions.

References

  1. http://www.cs.cmu.edu/~srini/15-744/F02/readings/McK97.html#3needswitch
  2. https://www.opennetworking.org/about/onf-overview

More Stories By Karthikeyan Subramaniam

Karthikeyan Subramaniam serves as the company's Chief Software Architect and the architect of its Software Defined Networking platform. He led the development of the company's SDN and cloud computing platform work for Verizon, alongside Hewlett Packard and Intel Corp, an industry-leading SDN platform unveiled at the Open Networking Summit, the world's largest SDN event. He has created and developed the company's platforms in Software Defined Networking and interoperability. Previously, he worked at Intel Technology India on Intel Server Systems and Intelligent Platform Management, and at Cisco's Offshore Development Center, in the Enterprise Management Business Unit (EMBU), on Cisco's voice systems, voice gateways and gatekeepers.
