Scripting Is Automation, But Automation Is Not Scripting

There are many extremely complex clustered applications that rely entirely on exchanging information through APIs

Last week Greg Ferro (@etherealmind) wrote this article about his experience with scripting as a method for network automation, with the ultimate conclusion that scripting does not scale.

Early in my career I managed a small network that grew to be an IP over X.25 hub of Europe for a few years, providing many countries with their first Internet connectivity. Scripts were everywhere: small ones to grab stats and create pretty graphs, others that continuously checked the status of links and would send emails when things went wrong.

While it is hard to argue with Greg’s complaints per se, I believe they miss the key point, and it has nothing to do with scripting. In a reply, Ivan’s last comment touches on the real issue.

We have been scripting our networks against CLIs forever and I will bet you most folks will consider it successful, even though it may be a pain. The article lists the pains, but not the reasons why. As a network industry, we have never ever considered the interaction with our network devices an API. Not in the true software engineering sense of an API.

There are many extremely complex clustered applications that rely entirely on exchanging information through APIs that are well documented, well versioned, well abstracted and properly promoted or deprecated. Creating and maintaining APIs is a real software engineering effort, a skill that requires true architecture, engineering and discipline. And we have not given our users anything close to it.
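To make the versioning and deprecation point concrete, here is a minimal sketch in Python with Flask. The endpoint names and payloads are hypothetical, and the headers shown are just one common convention; the idea is that the old version keeps working but announces its successor, while the promoted version carries the richer, documented schema.

```python
# Hypothetical sketch of "well versioned and properly deprecated" endpoints.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/links")
def links_v1():
    # Deprecated but still functional; clients are told where to go next.
    resp = jsonify([{"id": 1, "status": "up"}])
    resp.headers["Deprecation"] = "true"
    resp.headers["Link"] = '</api/v2/links>; rel="successor-version"'
    return resp

@app.route("/api/v2/links")
def links_v2():
    # The promoted version exposes a richer, documented schema.
    return jsonify([{"id": 1, "operational_state": "up", "speed_gbps": 100}])
```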

If we (that collective network industry) had truly considered our CLI an API, we would (and should) have been pushed aside a long time ago. The CLI is and always has been a simple interface for a human to tell a device what to do. It was not designed to be automated. It is not structured enough to be automated. Even large vendors have multiple flavors that are all industry standard, but all slightly different. And nowhere would you find a formal, full and complete dictionary of that CLI with all inputs, outputs, versions and options. The closest the network industry has had to a true API is SNMP, and that is indeed a very sad statement.

I think we have mentioned before that the networking industry is a bit slow to get to modern software engineering methods and practices, but the tide is changing. And whether you want to call it SDN or something else, the sheer volume and complexity of interaction with the network is pushing us to provide truly automated access to our devices and our networks.

And creating and maintaining APIs is about far more than the technology used to access them. It does not matter whether it's XML, JSON, REST, NETCONF or anything else. Those are definitions of how information is carried to and from the device and network. I can build a wonderful REST API that takes a CLI command as an argument and spits the output from that CLI command back at me in some format. I am sure that sounds familiar to some, but this is not an API. Not in a truly meaningful way that would elevate our automation abilities.
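As an illustration of that anti-pattern, here is a minimal sketch (Python with Flask; the device address and endpoint are hypothetical) of a "REST API" that merely relays a CLI command and returns its raw output. It speaks HTTP and JSON, but nothing about the command's inputs, outputs or versions is modeled, so it is still screen-scraping.

```python
# Hypothetical "REST-over-CLI" wrapper: structurally REST, semantically not an API.
from flask import Flask, request, jsonify
import subprocess

app = Flask(__name__)

@app.route("/api/cli", methods=["POST"])
def run_cli():
    command = request.json["command"]        # e.g. "show interfaces brief"
    # No schema, no versioning: any string goes in, unstructured text comes out.
    output = subprocess.run(
        ["ssh", "admin@switch1", command],   # hypothetical device address
        capture_output=True, text=True
    ).stdout
    return jsonify({"command": command, "output": output})
```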

Designing and implementing APIs is not trivial. Believe me, as an entirely API-driven solution, we spend a tremendous amount of time discussing our APIs and abstractions to make sure they strike the right balance between granularity, functionality, abstraction, scaling and a few other relevant qualifiers. But the key is that they are part of any feature design from day one, they are part of the overarching architecture, not bolted on at the end. Our APIs are not perfect, there is no such thing, but they are modeled after the workflow of you, the user, doing the work required to keep the network running and thriving.

So when you need to configure MLAG on a set of Plexxi switches, we do not give you a series of API calls to bundle ports together on a switch, give them a unique ID, then tie the switches together as an MLAG pair that shares that unique ID. Oh, and create an MLAG control channel between them, and make each of the switch-local LAGs carry the same set of VLANs. Our API will simply take a list of port objects from any number of switches in a Plexxi network and turn them into an MLAG. And then you can simply take that entire entity and stick a VLAN on top, and we will make sure the participating switches get the pieces they need. That is abstraction, that is workflow encapsulation, that is what APIs are supposed to give you. That is how simple LAG is supposed to be.
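For contrast, here is a hypothetical sketch of what that workflow-level abstraction could look like to a caller. The class and function names are illustrative, not the actual Plexxi API: one call takes ports from any number of switches and returns an MLAG entity, and the VLAN is attached to that entity rather than to each switch.

```python
# Hypothetical workflow-level API: the caller describes intent, the
# controller works out the per-switch pieces.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Port:
    switch: str
    name: str

@dataclass
class MLag:
    ports: List[Port]
    vlans: List[int] = field(default_factory=list)

    def add_vlan(self, vlan_id: int) -> None:
        # One call on the MLAG entity; pushing LAG membership, MLAG pairing,
        # control channel and VLAN membership to each switch is the
        # controller's job, not the caller's.
        self.vlans.append(vlan_id)

def create_mlag(ports: List[Port]) -> MLag:
    # Ports may come from any number of switches in the network.
    return MLag(ports=ports)

# Usage: bundle ports from two switches and put VLAN 100 on the result.
mlag = create_mlag([Port("switch-1", "eth1"), Port("switch-1", "eth2"),
                    Port("switch-2", "eth1"), Port("switch-2", "eth2")])
mlag.add_vlan(100)
```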

We have a long way to go as an industry to get to full APIs the way real software folks think about APIs. The CLI is not it. Scripting against a CLI (or a CLI hidden behind a layer of official-sounding API terms) is a useful tool, but one that should be mostly retired to get to true programmable networks that are controlled by real controllers (in the broadest definition of the word) using real APIs. Automation is not scripting.

[Today's fun fact: to make sure you do not think I am anti-scripting, I once wrote a large chunk of a 10,000-line Perl 4 system. It functioned very nicely for years as the RIPE database for IP address allocations back in the mid-90s. Thankfully it has since been tackled by real software engineers.]

More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
