NetFlow vs. sFlow for Network Monitoring and Security: The Final Say

We’ve blogged about the differences between NetFlow and sFlow before but this debate continues to come up often enough and has been going on long enough that it needs to be put to rest once and for all. So let’s cut right to the chase:

The only people that ever say “sFlow is better than NetFlow” are those that haven’t used both and seen the difference for themselves.

This statement isn’t based on a bias toward a particular vendor or any kind of business driver, but rather on opinions formed over years of operational experience with both flow technologies.

Most customers that I’ve talked to, and there have been hundreds over the years, want sFlow to “just behave like NetFlow”. In fact, many customers, when faced with the prospect of sampled data, will deploy NetFlow generators (sometimes called “flow probes”) such as nBox or Cisco’s NGA to create NetFlow from SPAN ports rather than deal with the difficulties sFlow presents.

Don’t get me wrong, NetFlow and IPFIX have their challenges. NetFlow caching and export mechanics are more difficult to implement correctly, require more resources to operate (especially memory), and implementations in the field vary wildly. NetFlow is a victim of its own popularity. Everyone wants to add NetFlow-like support to their routers, switches, firewalls, load-balancers, and WAN optimizers, but they don’t always stop and check with vendors like Plixer ahead of time to ensure the resulting export will work correctly. Ellen’s post from last week illustrates this point. sFlow doesn’t have a problem with standardization, primarily because so few vendors (that matter) have implemented it.
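To make the caching point concrete, here is a minimal, hypothetical sketch of the kind of per-flow state a NetFlow/IPFIX exporter has to maintain (the timeouts and data layout are illustrative assumptions, not any vendor’s implementation). Keeping and aging this table is exactly the memory and timer work that sFlow avoids by simply forwarding sampled packet headers:

```python
# Hypothetical NetFlow-style flow cache (illustrative only).
# Each active conversation consumes an entry until a timeout expires it.
import time

ACTIVE_TIMEOUT = 60     # seconds: export long-lived flows periodically (assumed value)
INACTIVE_TIMEOUT = 15   # seconds: export flows that have gone idle (assumed value)

class FlowCache:
    def __init__(self):
        # key: (src, dst, sport, dport, proto) -> [bytes, packets, first_seen, last_seen]
        self.flows = {}

    def update(self, src, dst, sport, dport, proto, length, now=None):
        now = now if now is not None else time.time()
        key = (src, dst, sport, dport, proto)
        entry = self.flows.get(key)
        if entry is None:
            self.flows[key] = [length, 1, now, now]   # new flow: allocate state (memory cost)
        else:
            entry[0] += length
            entry[1] += 1
            entry[3] = now

    def expire(self, now=None):
        """Return flow records that hit a timeout and remove them from the cache."""
        now = now if now is not None else time.time()
        exported = []
        for key, (nbytes, pkts, first, last) in list(self.flows.items()):
            if now - last > INACTIVE_TIMEOUT or now - first > ACTIVE_TIMEOUT:
                exported.append((key, nbytes, pkts, first, last))
                del self.flows[key]
        return exported  # in a real exporter these would become NetFlow/IPFIX records
```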

But NetFlow’s benefits far outweigh its few shortcomings:

The full story

While sampling is available for NetFlow, it’s not a requirement. People just don’t like sampled data. Especially security people. Sure, if you get enough packets over a long enough period you can work out the proper traffic levels, but when you’re trying to hunt down an intrusion that occurred over a single two-minute HTTP SQL injection you need the full story. Sampling technology simply doesn’t provide the full story. It’s like only reading every 128th word in a novel. At the end of the story you have a vague outline of what happened and who the characters were, but little more. Security analysts don’t want 1 out of every 128 packets, they want one out of every one.
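To put rough numbers on that analogy (all figures below are illustrative assumptions, not measurements), 1-in-128 sampling does a reasonable job of estimating aggregate volume, but a short attack flow can easily leave no samples at all:

```python
# Back-of-the-envelope comparison of what 1-in-128 sampling sees (assumed numbers).
SAMPLING_RATE = 128

bulk_transfer_packets = 2_000_000   # a busy link over some period (assumption)
attack_flow_packets = 40            # a two-minute HTTP SQL injection attempt (assumption)

expected_bulk_samples = bulk_transfer_packets / SAMPLING_RATE    # ~15,600 samples
expected_attack_samples = attack_flow_packets / SAMPLING_RATE    # ~0.3 samples

# Probability the attack flow is never sampled at all (simple binomial model):
p_missed = (1 - 1 / SAMPLING_RATE) ** attack_flow_packets        # ~73%

print(f"bulk transfer samples:  ~{expected_bulk_samples:.0f}")
print(f"attack flow samples:    ~{expected_attack_samples:.1f}")
print(f"chance the attack leaves no samples at all: {p_missed:.0%}")
```

In other words, the aggregate graph still looks right, but the two-minute incident the analyst cares about may simply not be there.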

Better collector support

Here at Plixer we spend approximately 95% of our day working with NetFlow/IPFIX customers. Just about everybody supports either NetFlow or IPFIX, so the majority of the feature requests, bug submissions, and support calls are related to NetFlow/IPFIX. Over time this has forced us to fine-tune our NetFlow support. Sure, we still support sFlow and it works as well as can be expected, but it simply hasn’t had as much attention as NetFlow/IPFIX has. How could it? We are customer driven and the customers use NetFlow.

Better vendor support

In addition, many vendors have moved NetFlow processing into hardware in all of their newer product lines. Examples include Cisco’s Cat6k w/Sup2T or the Catalyst 4500E w/Sup7E. Even non-Cisco companies are innovating on the NetFlow front. Enterasys has added powerful hardware-based NetFlow support to their S Series and K Series switches. Hardware-based NetFlow removes one of the NetFlow naysayers’ main complaints: impact on CPU.

More fields, more innovation

You have to give it to Cisco on this one: they have really put a lot of effort into NetFlow over the last few years. Flexible NetFlow, NBAR, MediaNet, ASA NAT export, PfR; the list of extended fields goes on and on.

sFlow has nothing like any of this, and I’m not aware of any work to make it better. If you do know of some recent sFlow advancements that catch it up to NetFlow/IPFIX, drop a comment or shoot me an email. I would love to hear about it.

Firewalls have adopted NetFlow

Firewalls tend to be located in places where visibility is most needed: at aggregation points and key access control locations, often separating critical from untrusted assets. NetFlow is *perfect* for measuring and monitoring key points in the network. It took a while, but the firewall vendors finally figured this out, and now firewalls are among the most recent adopters of NetFlow/IPFIX support. With the exception of Fortinet, every firewall vendor thus far has chosen either NetFlow or IPFIX. The list includes Palo Alto, Check Point, SonicWall, and of course Cisco’s ASA. Why do firewalls prefer NetFlow? Because their customers want the full story, not samples.

NetFlow/IPFIX works well for all event types, not just TCP/IP

sFlow is fundamentally oriented around Ethernet frames. At its heart sFlow is designed to sample packets. Nothing more, nothing less. IPFIX and NetFlow v9 are incredibly flexible. While most implementations revolve around a source and destination IP address, it’s not a requirement of the protocols. NetFlow/IPFIX can be used to export any kind of structured data. In fact, IPFIX’s variable length fields can even be used to export semi-unstructured data such as URLs, proprietary vendor strings, or hostnames.
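To illustrate that flexibility, here is a rough sketch of an IPFIX (RFC 7011) message carrying one template and one data record, including a variable-length string field. The field layout follows the RFC, but the element ID used for the URL and all of the values are illustrative assumptions, not the output of any real exporter:

```python
# Rough sketch of an IPFIX message: template set + data set with a
# variable-length string field (illustrative values throughout).
import socket
import struct
import time

TEMPLATE_ID = 256   # template IDs start at 256

def field(element_id, length):
    return struct.pack("!HH", element_id, length)

# Template set (set ID 2): srcIP, dstIP, octetDeltaCount, plus a variable-length string
fields = (
    field(8, 4) +        # sourceIPv4Address
    field(12, 4) +       # destinationIPv4Address
    field(1, 8) +        # octetDeltaCount
    field(461, 0xFFFF)   # variable-length field for a URL (element ID is an assumption)
)
template_rec = struct.pack("!HH", TEMPLATE_ID, 4) + fields       # 4 = field count
template_set = struct.pack("!HH", 2, 4 + len(template_rec)) + template_rec

# One data record using that template
url = b"/login.php?user=admin'--"
data_rec = (
    socket.inet_aton("10.1.1.5") +
    socket.inet_aton("192.0.2.80") +
    struct.pack("!Q", 524_288_000) +          # byte count
    struct.pack("!B", len(url)) + url         # variable-length encoding: 1-byte length prefix
)
data_set = struct.pack("!HH", TEMPLATE_ID, 4 + len(data_rec)) + data_rec

# IPFIX message header: version 10, total length, export time, sequence, domain ID
body = template_set + data_set
header = struct.pack("!HHIII", 10, 16 + len(body), int(time.time()), 0, 1)
message = header + body   # ready to hand to a UDP socket aimed at a collector
```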

Multiple templates can be used to export several different data sets simultaneously. The infrastructure vendor community is only now beginning to understand the potential power behind IPFIX data export. sFlow just isn’t flexible enough to accommodate the evolving role of flows in enterprise and service provider networks. And finally, while I respect the effort by the folks at sFlow.org, sFlow doesn’t have the research following that IPFIX can claim. Individuals such as Benoit Claise are driving the standardization of flow exports and consistently adding to the “how can we make flows better” discussion.

NetFlow works well over a WAN, sFlow doesn’t

The dirty little secret about sFlow that everyone likes to ignore is that the amount of sFlow leaving the router is directly proportional to the packets-per-second rate. This means that the higher the packet rate at a remote site, the higher the sFlow record rate leaving the site destined for the collector. NetFlow, on the other hand, is based on the number of active connections, not the packet rate. If I transfer a 500MB file from A to B, sFlow will create *several thousand* packet samples to represent the transfer while NetFlow will create only two 46 byte NetFlow entries. This is a deal breaker for most customers. It’s entirely possible to saturate a WAN link with sFlow samples. And since sFlow runs over connectionless, unidirectional UDP, there is no way to tell the exporter to slow down.
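The arithmetic behind that 500MB example looks roughly like this (packet size, sampling rate, and sample size on the wire are assumptions chosen for illustration):

```python
# Rough WAN-overhead comparison for a single 500 MB transfer (assumed constants).
FILE_BYTES = 500 * 1024 * 1024      # the 500MB transfer from the example
PACKET_PAYLOAD = 1460               # typical TCP payload per packet on an Ethernet path
SAMPLING_RATE = 128                 # common sFlow sampling setting
SFLOW_SAMPLE_BYTES = 200            # rough size of one flow sample on the wire (assumption)
NETFLOW_RECORD_BYTES = 46           # per the post, one record per direction

packets = FILE_BYTES // PACKET_PAYLOAD                 # ~359,000 packets
sflow_samples = packets // SAMPLING_RATE               # ~2,800 samples
sflow_bytes = sflow_samples * SFLOW_SAMPLE_BYTES       # ~550 KB of telemetry
netflow_bytes = 2 * NETFLOW_RECORD_BYTES               # 92 bytes of telemetry

print(f"packets on the wire: {packets:,}")
print(f"sFlow samples: {sflow_samples:,} (~{sflow_bytes/1024:.0f} KB exported)")
print(f"NetFlow records: 2 ({netflow_bytes} bytes exported)")
```

The exact constants will vary by platform, but the shape of the result doesn’t: sFlow export volume tracks the traffic itself, while NetFlow export volume tracks the number of conversations.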

Don’t get me wrong…

If all you have are sFlow-enabled devices you should still look into turning on sFlow exports. It’ll give you traffic stats and other bits of info that are better than nothing at all. It’s just that when I see comments like this show up I wonder who these people are and where they’ve been. The NetFlow vs. sFlow war is over. Someone should let them know. Perhaps this will help.

Not convinced? You be the judge. Plixer’s Scrutinizer flow collector accepts both NetFlow and sFlow. Download a free trial here and try it out yourself.

More Stories By Michael Patterson

Michael Patterson is the founder & CEO of Plixer and the product manager for Scrutinizer NetFlow and sFlow Analyzer. Prior to starting Somix and Plixer, Mike worked in a technical support role at Cabletron Systems, acquired his Novell CNE and then moved to the training department for a few years. While in training he finished his Master’s in Computer Information Systems from Southern New Hampshire University and then left technical training to pursue a new skill set in Professional Services. In 1998 he left the 'Tron' to start Somix and Plixer.
