NetFlow vs. sFlow for Network Monitoring and Security: The Final Say

We’ve blogged about the differences between NetFlow and sFlow before, but this debate comes up often enough, and has been going on long enough, that it needs to be put to rest once and for all. So let’s cut right to the chase:

The only people who ever say “sFlow is better than NetFlow” are those who haven’t used both and seen the difference for themselves.

This statement isn’t based on a bias toward a particular vendor or any kind of business driver, but rather on opinions formed over years of operational experience with both flow technologies.

Most customers I’ve talked to, and there have been hundreds over the years, want sFlow to “just behave like NetFlow”. In fact, many customers, when faced with the prospect of sampled data, will deploy NetFlow generators (sometimes called “flow probes”) such as nBox or Cisco’s NGA to create NetFlow based on SPAN ports rather than deal with the difficulties sFlow presents.

Don’t get me wrong, NetFlow and IPFIX have their challenges. NetFlow caching and export mechanics are more difficult to implement correctly, require more resources to operate (especially memory), and implementations in the field vary wildly. NetFlow is a victim of its own popularity. Everyone wants to add NetFlow-like support to their routers, switches, firewalls, load-balancers, and WAN optimizers, but they don’t always stop and check with vendors like Plixer ahead of time to ensure the resulting export will work correctly. Ellen’s post from last week illustrates this point. sFlow doesn’t have a problem with standardization primarily because so few vendors (that matter) have implemented it.
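
To make the caching point concrete, here is a minimal sketch of a NetFlow-style flow cache in Python. This is purely illustrative (real exporters do this in C or in silicon), and the timeouts and field names are assumptions, not any vendor’s defaults:

```python
import time

# Illustrative NetFlow-style flow cache. Keys are the classic 5-tuple.
ACTIVE_TIMEOUT = 60    # export long-lived flows periodically (assumed value)
INACTIVE_TIMEOUT = 15  # export flows that have gone idle (assumed value)

cache = {}  # (src_ip, dst_ip, src_port, dst_port, proto) -> flow state

def account_packet(key, nbytes, now=None):
    """Update (or create) the cache entry for one observed packet."""
    now = time.time() if now is None else now
    flow = cache.get(key)
    if flow is None:
        # Every new conversation costs memory; this per-flow state is the
        # resource pressure mentioned above. sFlow keeps no such state.
        cache[key] = {"first": now, "last": now, "packets": 1, "bytes": nbytes}
    else:
        flow["last"] = now
        flow["packets"] += 1
        flow["bytes"] += nbytes

def expire_flows(now=None):
    """Yield flow records ready to be exported in a v9/IPFIX PDU."""
    now = time.time() if now is None else now
    for key, flow in list(cache.items()):
        if now - flow["last"] >= INACTIVE_TIMEOUT or now - flow["first"] >= ACTIVE_TIMEOUT:
            del cache[key]
            yield key, flow  # one record summarizes the whole conversation
```

Getting the timeout handling, template management, and memory bounds right across thousands of concurrent flows is where implementations diverge, which is exactly the variability described above.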

But NetFlow’s benefits far outweigh its few shortcomings:

The full story

While sampling is available for NetFlow, it’s not a requirement. People just don’t like sampled data. Especially security people. Sure, if you get enough packets over a long enough period you can work out the proper traffic levels, but when you’re trying to hunt down an intrusion that occurred over a single two-minute HTTP SQL injection you need the full story. Sampling technology simply doesn’t provide the full story. It’s like only reading every 128th word in a novel. At the end of the story you have a vague outline of what happened and who the characters were, but little more. Security analysts don’t want 1 out of every 128 packets, they want one out of every one.
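
The arithmetic behind that analogy is simple. Assuming random 1-in-128 sampling, each packet of a flow escapes observation with probability 127/128, so a short attack flow can easily produce no samples at all:

```python
# Probability that 1-in-128 random sampling misses an entire n-packet flow:
# every packet independently escapes sampling with probability 127/128.
RATE = 128

def p_flow_missed(n_packets, rate=RATE):
    return (1 - 1 / rate) ** n_packets

for n in (10, 50, 100, 500, 1000):
    print(f"{n:>5}-packet flow: {p_flow_missed(n):7.2%} chance of zero samples")

# Approximate output:
#    10-packet flow:  92.46% chance of zero samples
#    50-packet flow:  67.56% chance of zero samples
#   100-packet flow:  45.64% chance of zero samples
#   500-packet flow:   1.98% chance of zero samples
#  1000-packet flow:   0.04% chance of zero samples
```

A two-minute injection attempt that fits in a few dozen packets will, more often than not, never appear in the sampled data at all.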

Better collector support

Here at Plixer we spend approximately 95% of our day working with NetFlow/IPFIX customers. Just about everybody supports either NetFlow or IPFIX, so the majority of the feature requests, bug submissions, and support calls are related to NetFlow/IPFIX. Over time this has forced us to fine-tune our NetFlow support. Sure, we still support sFlow and it works as well as can be expected, but it simply hasn’t had as much attention as NetFlow/IPFIX has. How could it? We are customer driven and the customers use NetFlow.

Better vendor support

In addition, many vendors have moved NetFlow processing into hardware in all of their newer product lines. Examples include Cisco’s Cat6k w/Sup2T or the Catalyst 4500E w/Sup7E. Even non-Cisco companies are innovating on the NetFlow front. Enterasys has added powerful hardware-based NetFlow support to their S Series and K Series switches. Hardware-based NetFlow removes one of the NetFlow naysayers’ main complaints: impact on CPU.

More fields, more innovation

You have to give it to Cisco on this one: they have really put a lot of effort into NetFlow over the last few years. Flexible NetFlow, NBAR, MediaNet, ASA NAT export, PfR; the list of extended fields goes on and on.

sFlow has nothing like any of this, and I’m not aware of any work to make it better. If you do know of some recent sFlow advancements that catch it up to NetFlow/IPFIX, drop a comment or shoot me an email. I would love to hear about it.

Firewalls have adopted NetFlow

Firewalls tend to be located in places where visibility is most needed: at aggregation points and key access control locations, often separating critical assets from untrusted ones. NetFlow is *perfect* for measuring and monitoring key points in the network. It took a while, but the firewall vendors finally figured this out, and now firewalls are among the most recent adopters of NetFlow/IPFIX support. With the exception of Fortinet, every firewall vendor thus far has chosen either NetFlow or IPFIX. The list includes Palo Alto, Check Point, SonicWall, and of course Cisco’s ASA. Why do firewalls prefer NetFlow? Because their customers want the full story, not samples.

NetFlow/IPFIX works well for all event types, not just TCP/IP

sFlow is fundamentally oriented around Ethernet frames. At its heart, sFlow is designed to sample packets. Nothing more, nothing less. IPFIX and NetFlow v9, by contrast, are incredibly flexible. While most implementations revolve around a source and destination IP address, that’s not a requirement of the protocols. NetFlow/IPFIX can be used to export any kind of structured data. In fact, IPFIX’s variable-length fields can even be used to export semi-unstructured data such as URLs, proprietary vendor strings, or hostnames.
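
To illustrate the variable-length mechanism, here is a small sketch of how RFC 7011 encodes such a field. The URL value is hypothetical; only the length-encoding rule comes from the spec:

```python
import struct

# RFC 7011 variable-length encoding sketch. In the template, a field
# length of 0xFFFF marks the element as variable-length; in each data
# record the value is then prefixed with its own length.
VARLEN = 0xFFFF  # template-side marker for "variable-length field"

def encode_varlen(value: bytes) -> bytes:
    """Encode one variable-length field value for an IPFIX data record."""
    if len(value) < 255:
        return struct.pack("!B", len(value)) + value    # 1-byte length prefix
    return struct.pack("!BH", 255, len(value)) + value  # 255, then 2-byte length

url = b"http://example.com/some/long/path"
field = encode_varlen(url)
print(field[0], field[1:])  # 33 (the length), then the URL bytes
```

That one rule is what lets a single IPFIX stream carry fixed-size counters right next to free-form strings like URLs and hostnames.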

Multiple templates can be used to export several different data sets simultaneously. The infrastructure vendor community is only now beginning to understand the potential power behind IPFIX data export. sFlow just isn’t flexible enough to accommodate the evolving role of flows in enterprise and service provider networks. And finally, while I respect the effort by the folks at sFlow.org, sFlow doesn’t have the research following that IPFIX can claim. Individuals such as Benoit Claise are driving the standardization of flow exports and consistently adding to the “how can we make flows better” discussion.

NetFlow works well over a WAN, sFlow doesn’t

The dirty little secret about sFlow that everyone likes to ignore is that the amount of sFlow leaving the router is directly proportional to the packets-per-second rate. This means that the higher the packet rate at a remote site, the higher the sFlow record rate leaving the site destined for the collector. NetFlow, on the other hand, is based on the number of active connections, not the packet rate. If I transfer a 500MB file from A to B, sFlow will create *several thousand* packet samples to represent the transfer while NetFlow will create only two 46-byte NetFlow entries. This is a deal breaker for most customers. It’s entirely possible to saturate a WAN link with sFlow samples. And since sFlow runs over connectionless, uni-directional UDP, there is no way to tell the exporter to slow down.
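
A back-of-envelope calculation shows the scale of the difference. The packet size, sampling rate, and per-sample overhead below are assumptions for illustration, not measurements:

```python
# Back-of-envelope comparison for the 500MB transfer example above.
FILE_BYTES = 500 * 1024 * 1024   # the 500MB file
PACKET_BYTES = 1500              # assume full-size Ethernet frames
SAMPLE_RATE = 128                # assume 1-in-128 sFlow sampling
SFLOW_SAMPLE_BYTES = 150         # assumed on-the-wire size of one sample
NETFLOW_RECORD_BYTES = 46        # the per-record figure cited above

packets = FILE_BYTES // PACKET_BYTES    # ~349,525 packets
sflow_samples = packets // SAMPLE_RATE  # ~2,730 samples
print("sFlow:  ", sflow_samples, "samples, ~", sflow_samples * SFLOW_SAMPLE_BYTES, "bytes of telemetry")
print("NetFlow:", 2, "records, ~", 2 * NETFLOW_RECORD_BYTES, "bytes of telemetry")
```

Roughly 400 KB of telemetry versus under 100 bytes for the same transfer, and the sFlow figure grows linearly with traffic volume, which is exactly why remote sites can saturate their WAN links.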

Don’t get me wrong…

If all you have are sFlow-enabled devices, you should still look into turning on sFlow exports. It’ll give you traffic stats and other bits of info that are better than nothing at all. It’s just that when I see comments like this show up, I wonder who these people are and where they’ve been. The NetFlow vs. sFlow war is over. Someone should let them know. Perhaps this will help.

Not convinced? You be the judge. Plixer’s Scrutinizer flow collector accepts both NetFlow and sFlow. Download a free trial here and try it out yourself.

About the Author

Michael Patterson is the founder & CEO of Plixer and the product manager for Scrutinizer NetFlow and sFlow Analyzer. Prior to starting Somix and Plixer, Mike worked in a technical support role at Cabletron Systems, acquired his Novell CNE, and then moved to the training department for a few years. While in training he finished his Master’s in Computer Information Systems at Southern New Hampshire University and then left technical training to pursue a new skill set in Professional Services. In 1998 he left the 'Tron' to start Somix and Plixer.
