What is 25 Gigabit Ethernet and why would you want it?

In the past few weeks you may have seen several press releases and articles talking about 25 Gbit Ethernet. Just when you got used to Ethernet speeds being a nice decimal-based system where we simply add zeros every few years, someone threw in 40GbE a few years ago. And that's OK, a clean 4x multiple we can deal with, but 25? That just does not fit our mental model of Ethernet.

SerDes

The driving force behind 25GbE is actually fairly simple and straightforward. If you open up an Ethernet switch (small, large, does not really matter), you will find that all the high-speed components are connected using serial links driven by SerDes, the rather boring concatenation of SERializer and DESerializer. The serializer is a piece of logic that takes data to be transferred and serializes it; the deserializer sits on the receiving side and reconstructs the serial stream of bits back into data for the ultimate receiver. Between the two, there are some basic encoding mechanisms to keep their clocks synchronized, some basic framing and a few other things. Google for 64B/66B encoding if you really want to understand the gory details.
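To make the serializer/deserializer idea concrete, here is a toy sketch in Python. It only shows the core job, turning bytes into a bit stream and back; real SerDes hardware also does 64B/66B encoding, clock recovery, and framing, all omitted here, and the function names are mine, not any real API.

```python
# Toy SerDes sketch: the serializer flattens bytes into a stream of bits,
# the deserializer on the far side reassembles them into bytes.
# Real hardware adds 64B/66B encoding, clock recovery, etc. (omitted).

def serialize(data: bytes):
    """Yield the bits of `data`, most significant bit first."""
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def deserialize(bits):
    """Reassemble a bit stream (MSB first) back into bytes."""
    out, acc, n = bytearray(), 0, 0
    for bit in bits:
        acc = (acc << 1) | bit
        n += 1
        if n == 8:
            out.append(acc)
            acc, n = 0, 0
    return bytes(out)

payload = b"hello"
assert deserialize(serialize(payload)) == payload  # round trip is lossless
```

The round trip is the whole point: whatever goes into the serializer must come out of the deserializer unchanged, no matter how fast the wire in between runs.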

Gigabit and 10 Gigabit Ethernet run over these SerDes connections between components. In a typical 10GbE top-of-rack switch, the actual Ethernet ports are SerDes connections coming off the Ethernet switching chip (everyone has heard of Trident2, the market-leading chipset in use today, which provides 128 of them, each the equivalent of a 10GbE port). These connections are then used to connect to other Ethernet or fabric chips (in the case of chassis-based systems), or directly to the cages that SFP+ and QSFP optics plug into. Communication between an SFP+ in the front of the switch and the switching chip runs on top of one of these SerDes connections.

As you probably figured out, the components used in today's switches all run SerDes at a clock rate around 12.5GHz, providing roughly a 10Gbit transfer rate between components across each link once you allow for the encoding overhead. Until recently, that speed was about the state of the art for running serial links across short distances (remember, this is all inside a single device) within acceptable signal-loss and crosstalk limits. Signal integrity is not one of my strong points, so that's about the best explanation I can give you.
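That encoding overhead is easy to quantify. With 64B/66B encoding, every 64 payload bits go onto the wire as 66 bits, so the serial lane has to run slightly faster than the nominal Ethernet rate. A back-of-the-envelope sketch (the function name is mine, purely illustrative):

```python
# 64B/66B encoding sends 66 wire bits for every 64 payload bits, so the
# serial line rate must exceed the nominal Ethernet rate by a factor 66/64.

def line_rate_gbps(payload_gbps: float) -> float:
    """Serial line rate needed to carry `payload_gbps` with 64B/66B encoding."""
    return payload_gbps * 66 / 64

print(line_rate_gbps(10))  # 10.3125  -> a 10GbE lane on the wire
print(line_rate_gbps(25))  # 25.78125 -> a 25GbE lane on the wire
```

Those numbers match the actual 10GBASE-R and 25GbE lane rates, which is why "10G" and "25G" SerDes really tick along a bit faster than their names suggest.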

10 to 40 to 100

With that 10Gbit building block we have created higher-speed interfaces. When you look at a 40GbE interface, it is constructed out of 4 parallel SerDes links between the Ethernet chip and the QSFP pluggable. Even when leaving the QSFP onto fiber, it takes 4 parallel 10Gbit streams to transport the data to the receiving QSFP. Short-reach QSFP interfaces use 4 pairs of fiber between them, and their copper Direct Attach Cable (DAC) equivalents carry the same streams on several copper conductors inside the one big cable. Longer-reach QSFP interfaces put the 4 10Gbit streams onto separate Wavelength Division Multiplexing (WDM) wavelengths, which can be carried over a single pair of fiber. This is part of the reason QSFP optics are still fairly expensive, especially for longer distances. Distribution of the bits across these parallel paths is done by the hardware on an almost bit-by-bit basis and has nothing to do with the packet-based distribution we know from Ethernet.
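That hardware distribution can be sketched as simple round-robin striping. The toy below stripes per bit for clarity; the real 40GBASE-R PCS actually stripes 66-bit blocks across the lanes, and the function names are mine:

```python
# Sketch of lane striping: a 40GbE stream is spread round-robin across
# 4 parallel 10G lanes by the hardware. Simplified to per-bit striping;
# the real PCS distributes 66-bit encoded blocks, not individual bits.

def stripe(bits, lanes=4):
    """Distribute a bit stream across `lanes` parallel paths, round-robin."""
    paths = [[] for _ in range(lanes)]
    for i, bit in enumerate(bits):
        paths[i % lanes].append(bit)
    return paths

def merge(paths):
    """Reassemble the original stream from the parallel paths."""
    out = []
    for group in zip(*paths):  # take one element from each lane in turn
        out.extend(group)
    return out

stream = [1, 0, 1, 1, 0, 0, 1, 0]
paths = stripe(stream)           # 4 lanes of 2 bits each
assert merge(paths) == stream    # the receiver recovers the original order
```

The key contrast with Ethernet-level load balancing: the striping is oblivious to packet boundaries, so all lanes must arrive together for the stream to be reassembled.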

Similarly, the 100GbE interfaces available today are really constructed out of 10 parallel 10Gbit streams. As in the 40GbE example above, these are carried across 10 pairs of fiber, or multiplexed together onto a single fiber. Of course that also comes at a cost.

In the past 2-3 years, technology has advanced to the point that 25GHz SerDes have become economically viable, and the usual signal-integrity physics problems have found solutions. This means we can now push data 2.5 times as fast across those serial links, and Ethernet chipsets due in the next year will start to have 25GHz SerDes ports on them rather than 12.5GHz ports. Once you have these ports, you can of course still run 10GbE across them, but you would not use all the capacity of the connection. 40GbE will then have the option to run across 2 parallel 25GHz SerDes rather than the 4 required today, and that translates into less cabling between devices. Similarly, 100GbE will move away from the current 10x10 implementations rather quickly to 4x25, for the very same reasons. Fewer parallel paths, fewer fibers, fewer optics, less everything.
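The lane arithmetic behind that shift is simple enough to write down. Using the nominal 10G and 25G lane rates from the text (illustrative only; the helper name is mine):

```python
import math

# How many parallel SerDes lanes does each Ethernet speed need,
# at the old 10G lane rate versus the new 25G lane rate?

def lanes_needed(ethernet_gbps: int, serdes_gbps: int) -> int:
    """Minimum whole lanes to carry `ethernet_gbps` at `serdes_gbps` per lane."""
    return math.ceil(ethernet_gbps / serdes_gbps)

for speed in (10, 40, 100):
    print(f"{speed}GbE: {lanes_needed(speed, 10)} x 10G lanes "
          f"vs {lanes_needed(speed, 25)} x 25G lanes")
# 40GbE drops from 4 lanes to 2; 100GbE drops from 10 lanes to 4.
```

Every lane saved is a fiber pair, a laser, and a connector you no longer need, which is where the "less everything" payoff comes from.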

Which then leaves the question: if there is this basic 25GHz building block that we intend to use for 40GbE and 100GbE, why would we not want to use it by itself for 25GbE? As a single signal, it would provide a 2.5x performance boost in an SFP+ form factor without doing anything complicated. It's like taking 10GbE and simply running it faster, once the hardest part, running a serial signal that fast, has been solved.

And then there is 25

And there is your long-winded but rather straightforward reason for 25Gbit Ethernet. Independent of Ethernet, serial I/O technology has created an extremely useful building block that runs much faster than its predecessor. The IEEE, in its standardization of 100GbE, already assumed 25GHz serial I/O capabilities and layered its definition of 100GbE on top of it (the 10x10 available today is mostly a placeholder, so make sure you ask your vendor what flavor of 100GbE they provide). But that same IEEE never went back to re-apply that 25GHz technology to 10GbE and 40GbE and turn them into 25GbE and 50GbE. With much of the foundational work already done as part of the 100GbE specifications, this is not the tremendous 4-5 year effort that most IEEE standards take.

The vendor industry has taken it upon itself to move this along outside of the IEEE with a 25GbE consortium. Several parts and components are required to create complete 25GbE solutions. The Ethernet chips will start to have them within a year; we then also need pluggable optics and perhaps even Direct Attach Cables to support native 25GbE and its 50GbE sister, and of course server NIC cards need to support this as well. This is one of those efforts that requires a relatively small development across all of these components (emphasis on relatively) with a fairly quick 2.5x performance payoff at the end. As a consumer, 25GbE and 50GbE will provide you with a substantial performance boost in your datacenter server and storage environment with less cabling, at a cost that, in my opinion, will fall to a small premium over 10GbE fairly quickly over the next few years.

At Plexxi we fully support the 25GbE efforts; there is very little, if anything, negative associated with the push to productization. We will quickly embrace Ethernet chipsets that support 25GHz SerDes and the optical components that help us drive our optical fabric to higher capacities. The IEEE has always been the one and only standardization body for anything Ethernet, but the industry has sent it a clear message to move a lot faster. I have no doubt that same industry will drive 25GbE to commercial success, because it just makes sense.

[Today's fun fact: Over 50 percent of your body heat is lost through your head and neck. That is a very useful fact for us here in the Northeast.]

The post What is 25 Gigabit Ethernet and why would you want it? appeared first on Plexxi.

More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
