By Lori MacVittie
March 6, 2014 03:15 PM EST
The original RFC for TCP (793) was written in September of 1981. Let's pause for a moment and reflect on that date.
When TCP was introduced, applications on "smart" devices were relegated to science fiction, and use of the Internet by the average consumer was still more than two decades away.
Yet TCP remains, like IP, uncontested as "the" transport protocol of ... everything.
New application architectures, networks, and devices all introduce challenges with respect to how TCP behaves. Over the years it's become necessary to tweak the protocol with new algorithms designed to address everything from congestion control to window size control to key management for the TCP MD5 signature option to header compression. There are, in fact, over 100 (that's where I stopped counting) TCP-related RFCs on the digital books.
What most of these TCP-related RFCs have in common is that they attempt to address some issue that's holding application performance hostage. Congestion, for example, is a huge network issue that can impact TCP performance (and subsequently the performance of applications) in a very negative way. Congestion can cause lost packets and, for TCP at least, that often means retransmission. You see, TCP is very particular about the order in which it receives packets, and it is also designed to be reliable. That means if a packet is lost, you're going to hear about it (or, more accurately, your infrastructure will hear about it and need to resend it). All those retransmitted packets, trying to traverse an already congested network... well, you can imagine it isn't exactly a good thing.
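Why a single lost packet stalls everything behind it can be sketched with a toy in-order receiver. Everything here (the `Receiver` class, integer sequence numbers) is illustrative, not the actual TCP state machine:

```python
# Toy model of TCP's in-order delivery: segments arriving out of order
# are buffered until the gap is filled, which is why one lost packet
# holds up delivery of everything that arrives behind it.
class Receiver:
    def __init__(self):
        self.next_seq = 0      # next sequence number we can deliver
        self.buffer = {}       # out-of-order segments, keyed by seq
        self.delivered = []    # data handed to the application, in order

    def receive(self, seq, data):
        if seq == self.next_seq:
            self.delivered.append(data)
            self.next_seq += 1
            # drain any buffered segments that are now contiguous
            while self.next_seq in self.buffer:
                self.delivered.append(self.buffer.pop(self.next_seq))
                self.next_seq += 1
        elif seq > self.next_seq:
            self.buffer[seq] = data  # hold until the gap is filled

r = Receiver()
r.receive(0, "a")
r.receive(2, "c")   # arrives early: buffered, not delivered
r.receive(1, "b")   # fills the gap: "b" and "c" delivered together
print(r.delivered)  # ['a', 'b', 'c']
```

Until segment 1 shows up (via retransmission, on a real network), segment 2 just sits in the buffer, which is exactly the head-of-line blocking that makes congestion so costly.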
So there are a variety of congestion control algorithms designed to better manage TCP in such situations. From TCP Reno to Vegas to Illinois to H-TCP, algorithms are the most common way in which network stacks in general deal with congestion.
The important thing to remember about this is that performance trickles up. Improvements in the TCP stack benefit those layers that reside above, like the application.
F5 Synthesis: Faster Platforms Mean Faster Apps
The F5 platforms that comprise the Synthesis High Performance Service Fabric are no different. They implement the vast majority of the available congestion control algorithms to ensure the fastest TCP stack we can offer. In the latest release of our platforms, we also added an F5-created algorithm, TCP Woodside, designed to use both loss-based and latency-based signals to improve, in particular, the performance of applications operating over mobile networks.
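The general idea of combining loss and latency signals can be illustrated with a toy window update: back off on loss (as loss-based algorithms like Reno do) and also react to growing queuing delay (Vegas-style). This is purely an illustrative sketch of the two signal types, not TCP Woodside's actual algorithm, which F5 hasn't detailed here:

```python
def update_cwnd(cwnd, base_rtt, sample_rtt, loss, alpha=2.0, beta=4.0):
    """Illustrative hybrid congestion window update (NOT TCP Woodside).

    Loss signal: halve the window on packet loss, Reno-style.
    Latency signal: estimate packets queued in the network from the gap
    between the measured RTT and the minimum RTT (Vegas-style), and grow
    or shrink the window to keep that estimate between alpha and beta.
    """
    if loss:
        return max(cwnd / 2.0, 1.0)
    # Fraction of the window sitting in queues rather than in flight
    queued = cwnd * (1.0 - base_rtt / sample_rtt)
    if queued < alpha:       # little queuing: room to grow
        return cwnd + 1.0
    if queued > beta:        # queues building: latency says back off
        return cwnd - 1.0
    return cwnd

print(update_cwnd(10.0, 0.05, 0.05, loss=False))  # no queuing -> 11.0
print(update_cwnd(10.0, 0.05, 0.10, loss=False))  # delay doubled -> 9.0
print(update_cwnd(10.0, 0.05, 0.05, loss=True))   # loss -> 5.0
```

The appeal of the latency signal on mobile networks is that it can detect queues building up before packets are actually dropped, while the loss signal remains the backstop.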
By implementing these improvements at the TCP (platform) layer, all applications receive the benefit of improved performance, but it's particularly noticeable in mobile environments because of the differences inherent in mobile versus fixed networks.
Also new in our latest platforms is support for MPTCP, another potential boon for mobile application users. MPTCP is designed to improve performance by letting a single connection use multiple TCP subflows, potentially over different network paths. Messages can then be dynamically routed across those subflows. For web applications, this can result in significantly faster retrieval of the more than 90 objects that comprise the average web page today.
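The striping idea behind MPTCP can be sketched with a toy scheduler that assigns each message to the least-loaded subflow. Real MPTCP schedulers also weigh per-path RTT and congestion window state; this is only meant to show how work spreads across paths:

```python
# Toy MPTCP-style scheduler: assign each message to the subflow with the
# fewest outstanding bytes. Real schedulers also consider per-path RTT
# and congestion windows; this just illustrates striping across paths.
def schedule(messages, n_subflows):
    outstanding = [0] * n_subflows   # bytes queued on each subflow
    assignment = []                  # chosen subflow per message
    for size in messages:
        flow = outstanding.index(min(outstanding))  # least-loaded subflow
        assignment.append(flow)
        outstanding[flow] += size
    return assignment, outstanding

# Ten equally sized web objects over two subflows spread evenly
assignment, load = schedule([1500] * 10, 2)
print(assignment)  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(load)        # [7500, 7500]
```

With many small objects per page, even this naive policy roughly halves the per-path queue depth, which is where the faster page retrieval comes from.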
Synthesis 1.5 comprises a variety of new performance and security-related features that improve the platforms that make up its High Performance Services Fabric. What that ultimately means for customers and users alike is faster apps.
For more information on Synthesis, see F5's website.