What Ops Needs to Know about APIs and Compression

I’ve been reading up on APIs cause, coolness. And in particular I really enjoyed reading Best Practices for Designing a Pragmatic RESTful API because it had a lot of really good information and advice.

And then I got to the part about compressing your APIs.

Before we go too far let me first say I’m not saying you shouldn’t compress your API or app responses. You probably should. What I am saying is that where you compress data and when are important considerations.

That’s because generally speaking no one has put their web server (which is ultimately what tends to serve up responses, whether they’re APIs or objects, XML or JSON) at the edge of the Internet. You know, where it’s completely vulnerable. It’s usually several devices back in the networking gauntlet that has to be run before data gets from the edge of your network to the server.

This is because there are myriad bad actors out there salivating at the prospect of a return to an early-aughts data center architecture, one in which firewalls, DDoS protection, and other app security services were not physically and logically located upstream from the apps they protect, as they are today.

Cause if you don’t have to navigate the network, it’s way easier to launch an attack on an app.

Today, we employ an average of 11 different services in the network, upstream from the app, to provide security, scale, and performance-enhancing services. Like compression.

Now, you can enable compression on the web server. It’s a standard thing in HTTP and it’s little more than a bit to flip in the configuration. Easy peasy performance-enhancing change, right?
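
You can see that standard HTTP mechanism from the command line. A minimal sketch, reusing the GitHub URL from the example quoted further down (any server with compression enabled will behave similarly):

$ # Ask for a compressed response, then check which encoding actually came back
$ curl -s -H "Accept-Encoding: gzip" -D - -o /dev/null https://api.github.com/users/veesahni | grep -i "content-encoding"
$ # Omit the Accept-Encoding header and the same server sends the response uncompressed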

Except that today that’s not always true.

The primary reason compression improves performance is because when it reduces the size of data it reduces the number of packets that must be transmitted. That reduces the potential for congestion that causes a Catch-22 where TCP retransmits increase congestion that increases packet loss that increases… well, you get the picture. This is particularly true when mobile clients are connecting via cellular networks, because latency is a real issue for them and the more round trips it takes, the worse the application experience.

Suffice to say that the primary reason compression improves performance is that it reduces the amount of data needing to be transmitted which means “faster” delivery to the client. Fewer packets = less time = happier users.
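
To put rough numbers on that, the packet count is roughly the payload size divided by the MSS, rounded up. A back-of-the-envelope sketch (the 100 KB response and the 5:1 compression ratio are made-up illustrative figures; 1460 bytes is a typical Ethernet MSS):

$ # packets = ceiling(payload bytes / MSS), computed here with integer math
$ echo $(( (100000 + 1459) / 1460 ))   # a hypothetical 100 KB JSON response: 69 packets
69
$ echo $(( (20000 + 1459) / 1460 ))    # the same response gzipped to ~20 KB: 14 packets
14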

That’s a good thing. Except when compression gets in the way or doesn’t provide any real reduction that would improve performance.

What? How can that be, you ask.

Remember that we’re looking for compression to reduce the number of packets transmitted, especially when it has to traverse a higher latency, lower capacity link between the data center and the client.

It turns out that sometimes compression doesn’t really help with that.

Consider the aforementioned article and its section on compressing. The author ran some tests, and concluded that compression of text-based data produces some really awesome results:

Let's look at this with a real world example. I've pulled some data from GitHub's API, which uses pretty print by default. I'll also be doing some gzip comparisons:


$ curl https://api.github.com/users/veesahni > with-whitespace.txt
$ ruby -r json -e 'puts JSON JSON.parse(STDIN.read)' < with-whitespace.txt > without-whitespace.txt
$ gzip -c with-whitespace.txt > with-whitespace.txt.gz
$ gzip -c without-whitespace.txt > without-whitespace.txt.gz

The output files have the following sizes:

  • without-whitespace.txt - 1252 bytes
  • with-whitespace.txt - 1369 bytes
  • without-whitespace.txt.gz - 496 bytes
  • with-whitespace.txt.gz - 509 bytes

In this example, the whitespace increased the output size by 8.5% when gzip is not in play and 2.6% when gzip is in play. On the other hand, the act of gzipping in itself provided over 60% in bandwidth savings. Since the cost of pretty printing is relatively small, it's best to pretty print by default and ensure gzip compression is supported!

To further hammer in this point, Twitter found that there was an 80% savings (in some cases) when enabling gzip compression on their Streaming API. Stack Exchange went as far as to never return a response that's not compressed!

 

Wow! I mean, from a purely mathematical perspective, that’s some awesome results. And the author is correct in saying it will provide bandwidth savings.

What those results won’t necessarily do is improve performance, because the original size of the file was already less than the MSS for a single packet. Which means compressed or not, that data takes exactly one packet to transmit. That’s it. I won’t bore you with the mathematics, but the speed of light and networking says one packet takes the same amount of time to transit whether it’s got 496 bytes of payload or 1369 bytes of payload. The typical MSS for Ethernet packets is 1460 bytes, which means compressing something already smaller than that effectively nets you nothing in terms of performance. It’s like a plane: it takes as long to fly from point A to point B whether there are 14 passengers or 140. Fuel efficiency (bandwidth) is impacted, but that doesn’t really change performance, just the cost.
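
Run the same back-of-the-envelope math against the sizes from the quoted example and the point makes itself; with a 1460-byte MSS, the compressed and uncompressed responses both fit in a single packet:

$ # ceiling(bytes / MSS) with MSS = 1460
$ echo $(( (1369 + 1459) / 1460 ))   # with-whitespace.txt, uncompressed
1
$ echo $(( (496 + 1459) / 1460 ))    # without-whitespace.txt.gz, gzipped
1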

Furthermore, compressing the payload at the web server means that web app security services upstream have to decompress it if they want to do their job, which is to say scan responses for sensitive or excessive data indicative of a breach of security policies. This is a big deal, kids. 42% of respondents in our annual State of Application Delivery survey always scan responses as part of their overall security strategy to prevent data leaks. Which means they have to spend extra time to decompress the data to evaluate it and then recompress it, or perhaps they can’t inspect it at all.
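
To make that extra work concrete, here’s a rough sketch of what an upstream inspection step ends up doing when the origin has already compressed the response (the file names and the SSN-style pattern are hypothetical stand-ins for whatever the security policy actually looks for):

$ # The response arrived gzipped, so it has to be inflated before it can be scanned...
$ gzip -dc response_body.gz > response_body.json
$ grep -Ec '[0-9]{3}-[0-9]{2}-[0-9]{4}' response_body.json   # policy check: count SSN-like strings
$ # ...and then recompressed before it continues on to the client
$ gzip -c response_body.json > response_body.gz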

Now, that said, bandwidth savings are a good thing. It’s part of any comprehensive scaling strategy to consider the impact of increasing use of an app on bandwidth. And a clogged-up network can impact performance negatively, so compression is a good idea. But not necessarily at the web server. This is akin to carefully considering where you enforce SSL/TLS security measures, as there are similar impacts on security services upstream from the app / web server.

That’s why the right place for compression and SSL/TLS is generally upstream, in the network, after security has checked out the response and it’s actually ready to be delivered to the client. That’s usually the load balancing service or the ADC, where compression can be applied most efficiently, without interfering with security services or offsetting the potential gains by forcing extra processing upstream.
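
Conceptually, that upstream flow looks more like this sketch: the origin answers uncompressed, security services inspect the response in the clear, and compression happens exactly once, just before delivery (the internal hostname and the scan pattern are hypothetical):

$ # Origin serves the response with identity encoding (no compression yet)
$ curl -s http://origin.example.internal/api/users > response.json
$ # Security services scan it directly, with no inflate/recompress cycle
$ grep -Ec '[0-9]{3}-[0-9]{2}-[0-9]{4}' response.json
$ # The load balancer / ADC compresses once, right before the response heads to the client
$ gzip -c response.json > response.json.gz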

As with rate limiting APIs, it’s not always a matter of whether or not you should, it’s a matter of where you should.

Architecture, not algorithms, is the key to scale and performance of modern applications.

