Platform-Based Vulnerabilities and Cloud Computing

How to take out an entire PaaS cloud with one vulnerability

Apache Killer.

Post of Doom.

What do these two vulnerabilities have in common? Right, they’re platform-based vulnerabilities, meaning they are peculiar to the web or application server platform upon which applications are deployed. Mitigations for such vulnerabilities generally point to changes in the configuration of the platform – limit POST size, cap header value sizes, turn off some setting in the associated configuration.

But they also have something else in common – risk. And not just risk in general, but risk to cloud providers whose primary value lies in offering not just a virtual server but an entire, pre-integrated and pre-configured application deployment stack. Think LAMP, as an example, and providers like Microsoft (Azure) and VMware (CloudFoundry), more commonly known by the moniker PaaS. It’s an operational dream to have a virtual server pre-configured and ready to go with the exact application deployment stack needed, and it offers a great deal of value in terms of efficiency and overall operational investment, but it is – or should be – a security professional’s nightmare. It’s not unlike the recent recall of Chevy Volts – a defect in the platform needs to be mitigated. The only way to do it, for car owners, is to effectively shut down their ability to drive while a patch is applied. It’s disruptive, it’s expensive (you still have to get to work, after all), and it’s frustrating for the consumer. For the provider, it’s bad PR and a negative impact on the brand. Neither of which is appealing.

A vulnerability in the application stack, in the web or application server, can be operationally devastating to the provider – and potentially disruptive to the consumer whether the vulnerability is exploited or not.

STANDARDIZATION is a DOUBLE-EDGED SWORD

Assume a homogeneous cloud environment offering an application stack based on Microsoft ASP.NET. Assume now an exploit, oh say like the Post of Doom, is discovered whose primary mitigation lies in modifying the configuration of each and every instance. Virtualization of any kind provides a means of rolling out that change, of course, but the change itself introduces the possibility of disrupting consumer applications. A primary mitigation for the Post of Doom is to limit the size of data in a POST to under 8MB. Depending on the application, this has the potential to “break” application functionality, particularly for applications focused on uploading big data. Images, video, documents, etc… These all may be impacted negatively, disrupting applications and angering consumers.
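
To make that trade-off concrete, here is a minimal sketch – assuming a Python/WSGI-style stack and the 8MB cap described above, neither of which is specific to any particular provider – of what a blanket POST-size mitigation looks like when applied uniformly across a platform:

```python
MAX_POST_BYTES = 8 * 1024 * 1024  # illustrative platform-wide cap, per the mitigation above


class PostSizeCap:
    """WSGI middleware that rejects request bodies above a fixed limit."""

    def __init__(self, app, limit=MAX_POST_BYTES):
        self.app = app
        self.limit = limit

    def __call__(self, environ, start_response):
        try:
            length = int(environ.get("CONTENT_LENGTH") or 0)
        except ValueError:
            length = 0
        if length > self.limit:
            # Any legitimate upload larger than the cap -- video, images,
            # documents -- now fails, even though the application itself
            # was never the problem.
            start_response("413 Request Entity Too Large",
                           [("Content-Type", "text/plain")])
            return [b"Rejected by platform-wide mitigation\n"]
        return self.app(environ, start_response)
```

Every application behind that cap inherits it, vulnerable or not – which is exactly why a platform-wide configuration change can break legitimate upload functionality.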

Patching, of course, is preferred, as it eliminates the underlying vulnerability without potentially breaking applications. But patching takes time – time to develop, time to test, time to deploy. The actual delivery of such patches in a PaaS environment is a delicate operation. You can’t just shut the whole cloud down and restart it after the patches are applied to the base images, can you? Do you wait, quiesce the vulnerable images, and roll out the patched ones only as new instances are provisioned? A configuration-based mitigation, too, has these same issues. You can’t just shut down the whole cloud, apply the change, and reboot.
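
The “quiesce and replace” option sketched below is one way a provider might stage such a rollout; the provisioning calls (list_instances, provision, drain) are hypothetical placeholders, not any particular PaaS API:

```python
def rolling_patch(list_instances, provision, drain, patched_image):
    """Replace vulnerable instances gradually instead of rebooting the cloud."""
    for instance in list_instances("vulnerable-base-image"):
        provision(patched_image)   # bring up a patched replacement first
        drain(instance)            # quiesce: stop routing new work to the old
                                   # instance, let existing sessions finish
    # Applications keep running throughout; only new capacity comes from
    # the patched image.
```

Consumers stay up for the duration, but the vulnerable instances remain exposed until they finally drain.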

It’s a delicate balance of security versus availability that must be struck by the provider, and certainly their position in such cases is not one to be envied. Damned if they do, damned if they don’t.

Then there is the risk of exploitation before any mitigation is applied. If I want to wreak havoc on a PaaS, I may be able to accomplish that simply by finding one whose platform is vulnerable to a given exploit, and attacking. Cycling through the applications deployed in that environment (easily identified at the network layer by the IP ranges assigned to the provider) should result in a wealth of chaos being wrought. The right vulnerability could take out a significant enough portion of the environment to garner attention from the outages caused.

Enterprise organizations that think they are immune from such issues should think again: a cloud provider is often not even as standardized on a single application platform as an enterprise is, and it is that standardization that is at the root of the potential risk from platform-based vulnerabilities. Standardization and commoditization are good things in terms of their many financial and operational benefits, but they can also increase operational risk.

MITIGATE in the MIDDLE

There is a better solution, a better strategy, a better operational means of mitigating platform-based risks.

This is where a flexible, broad-spectrum layer of security comes into play – one that enables security professionals to broadly apply security policies to quickly mitigate potentially disastrous vulnerabilities. Without disrupting a single running instance, an organization can deploy a mitigating solution that detects and prevents the effects of such vulnerabilities. Applying security policies that mitigate such vulnerabilities before they reach the platform is critical to preventing a disaster of epic (and newsworthy) proportions.
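
As a rough illustration – not any vendor’s actual policy engine – the kind of check such a tier applies can be as simple as screening every request against the known-bad patterns of the two vulnerabilities above before it is ever forwarded to a platform instance. The thresholds here are simplified assumptions for the sketch:

```python
MAX_RANGE_SEGMENTS = 5            # Apache Killer abused requests carrying hundreds of byte ranges
MAX_BODY_BYTES = 8 * 1024 * 1024  # matches the POST-size mitigation discussed earlier


def allow_request(headers: dict, content_length: int) -> bool:
    """Return True if the request may be forwarded to the platform tier."""
    range_header = headers.get("Range", "")
    if range_header.count(",") + 1 > MAX_RANGE_SEGMENTS:
        return False              # overlapping-range flood: drop it here
    if content_length > MAX_BODY_BYTES:
        return False              # oversized POST never reaches the stack
    return True


# The delivery tier would run this check on every request it proxies;
# no platform instance is reconfigured or restarted.
assert allow_request({"Range": "bytes=0-1023"}, content_length=2048)
assert not allow_request(
    {"Range": "bytes=" + ",".join("0-1" for _ in range(500))},
    content_length=0,
)
```

Because the check lives in front of the platform, tightening or relaxing it is a policy change at a single tier, not a configuration change rolled out across every instance.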

Whether a stopgap or a permanent solution, leveraging the application delivery tier of any data center – enterprise or cloud provider – allows such vulnerabilities to be addressed without imposing harsh penalties on applications and application owners, such as requiring complete shutdowns and reboots.

Leveraging such a flexible data center tier insulates the platform from exploitation while insulating customers from the disruption of mitigating immediately at the platform layer, allowing time to remediate through patches or, at the very least, to understand the potential implications for the application of the platform configuration changes required to mitigate the vulnerability.

In today’s data center, time is perhaps the biggest benefit afforded to IT by any solution, and yet the one least likely to be provided. A flexible application delivery tier capable of mitigating threats across the network and application stack without disruption is one of the few solutions available that offers the elusive and very valuable benefit of time. Providers and enterprises alike need to consider their current data center architecture and whether it supports the notion of such a dynamic tier. If not, it’s time to re-evaluate and determine whether a strategic change of direction is necessary to ensure the ability of operations and security teams to address operational risk as quickly and efficiently as possible.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
