
Platform-Based Vulnerabilities and Cloud Computing

How to take out an entire PaaS cloud with one vulnerability

Apache Killer.

Post of Doom.

What do these two vulnerabilities have in common? Right, they’re platform-based vulnerabilities, meaning they are peculiar to the web or application server platform upon which applications are deployed. Mitigations for such vulnerabilities generally point to changes in the configuration of the platform – limit POST size, cap header value sizes, disable some setting in the associated configuration.

But they also have something else in common – risk. And not just risk in general, but risk to cloud providers whose primary value lies in offering not just a virtual server but an entire, pre-integrated and pre-configured application deployment stack. Think LAMP, as an example, and providers like Microsoft (Azure) and VMware (Cloud Foundry), more commonly adopting the moniker of PaaS. It’s an operational dream to have a virtual server pre-configured and ready to go with the exact application deployment stack needed, and it offers a great deal of value in terms of efficiency and overall operational investment. But it is – or should be – a security professional’s nightmare.

It’s not unlike the recent recall of Chevy Volts: a defect in the platform needs to be mitigated, and the only way to do it, for car owners, is to effectively give up their ability to drive while a patch is applied. It’s disruptive, it’s expensive (you still have to get to work, after all), and it’s frustrating for the consumer. For the provider, it’s bad PR that negatively impacts the brand. Neither outcome is appealing.

A vulnerability in the application stack, in the web or application server, can be operationally devastating to the provider – and potentially disruptive to the consumer whether the vulnerability is exploited or not.

STANDARDIZATION is a DOUBLE-EDGED SWORD

Assume a homogeneous cloud environment offering an application stack based on Microsoft ASP. Assume now an exploit – oh, say, like the Post of Doom – is discovered whose primary mitigation lies in modifying the configuration of each and every instance. Virtualization provides a mechanism for rolling out such a change, of course, but the change itself introduces the possibility of disrupting consumer applications. A primary mitigation for the Post of Doom is to limit the size of data in a POST to under 8MB. Depending on the application, this has the potential to “break” functionality, particularly in applications for which uploading big data is a focus – images, video, documents, and so on. All of these may be impacted negatively, disrupting applications and angering consumers.
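
To make the trade-off concrete, consider a minimal sketch in Python of what such a mitigation does at the request level. This is illustrative only – not any platform’s actual configuration mechanism – and everything in it beyond the 8MB figure cited above is an assumption.

MAX_POST_BYTES = 8 * 1024 * 1024  # the 8MB cap cited above

class PostSizeLimiter:
    """WSGI middleware that refuses request bodies above a size limit."""

    def __init__(self, app, limit=MAX_POST_BYTES):
        self.app = app
        self.limit = limit

    def __call__(self, environ, start_response):
        try:
            length = int(environ.get("CONTENT_LENGTH") or 0)
        except ValueError:
            length = 0
        if length > self.limit:
            # Refuse the request before the platform ever parses the body.
            start_response("413 Request Entity Too Large",
                           [("Content-Type", "text/plain")])
            return [b"POST body exceeds configured limit\n"]
        return self.app(environ, start_response)

Note the blunt instrument at work: any upload over the cap is refused outright, which is precisely how legitimate big-data uploads get “broken.”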

Patching, of course, is preferred, as it eliminates the underlying vulnerability without potentially breaking applications. But patching takes time – time to develop, time to test, time to deploy. And the actual delivery of such patches in a PaaS environment is a delicate operation. You can’t just shut the whole cloud down and restart it after the patches are applied to the base images, can you? Do you quiesce the vulnerable images and roll out the patched ones only as new instances are provisioned? A configuration-based mitigation has these same issues: you can’t just shut down the whole cloud, apply the change, and reboot.
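
For illustration, the “quiesce and replace” option might look something like the following sketch. The provider API here – list_instances, provision, drain, terminate, and the PATCHED_IMAGE name – is entirely hypothetical; the point is the shape of the process, not any real cloud SDK.

PATCHED_IMAGE = "app-stack-v2-patched"  # hypothetical patched base image

def rolling_patch(cloud):
    """Replace vulnerable instances gradually, without a cloud-wide outage."""
    for instance in cloud.list_instances():
        if instance.image == PATCHED_IMAGE:
            continue  # already running the patched stack
        cloud.provision(image=PATCHED_IMAGE)  # bring up a patched replacement first
        cloud.drain(instance)      # stop routing new traffic; let in-flight work finish
        cloud.terminate(instance)  # retire the vulnerable instance

Each application stays available throughout, but notice the cost: the process is serial, and the vulnerable instances remain exposed until their turn comes.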

It’s a delicate balance of security versus availability that must be struck by the provider, and certainly their position in such cases is not one to be envied. Damned if they do, damned if they don’t.

Then there is the risk of exploitation before any mitigation is applied. If I want to wreak havoc on a PaaS, I may be able to do so simply by finding one whose platform is vulnerable to a given exploit, and attacking. Cycling through the applications deployed in that environment (easily identified at the network layer by the IP ranges assigned to the provider) should yield a wealth of chaos. The right vulnerability could take out a significant enough portion of the environment to garner attention from the outages caused.

Enterprise organizations that think they are immune from such issues should think again: a cloud provider is often less standardized on a single application platform than an enterprise is, and it is that standardization that is at the root of the potential risk from platform-based vulnerabilities. Standardization and commoditization are good things in terms of many financial and operational benefits, but they can also cause operational risk to increase.

MITIGATE in the MIDDLE

There is a better solution, a better strategy, a better operational means of mitigating platform-based risks.


This is where a flexible, broad-spectrum layer of security comes into play – one that enables security professionals to broadly apply policies that quickly mitigate potentially disastrous vulnerabilities. Without disrupting a single running instance, an organization can deploy a mitigating solution that detects and prevents the effects of such vulnerabilities. Applying security policies that stop such exploits before they reach the platform is critical to preventing a disaster of epic (and newsworthy) proportions.
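
As an illustration of the idea – and only an illustration, since real application delivery controllers express this as policy rather than application code – a delivery-tier filter for the Apache Killer pattern might look like the Python sketch below. The MAX_RANGES ceiling is an assumption; the exploit sent requests carrying hundreds of byte ranges.

MAX_RANGES = 5  # assumed ceiling; Apache Killer requests carried hundreds of ranges

class DeliveryTierFilter:
    """WSGI sketch of a mitigation applied in front of the platform."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        header = environ.get("HTTP_RANGE", "")
        # Byte ranges are comma-separated, so counting commas approximates
        # the number of ranges requested. Strip abusive Range headers rather
        # than refusing the request, so legitimate traffic passes untouched.
        if header.count(",") + 1 > MAX_RANGES:
            del environ["HTTP_RANGE"]
        return self.app(environ, start_response)

Because the filter sits in front of every instance, a single policy change covers the entire environment – no per-instance reconfiguration, no reboots.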

Whether as a stopgap or a permanent solution, leveraging the application delivery tier of any data center – enterprise or cloud provider – allows such vulnerabilities to be addressed without imposing harsh penalties on applications and application owners, such as requiring complete shutdowns and reboots.

Leveraging such a flexible data center tier insulates the platform from exploitation and insulates customers from the disruption of mitigating immediately at the platform layer. It buys time to redress the vulnerability through patches or, at least, to understand how the platform configuration changes required to mitigate it might affect each application.

In today’s data center, time is perhaps the biggest benefit afforded to IT by any solution, and yet the one least likely to be provided. A flexible application delivery tier capable of mitigating threats across the network and application stack without disruption is one of the few solutions available that offers the elusive and very valuable benefit of time. Providers and enterprises alike need to consider their current data center architecture and whether it supports the notion of such a dynamic tier. If not, it’s time to re-evaluate and determine whether a strategic change of direction is necessary to ensure the ability of operations and security teams to address operational risk as quickly and efficiently as possible.

