Platform-Based Vulnerabilities and Cloud Computing

How to take out an entire PaaS cloud with one vulnerability

Apache Killer.

Post of Doom.

What do these two vulnerabilities have in common? Right: they’re platform-based vulnerabilities, meaning they are peculiar to the web or application server platform upon which applications are deployed. Mitigations for such vulnerabilities generally point to changes in the configuration of the platform – limit POST size, cap header value sizes, disable some setting in the associated configuration.
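To make the Apache Killer case concrete: the published workaround amounted to refusing requests whose Range header specifies an excessive number of ranges. The sketch below expresses that check in Python purely for illustration – the real mitigation was an httpd configuration change, not application code, and the five-range threshold is taken from the commonly cited workaround for CVE-2011-3192, not from this article.

    # Illustrative only: the actual Apache Killer workaround was an httpd
    # configuration change. This sketch just shows the check that configuration
    # performs: reject Range headers carrying too many ranges.

    MAX_RANGES = 5  # threshold from the commonly cited workaround (an assumption here)

    def range_header_is_suspicious(range_header: str) -> bool:
        """Return True if the Range header specifies more ranges than allowed."""
        if not range_header.startswith("bytes="):
            return False  # not a byte-range request; nothing to count
        ranges = range_header[len("bytes="):].split(",")
        return len(ranges) > MAX_RANGES

    # The exploit packs hundreds of overlapping ranges into a single header:
    assert range_header_is_suspicious("bytes=" + ",".join("5-%d" % i for i in range(100)))
    assert not range_header_is_suspicious("bytes=0-499")  # normal range requests pass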

But they also have something else in common – risk. And not just risk in general, but risk to cloud providers whose primary value lies in offering not just a virtual server but an entire, pre-integrated and pre-configured application deployment stack. Think LAMP, as an example, and providers like Microsoft (Azure) and VMware (CloudFoundry), more commonly adopting the moniker of PaaS. Having a virtual server pre-configured and ready to go with the exact application deployment stack needed is an operational dream and offers a great deal of value in terms of efficiency and overall operational investment, but it is – or should be – a security professional’s nightmare. It’s not unlike the recent recall of Chevy Volts: a defect in the platform needs to be mitigated, and the only way to do it, for car owners, is to effectively give up their ability to drive while the fix is applied. It’s disruptive, it’s expensive (you still have to get to work, after all), and it’s frustrating for the consumer. For the provider, it’s bad PR and a negative impact on the brand. Neither is appealing.

A vulnerability in the application stack, in the web or application server, can be operationally devastating to the provider – and potentially disruptive to the consumer whether the vulnerability is exploited or not.

STANDARDIZATION is a DOUBLE-EDGED SWORD

Assume a homogeneous cloud environment offering an application stack based on Microsoft ASP.NET. Now assume an exploit – oh, say, the Post of Doom – is discovered whose primary mitigation lies in modifying the configuration of each and every instance. Virtualization of any kind provides a way to roll out such a change, of course, but the change itself may disrupt consumer applications. A primary mitigation for the Post of Doom is to limit the size of data in a POST to under 8MB. Depending on the application, this has the potential to “break” functionality, particularly for applications focused on uploading big data – images, video, documents, etc. All of these may be impacted negatively, disrupting applications and angering consumers.
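What that mitigation looks like in practice can be sketched as a simple request filter. The middleware below is hypothetical (WSGI-style Python, standing in for the actual platform setting), but it shows exactly why upload-heavy applications break: any POST declaring a body larger than the limit is rejected outright.

    # Hypothetical WSGI middleware illustrating the configuration-based mitigation:
    # cap POST bodies at 8MB. Legitimate large uploads (video, documents) now fail
    # with 413, which is how the mitigation "breaks" upload-heavy applications.

    MAX_POST_BYTES = 8 * 1024 * 1024  # the sub-8MB limit described above

    class PostSizeLimiter:
        def __init__(self, app, limit=MAX_POST_BYTES):
            self.app = app      # the wrapped application
            self.limit = limit  # maximum declared body size in bytes

        def __call__(self, environ, start_response):
            if environ.get("REQUEST_METHOD") == "POST":
                try:
                    length = int(environ.get("CONTENT_LENGTH") or 0)
                except ValueError:
                    length = 0  # missing or malformed length; let the app decide
                if length > self.limit:
                    start_response("413 Request Entity Too Large",
                                   [("Content-Type", "text/plain")])
                    return [b"POST body exceeds configured limit"]
            return self.app(environ, start_response)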

Patching, of course, is preferred, as it eliminates the underlying vulnerability without potentially breaking applications. But patching takes time – time to develop, time to test, time to deploy. And the actual delivery of such patches in a PaaS environment is a delicate operation. You can’t just shut the whole cloud down and restart it after the patches are applied to the base images, can you? Or do you quiesce the vulnerable images and roll out patched ones only as new instances are provisioned? A configuration-based mitigation has these same issues. You can’t just shut down the whole cloud, apply the change, and reboot.
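One way to picture the “quiesce and replace” option is as a rolling loop over instances. The sketch below assumes an entirely hypothetical provider API (list_instances, quiesce, provision and so on are invented names, not a real SDK); the point is the sequencing that avoids a whole-cloud restart.

    # Hypothetical rolling-patch sketch. Every method on `api` is invented for
    # illustration; a real PaaS would expose its own SDK for these operations.

    def rolling_patch(api, patched_image_id):
        """Replace vulnerable instances one at a time instead of restarting the cloud."""
        for instance in api.list_instances():
            if instance.image_id == patched_image_id:
                continue  # already running the patched stack
            api.quiesce(instance.id)             # stop routing new requests to it
            api.wait_until_drained(instance.id)  # let in-flight work finish
            replacement = api.provision(image_id=patched_image_id)
            api.wait_until_healthy(replacement.id)
            api.terminate(instance.id)           # remove the old instance only now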

It’s a delicate balance of security versus availability that must be struck by the provider, and their position in such cases is certainly not one to be envied. Damned if they do, damned if they don’t.

Then there is the risk of exploitation before any mitigation is applied. If I want to wreak havoc on a PaaS, I may be able to accomplish that simply by finding one whose platform is vulnerable to a given exploit, and attacking. Cycling through the applications deployed in that environment (easily identified at the network layer by the IP ranges assigned to the provider) should result in a wealth of chaos. The right vulnerability could take out a large enough portion of the environment to draw attention to the outages caused.

Enterprise organizations that think they are immune to such issues should think again. A cloud provider is often less standardized on a single application platform than an enterprise is, and it is precisely that standardization that is at the root of the potential risk from platform-based vulnerabilities. Standardization and commoditization are good things in terms of their many financial and operational benefits, but they can also increase operational risk.

MITIGATE in the MIDDLE

There is a better solution, a better strategy, a better operational means of mitigating platform-based risks.

This is where a flexible, broad-spectrum layer of security comes in – one that enables security professionals to broadly apply policies that quickly mitigate potentially disastrous vulnerabilities. Without disrupting a single running instance, an organization can deploy a mitigating solution that detects and prevents attempts to exploit such vulnerabilities. Applying security policies that stop these attacks before they reach the platform is critical to preventing a disaster of epic (and newsworthy) proportions.

Whether as a stopgap or a permanent solution, leveraging the application delivery tier of any data center – enterprise or cloud provider – allows such vulnerabilities to be addressed without imposing harsh penalties on applications and application owners, such as requiring complete shutdowns and reboots.

Leveraging such a flexible data center tier insulates the platform from exploitation while insulating customers from the disruption required to mitigate immediately at the platform layer. It buys time to redress the vulnerability through patches or, at the very least, to understand how the platform configuration changes required to mitigate it would affect the application.
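As a sketch of what such a policy might look like when expressed at the delivery tier – hypothetical Python again, standing in for whatever policy language the tier actually provides – both platform vulnerabilities discussed above can be screened by one filter deployed in front of every instance, with no platform configuration change or restart behind it.

    # Hypothetical delivery-tier policy: one filter in front of all instances.
    # Deploying the checks here mitigates both vulnerabilities without touching,
    # patching, or restarting a single platform instance behind it.

    SUSPICIOUS_RANGE_COUNT = 5        # more than 5 ranges looks like Apache Killer
    MAX_POST_BYTES = 8 * 1024 * 1024  # Post of Doom-style size cap

    def allow_request(method: str, headers: dict) -> bool:
        """Return False for requests matching either platform exploit pattern."""
        rng = headers.get("Range", "")
        if rng.startswith("bytes=") and rng.count(",") >= SUSPICIOUS_RANGE_COUNT:
            return False  # 6+ ranges in one header: multi-range flood pattern
        if method == "POST":
            try:
                if int(headers.get("Content-Length", "0")) > MAX_POST_BYTES:
                    return False  # oversized body never reaches the platform
            except ValueError:
                return False  # malformed Content-Length; reject conservatively
        return True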

In today’s data center, time is perhaps the biggest benefit afforded to IT by any solution, and yet the one least likely to be provided. A flexible application delivery tier capable of mitigating threats across the network and application stack without disruption is one of the few solutions available that offers the elusive and very valuable benefit of time. Providers and enterprises alike need to consider their current data center architecture and whether it supports the notion of such a dynamic tier. If not, it’s time to re-evaluate and determine whether a strategic change of direction is necessary to ensure the ability of operations and security teams to address operational risk as quickly and efficiently as possible.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
