
Platform-Based Vulnerabilities and Cloud Computing

How to take out an entire PaaS cloud with one vulnerability

Apache Killer.

Post of Doom.

What do these two vulnerabilities have in common? Right: they're platform-based vulnerabilities, meaning vulnerabilities peculiar to the web or application server platform upon which applications are deployed. Mitigations for such vulnerabilities generally point to changes in the configuration of the platform – limit POST size, limit header value sizes, turn off some setting in the associated configuration.
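To make those mitigations concrete, here is a minimal sketch (not any vendor's actual configuration or module) of the kind of limits they impose, written as a hypothetical WSGI middleware that rejects oversized request bodies and header values before the application ever sees them. The class name and limit values are illustrative assumptions.

    # A minimal sketch of the kind of limits such mitigations impose: cap the
    # POST body size and reject oversized header values before the request
    # reaches the application. The limits and class name are illustrative
    # assumptions, not any platform's defaults.
    MAX_BODY_BYTES = 8 * 1024 * 1024       # "limit post size"
    MAX_HEADER_VALUE_BYTES = 8 * 1024      # "header value sizes"

    class RequestLimitMiddleware:
        def __init__(self, app, max_body=MAX_BODY_BYTES, max_header=MAX_HEADER_VALUE_BYTES):
            self.app = app
            self.max_body = max_body
            self.max_header = max_header

        def __call__(self, environ, start_response):
            # Reject request bodies larger than the configured cap.
            try:
                length = int(environ.get("CONTENT_LENGTH") or 0)
            except ValueError:
                length = 0
            if length > self.max_body:
                start_response("413 Request Entity Too Large", [("Content-Type", "text/plain")])
                return [b"Request body too large"]

            # Reject any individual header value larger than the configured cap.
            for key, value in environ.items():
                if key.startswith("HTTP_") and isinstance(value, str) and len(value) > self.max_header:
                    start_response("400 Bad Request", [("Content-Type", "text/plain")])
                    return [b"Header value too large"]

            return self.app(environ, start_response)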

But they also have something else in common – risk. And not just risk in general, but risk to cloud providers whose primary value is in offering not just a virtual server but an entire, pre-integrated and pre-configured application deployment stack. Think LAMP, as an example, and providers like Microsoft (Azure) and VMware (CloudFoundry), more commonly adopting the moniker of PaaS. It's an operational dream to have a virtual server pre-configured and ready to go with the exact application deployment stack needed, and it offers a great deal of value in terms of efficiency and overall operational investment, but it is – or should be – a security professional's nightmare. It's not unlike the recent recall of Chevy Volts: a defect in the platform needs to be mitigated, and the only way to do it, for car owners, is to effectively shut down their ability to drive while a patch is applied. It's disruptive, it's expensive (you still have to get to work, after all), and it's frustrating for the consumer. For the provider, it's bad PR and a negative impact on the brand. Neither is appealing.

A vulnerability in the application stack, in the web or application server, can be operationally devastating to the provider – and potentially disruptive to the consumer whether the vulnerability is exploited or not.

STANDARDIZATION is a DOUBLE-EDGED SWORD

Assume a homogeneous cloud environment offering an application stack based on Microsoft ASP. Assume now that an exploit, say Post of Doom, is discovered whose primary mitigation lies in modifying the configuration of each and every instance. Virtualization of any kind provides a solution, of course, but introduces the possibility that the configuration change itself will disrupt consumer applications. A primary mitigation for the Post of Doom is to limit the size of data in a POST to under 8MB. Depending on the application, this has the potential to "break" application functionality, particularly for those for which uploading big data is a focus. Images, video, documents and the like may all be impacted negatively, disrupting applications and angering consumers.
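A trivial illustration of that tension, with purely hypothetical numbers: once the cap is enforced, a legitimate large upload fails exactly as an attack payload would.

    # Illustrative only: how an 8MB POST cap interacts with legitimate traffic.
    MAX_POST_BYTES = 8 * 1024 * 1024

    def accept_upload(content_length):
        # With the mitigation in place, anything over the cap is rejected,
        # whether it is an attack payload or a customer's video upload.
        if content_length > MAX_POST_BYTES:
            return 413, "Request Entity Too Large"
        return 200, "OK"

    print(accept_upload(2 * 1024 * 1024))    # (200, 'OK')  small image: fine
    print(accept_upload(20 * 1024 * 1024))   # (413, ...)   20MB video upload: now broken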

Patching, of course, is preferred, as it eliminates the underlying vulnerability without potentially breaking applications. But patching takes time – time to develop, time to test, time to deploy. The actual delivery of such patches in a PaaS environment is a delicate operation. You can't just shut the whole cloud down and restart it after the patches are applied to the base images, can you? Do you wait, quiescing the vulnerable images and rolling out patched ones only as new instances are provisioned? A configuration-based mitigation, too, has these same issues. You can't just shut down the whole cloud, apply the change, and reboot.
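One common answer, sketched hypothetically below, is a rolling replacement: stop provisioning from the vulnerable base image, bring up instances from the patched image, then drain and retire the old ones so capacity is never lost and nothing is forcibly rebooted. The provider object and its methods are invented for illustration; a real PaaS would expose its own provisioning and load-balancing APIs.

    # Hypothetical sketch of a "quiesce and replace" rollout. The provider
    # object and its methods are invented for illustration only.
    def rolling_replace(provider, vulnerable_image, patched_image):
        # 1. New provisioning requests use the patched image from now on.
        provider.set_default_image(patched_image)

        # 2. Replace running instances a few at a time so capacity and
        #    availability are preserved throughout the rollout.
        for instance in provider.instances_running(vulnerable_image):
            replacement = provider.launch(patched_image)
            provider.wait_until_healthy(replacement)
            provider.add_to_pool(replacement)

            provider.drain(instance)            # stop sending new connections
            provider.wait_until_idle(instance)  # let in-flight requests finish
            provider.terminate(instance)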

It’s a delicate balance of security versus availability that must be struck by the provider, and certainly their position in such cases is one not to be envied. Damned if they do, damned if they don’t.

Then there is the risk of exploitation before any mitigation is applied. If I want to wreak havoc on a PaaS, I may be able to accomplish that simply by finding one whose platform is vulnerable to a given exploit, and attacking. Cycling through the applications deployed in that environment (easily identified at the network layer by the IP ranges assigned to the provider) should result in a wealth of chaos being wrought. The right vulnerability could take out a significant enough portion of the environment to garner attention from the outages caused.

Enterprise organizations that think they are immune from such issues should think again: even a cloud provider is often not as standardized on a single application platform as an enterprise is, and it is that standardization that is at the root of the potential risk from platform-based vulnerabilities. Standardization and commoditization are good things in terms of many financial and operational benefits, but they can also cause operational risk to increase.

MITIGATE in the MIDDLE

There is a better solution, a better strategy, a better operational means of mitigating platform-based risks.

This is where a flexible, broad-spectrum security layer comes into play: one that enables security professionals to broadly apply security policies to quickly mitigate potentially disastrous vulnerabilities. Without disrupting a single running instance, an organization can deploy a mitigating solution that detects and prevents the effects of such vulnerabilities. Applying security policies that mitigate such vulnerabilities before they ever reach the platform is critical to preventing a disaster of epic (and newsworthy) proportions.

Whether as a stop-gap or a permanent solution, by leveraging the application delivery tier of any data center – enterprise or cloud provider – such vulnerabilities can be addressed without imposing harsh penalties on applications and application owners, such as requiring complete shutdowns and reboots.

Leveraging such a flexible data center tier shields the platform from exploitation while insulating customers from the disruption of mitigating immediately at the platform layer, allowing time to redress the vulnerability through patches or, at the least, to understand the potential implications for the application of the platform configuration changes required to mitigate it.
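A generic illustration of that idea, independent of any particular vendor's products (and decidedly not how any specific application delivery controller implements it): a single screening policy applied at the delivery tier, in front of every platform instance, that rejects the request patterns associated with the two vulnerabilities named above. The function, thresholds, and header checks are illustrative assumptions.

    # A generic sketch of "mitigate in the middle": a policy evaluated at the
    # application delivery tier before a request reaches any platform instance.
    # The function signature and thresholds are illustrative assumptions, not
    # any vendor's API or recommended values.
    MAX_POST_BYTES = 8 * 1024 * 1024   # screens the oversized/abusive POST pattern
    MAX_BYTE_RANGES = 5                # screens Range headers with excessive byte ranges

    def screen_request(method, headers, content_length):
        """Return (allow, reason) for a request arriving at the delivery tier."""
        if method == "POST" and content_length > MAX_POST_BYTES:
            return False, "POST body exceeds policy limit"

        range_header = headers.get("Range", "")
        if range_header.startswith("bytes=") and range_header.count(",") + 1 > MAX_BYTE_RANGES:
            return False, "too many byte ranges in Range header"

        return True, "OK"

Because the policy lives in front of the platform, it can be applied immediately and removed later, once instances are patched, without touching or restarting a single running instance.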

In today’s data center, time is perhaps the biggest benefit afforded to IT by any solution, and yet the one least likely to be provided. A flexible application delivery tier capable of mitigating threats across the network and application stack without disruption is one of the few solutions available that offers the elusive and very valuable benefit of time. Providers and enterprises alike need to consider their current data center architecture and whether it supports the notion of such a dynamic tier. If not, it’s time to re-evaluate and determine whether a strategic change of direction is necessary to ensure the ability of operations and security teams to address operational risk as quickly and efficiently as possible.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.


