Press Release

Dirty Disks Raise New Questions About Cloud Security

Rackspace resolves issues but other cloud providers still pose risks for customers

Research by Context Information Security has identified potentially significant flaws in the implementation of Cloud infrastructure services offered by some providers, which could be putting their clients' data at risk. By exploiting the vulnerability, which revolves around data separation, Context consultants were able to gain access to some data left on other service users' 'dirty disks' (1), including fragments of customer databases and elements of system information that could, in combination with other data, allow an attacker to take control of other hosted servers.

Context tested four providers and found that two of them, VPS.NET and Rackspace, were not always securely separating virtual servers or nodes through shared hard disk and network resources. In line with Context's responsible disclosure procedures, both providers were immediately informed of the findings. Rackspace worked closely with Context to identify and fix the potential vulnerability, which was found among some users of its now-legacy platform for Linux Cloud Servers. Rackspace reports that it knows of no instance in which any customer's data was seen or exploited in any way by any unauthorized party. Context has tested Rackspace's current cloud platform as well as its new Next Generation Cloud computing solution based on OpenStack, and has been able to confirm that the security vulnerability has been resolved. But other providers might be vulnerable if they use popular hypervisor software and implement it in the way that Rackspace did before its recent remediation efforts.

VPS.NET told Context that it rolled out a patch to resolve the security issue, but provided no details. VPS.NET is based on OnApp technology that is also used by over 250 other cloud providers. OnApp told Context that it now allows customers to opt in to having their data removed securely; because secure removal is not the default, thousands of virtual machines remain at potential risk. OnApp added that it has not taken measures to clean up remnant data left by providers or customers, on the grounds that not many customers are affected.

During the course of Context's research, it became clear that if virtual machines are not sufficiently isolated, or a mistake is made somewhere in the provisioning or de-provisioning process, then data leakage might occur between servers. Context has today published more details on the Dirty Disks vulnerability in a blog post at: http://www.contextis.com/research/blog/dirtydisks/

(1) In line with Context's responsible disclosure procedures and confidentiality obligations, Context limited its access to determining that the data was available. Context did not disclose, use, record, transmit or store any of the data it accessed.

"In the cloud, instead of facing an infrastructure based on separate physical boxes, an attacker can purchase a node from the same provider and attempt an attack on the target organisation from the same physical machine and using the same physical resources" said Michael Jordon, Research and Development Manager at Context ."This does not mean that the Cloud is unsafe and the business benefits remain compelling, but the simplicity of this issue raises important questions about the maturity of Cloud technology and the level of security and testing undertaken in some instances."

The vulnerability itself is due to the way in which some providers automatically provision new virtual servers, initialise operating systems and allocate new storage space. For performance reasons or due to errors, security measures that provide separation between different nodes on a multi-user platform are sometimes not implemented, making it possible to read areas of other virtual disks and so gain access to data that remains on the underlying physical storage.
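To illustrate the class of problem, the short Python sketch below (not Context's actual tooling; the device path is hypothetical) scans a newly attached, unformatted virtual disk for non-zero blocks, which on an affected platform would indicate remnant data left behind by a previous tenant:

    # Illustrative sketch only: scan a raw virtual disk device for blocks
    # that contain non-zero bytes, i.e. data left behind by a previous user.
    # The device path is hypothetical and reading it requires root privileges.
    import sys

    DEVICE = "/dev/xvdb"   # hypothetical secondary virtual disk
    BLOCK_SIZE = 4096      # read in 4 KiB blocks

    def scan_for_remnants(device: str, max_blocks: int = 1_000_000) -> int:
        """Return how many of the first max_blocks blocks hold non-zero data."""
        dirty = 0
        with open(device, "rb") as disk:
            for _ in range(max_blocks):
                block = disk.read(BLOCK_SIZE)
                if not block:
                    break              # reached the end of the device
                if any(block):
                    dirty += 1         # block contains residual data
        return dirty

    if __name__ == "__main__":
        device = sys.argv[1] if len(sys.argv) > 1 else DEVICE
        print(f"{scan_for_remnants(device)} non-zero blocks found on {device}")

On a correctly isolated platform, a freshly allocated volume should read back as all zeros; any non-zero blocks point to storage that was reallocated without being wiped.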

While the data accessed by Context was not live, the most recent data identified was less than a week old. This is most likely because the virtual storage system moves disk images around the cloud to improve performance or disk usage, leaving old data in the original location, which is subsequently reallocated to new servers without being zeroed before the space is reused.

Since being alerted to the issue by Context last year, Rackspace has undertaken considerable efforts to ensure that any data deleted from its physical disks is zeroed, preventing new servers from seeing other users' data, and has taken measures to clean up all existing virtual disks on what is now its legacy Cloud Servers platform.
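Remediation of this kind amounts to overwriting freed space before it is returned to the allocation pool. As a hedged illustration only, and not Rackspace's actual implementation, a provider-side cleanup step might look like the following Python sketch, where the device path is hypothetical:

    # Illustrative sketch only, not Rackspace's implementation: overwrite a
    # de-provisioned virtual disk with zeros before its space is reallocated.
    # The device path is hypothetical and writing to it requires root privileges.
    BLOCK_SIZE = 1024 * 1024   # write in 1 MiB chunks

    def zero_volume(device: str) -> None:
        """Overwrite an entire block device with zeros."""
        zeros = bytes(BLOCK_SIZE)
        with open(device, "wb", buffering=0) as disk:
            while True:
                try:
                    disk.write(zeros)
                except OSError:
                    break          # no space left: end of device reached

    if __name__ == "__main__":
        zero_volume("/dev/xvdb")   # hypothetical disk freed during de-provisioning

The same effect can be achieved at the storage layer, for example by having the hypervisor or storage back end return zeroed blocks for any space not yet written by the current tenant.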

"It is unclear how widespread this issue is among other Cloud providers" said Context's Michael Jordon. "By raising awareness of the problem, other service providers of Cloud Infrastructure services can ensure they do not put their customers data at risk in the same manner and customers can undertake the appropriate due diligence before moving to the Cloud."
