Are users to blame for Cloud Insecurity?

In an insightful post, Paul Miller asserts that "Cloud providers’ own systems will tend to be more secure than those that the majority of potential customers have in-house today". So why, you may ask, is cloud security seen as a problem at all? The problem, as he points out, is the customers who place insecure applications into those pristine, secure Cloud hosting environments:

The customers who open up all the ports you so carefully closed by default. The customers who use ‘password’ as their password. The customers who deploy sloppy code that’s riddled with holes. The customers who, frankly, are just human… and who don’t live and breathe security in the same way that at least someone inside the data centre probably does.
http://cloudofdata.com/2009/08/security-and-the-cloud-will-focus-shift-to-the-customer/

So who is to blame if a customer deploys an insecure application to the Cloud? Can you just blame the customer? It is not as simple as that...

Twelve years ago, I was working for an ISP, and one of my jobs was to vet Perl and ASP scripts for security holes. If the ISP let a customer host an insecure script, who was to blame? Or does it even matter who is to blame once the damage is done? If a script could tie up resources on the system and degrade the performance of other customers' applications, the ISP could not turn around and say "don't blame us, blame that other customer over there with their insecure script". In reality, we would block insecure applications before they were ever hosted.
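For a flavour of what that vetting can look like in automated form, here is a rough sketch in Python (hypothetical; our process at the time was a manual review, not this script) that flags the kind of patterns a reviewer would grep for in uploaded Perl CGI scripts:

import re
import sys
from pathlib import Path

# Naive red flags in uploaded Perl CGI scripts: shell commands or file
# handles built from untrusted variables, and loops that can tie up a
# shared server. Purely illustrative - real vetting went far beyond this.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"system\s*\(.*\$"), "shell command built from a variable"),
    (re.compile(r"`[^`]*\$[^`]*`"), "backticks interpolating a variable"),
    (re.compile(r"open\s*\(.*\$"), "file opened from user-controlled data"),
    (re.compile(r"while\s*\(\s*1\s*\)"), "unbounded loop"),
]

def vet_script(path: Path) -> list[str]:
    """Return human-readable findings for one uploaded script."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern, reason in SUSPICIOUS_PATTERNS:
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: {reason}")
    return findings

if __name__ == "__main__":
    for script in sys.argv[1:]:
        for finding in vet_script(Path(script)):
            print(finding)

A pattern match only catches the obvious cases; the judgement calls are exactly why someone at the hosting provider still had to read the code.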

The same went for SQL Server applications. Everyone seemed to connect as "sa" with a blank password. Does this mean SQL Server was inherently insecure? Arguably, yes (it was certainly not "secure by default" in those days). But in reality, it was up to the hosting provider to mitigate the insecurity of the applications and the code they ran. We could not just say "blame Microsoft".
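As a small, hypothetical illustration (in Python, not anything we ran at the time), even a simple check on the connection string could have caught the classic "sa"-with-a-blank-password setup before an application went live:

import re

def flag_insecure_sql_login(conn_str: str) -> list[str]:
    """Flag an 'sa'/blank-password login in an ODBC-style connection string."""
    problems = []
    user = re.search(r"(?:UID|User ID)\s*=\s*([^;]*)", conn_str, re.IGNORECASE)
    pwd = re.search(r"(?:PWD|Password)\s*=\s*([^;]*)", conn_str, re.IGNORECASE)
    if user and user.group(1).strip().lower() == "sa":
        problems.append("application connects as the 'sa' superuser")
    if pwd and pwd.group(1).strip() == "":
        problems.append("blank password")
    return problems

# The setup everyone seemed to use back then:
print(flag_insecure_sql_login("Server=db1;UID=sa;PWD=;Database=shop"))
# A least-privilege login with a real credential passes the check:
print(flag_insecure_sql_login("Server=db1;UID=shop_app;PWD=s3cr3t;Database=shop"))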


Multi-tenancy

Although Paul Miller's blog post does not use the word "multi-tenancy", it is one of the keys here. Back in my ISP hosting days, we liked to think that we'd perfected "multi-tenancy", whereby the scripts of one customer could not interfere with the scripts of other customers. However, we still vetted the scripts for security holes.

The Cloud providers seem to take different tacks on this question.

I've noticed that Amazon will examine traffic to and from EC2 instances, and if they see evidence that a machine has been taken over by a bot or trojan, they then block traffic to it. This is like a "quarantine" approach - allow the customer to create an insecure EC2 image, but then detect its behavior and stop it from interfering with other EC2 images.
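Amazon's own detection and blocking is internal to EC2, but the shape of the response is easy to sketch. The following Python fragment uses the boto3 API (which postdates this post) and hypothetical IDs; the "isolation" security group is assumed to be pre-created with no rules at all, so attaching it cuts the instance off without destroying it:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs: the suspect instance, and a pre-created "isolation"
# security group with no inbound or outbound rules.
SUSPECT_INSTANCE_ID = "i-0123456789abcdef0"
QUARANTINE_SG_ID = "sg-0123456789abcdef0"

def quarantine(instance_id: str) -> None:
    """Swap the instance's security groups for the empty isolation group."""
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  Groups=[QUARANTINE_SG_ID])

if __name__ == "__main__":
    quarantine(SUSPECT_INSTANCE_ID)

The hard part is the detection (spotting bot-like traffic patterns); the quarantine itself is just a reconfiguration like this, after which the customer can clean up or rebuild the image.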

SalesForce take a different tack. They require developers to proactively build testing code into the applications they write in the Apex language on the Force.com platform. Here is the section on Multi-Tenancy in the Force.com "Introduction to Apex":
Multi-tenancy

The Force.com platform is a multi-tenant platform, which means that the resources used by your application (such as the database) are shared with many other applications. This multi-tenancy has a lot of benefits, and it comes with a small promise on your part. In particular, if you write Apex code, the platform needs to do its best to ensure that it is well behaved.

For example, Apex code that simply loops will not benefit you (or the cloud). This is the reason why Apex, when deployed to a production server, needs 75% code coverage in tests.

The Apex runtime engine also enforces a number of limits to ensure that runaway Apex does not monopolize shared resources. These limits, or governors, track and enforce various metrics. An example of these limits include (at the time of writing): the total stack depth for an Apex invocation in a trigger is 16, the total number of characters in a single String may not exceed 100000, and the total number of records processed as a result of DML statements in a block may not exceed 10000.

There are various precautions that can be taken to ensure that the limits are not exceeded. For example, the batched for loop described earlier lifts certain limits, and different limits apply depending on what originated the execution of the Apex. For example, Apex originating from a trigger is typically more limited than Apex that started running as part of a web services call.

http://wiki.developerforce.com/index.php/An_Introduction_to_Apex
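The specific numbers quoted there are 2009-era limits and have changed since, but the mechanism is simple to picture. As a rough analogy in Python (this is not Apex, just an illustration of the idea), a governor is a per-request counter that the platform bumps each time your code consumes a shared resource, aborting the request once the budget is spent:

class GovernorLimitError(Exception):
    """Raised when a request exceeds its shared-resource budget."""

class Governor:
    # Per-request budgets, mirroring the per-invocation limits quoted above.
    def __init__(self, max_dml_rows: int = 10_000, max_stack_depth: int = 16):
        self.max_dml_rows = max_dml_rows
        self.max_stack_depth = max_stack_depth
        self.dml_rows = 0
        self.stack_depth = 0

    def count_dml_rows(self, rows: int) -> None:
        self.dml_rows += rows
        if self.dml_rows > self.max_dml_rows:
            raise GovernorLimitError("too many rows processed by DML in one request")

    def enter_call(self) -> None:
        self.stack_depth += 1
        if self.stack_depth > self.max_stack_depth:
            raise GovernorLimitError("trigger call stack too deep")

# A request that tries to touch 15,000 rows is cut off partway through:
gov = Governor()
try:
    for _ in range(75):          # 75 batches of 200 rows = 15,000 rows
        gov.count_dml_rows(200)
except GovernorLimitError as exc:
    print(f"request aborted: {exc}")

Batching, as the quoted text suggests, is how well-behaved code stays under those budgets instead of hitting them.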
So SalesForce are effectively saying "We have a secure environment, and if you want to upload code to run on it, you have to make sure your code has built-in checks to ensure it does not inadvertently monopolize resources". Amazon seem to be taking the alternative approach of detecting when an app is misbehaving and then quarantining it (of course, EC2 is IaaS and Force.com is PaaS, so it is natural that their approaches differ). It remains to be seen which approach is best. Customers can hedge their bets by using a Cloud Gateway to secure their data in the cloud, keeping it encrypted and signed, so that it is safe even if other apps in the multi-tenant environment misbehave.
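To make "encrypted and signed" concrete: sensitive data is transformed at the edge, before it ever reaches the multi-tenant environment, and the keys never leave the customer (or the gateway acting on the customer's behalf). A gateway product does this transparently on the wire; the sketch below shows the underlying idea using the Python cryptography library's authenticated encryption, which is my choice for illustration rather than what any particular gateway uses:

from cryptography.fernet import Fernet, InvalidToken

# The key stays with the customer or gateway; only ciphertext goes to the cloud.
key = Fernet.generate_key()
gateway = Fernet(key)

record = b"customer=ACME; card=4111-1111-1111-1111"
token = gateway.encrypt(record)      # authenticated encryption: secrecy + integrity

# ... `token` is what actually gets stored in the multi-tenant cloud ...

print(gateway.decrypt(token))        # the original record comes back intact

try:
    gateway.decrypt(token[:-4] + b"AAAA")   # a co-tenant tampering with the data
except InvalidToken:
    print("ciphertext was modified in the cloud")

Even if another application in the shared environment misbehaves, all it can see is opaque ciphertext, and any modification is detected when the data comes back.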


More Stories By Mark O'Neill

Mark O'Neill is VP Innovation at Axway - API and Identity. Previously he was CTO and co-founder at Vordel, which was acquired by Axway. A regular speaker at industry conferences and a contributor to SOA World Magazine and Cloud Computing Journal, Mark holds a degree in mathematics and psychology from Trinity College Dublin and graduate qualifications in neural network programming from Oxford University.
