Ten Things I’ve Learned About Cloud Security

This is not a Top 10 list – it is a list of 10 things I’ve learned along the way. Top 10 lists imply some sort of universal knowledge of the “top” things possible in a given field. Top 10 attractive women, top 10 guitar players, top 10 whatever – they all have one thing in common: they are the ten things the author thinks are the best. I don’t like to think I know everything, so this list is in no particular order. This particular list is on cloud security – a big topic that interests me greatly and one there is no way I can cover fully in a blog post. As a result, I will be presenting on this topic in a few places, including BSides Cleveland.

Anyway, cloud security is tough for a lot of reasons, not least of which is that you, like me, probably only understand the basics of what you interface with in the cloud – the controls the cloud provider allows you to see. This lack of visibility into the layers you don’t manage introduces many security-related challenges. Having said that, let’s explore:

1) Control Panels
Control panels are simultaneously the best and worst aspect of a given cloud provider’s offerings. They can enable you to do really great things or handicap you by not allowing enough fine-grained control. They can enhance the security of your slice of the cloud infrastructure and then cut it off at the knees – sometimes both within the same feature. If a control is very granular and allows extensive customization, you can make spectacular infrastructure decisions while just as easily forgetting to make some necessary security adjustments. If the controls aren’t granular enough, i.e. the provider made those decisions for you, then your abilities are limited. In general, control panels are a double-edged sword…and a balancing act…usually performed while juggling razor-sharp ninja stars – not an easy job.

2) Uptime/Downtime
This is a problem, but not one specific to the cloud – it is a problem specific to computers. You will have downtime no matter where you host your services or what you do to prevent it. (Author’s note: I have spent a large portion of my company’s overall budget to avoid downtime. It still happens; it’s just better mitigated.) Some will argue uptime is worse in the cloud than if you hosted everything yourself, but depending on who you are this may or may not be true. It just depends on how much trouble you want to go through to deal with the uptime of critical assets – or rather, how much you want to spend to achieve a good uptime ratio. In the public cloud the cost is spread around, so it is naturally a bit cheaper; if you are doing it yourself, you are footing the entire cost. It’s a simple equation, really: how much downtime can you afford? Be careful here – the cloud is not always cheaper than doing it yourself; check out the Cloud Is Cheap section.

Side note: While I was editing this post and getting its accompanying presentation ready, Amazon Web Services had their big storm-related outage, and one of our apps was in the wrong zone at the wrong time, bringing it down for about 30 hours total. Luckily, it was a weekend, so no one was using it. But still, there is no greater feeling of helplessness than when your service is down and completely out of your control. I’m like this whenever my phone provider or data center provider has problems too, so I’ve gotten used to it. A bottle of Pepto and lots of patience are required for any sort of cloud endeavor.

3) Access Control
There is a “myth” that you have no concept of access control in the cloud. In most cases, at least with the reputable providers, you do have a decent ACL system. In Amazon you can set up roles and assign folks to groups – not half bad. The problem comes in when you actually MEAN access control. With very few exceptions, you are running on shared resources in the cloud, not dedicated equipment. If you were under the impression it wasn’t shared, perhaps we need to revisit the definitions of cloud computing again (see cheatsheet). In theory, this sharing could cause some problems. All cloud providers use some sort of virtualization – what it is, which vendor, which tech is completely irrelevant – so there is at least some risk of someone being able to break out of the virtualized jail and see your data or perform some other malicious activity. This is a very important risk, one to at least mitigate with encryption both in transit and at rest. Honestly, though, you should be doing this in any virtualized environment; it just makes for very good practice. Dare I say, it should be a best practice.
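
To make the “roles and groups” point concrete, here is a minimal sketch of Amazon’s IAM model using the boto3 library. The group name, user name, bucket, and policy are hypothetical placeholders, not anything from the original post – adapt them to your own environment:

```python
# Minimal sketch: least-privilege access control in AWS IAM via boto3.
# Assumes AWS credentials are configured and the IAM user already exists.
# All names below are hypothetical.
import json
import boto3

iam = boto3.client("iam")

# A policy that only allows reading one bucket - nothing else is granted.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-logs",
            "arn:aws:s3:::example-logs/*",
        ],
    }],
}

iam.create_group(GroupName="log-readers")
iam.put_group_policy(
    GroupName="log-readers",
    PolicyName="read-logs-only",
    PolicyDocument=json.dumps(read_only_policy),
)
iam.add_user_to_group(GroupName="log-readers", UserName="analyst")
```

The design choice worth noting: grant nothing by default and attach narrow policies to groups rather than individual users, so membership changes are one API call instead of a policy audit.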

4) API (Good and Evil)
I have a love/hate relationship with APIs (Application Programming Interfaces). I love them because they can make so many things so easy to do, at least the good ones. I hate them because they can often change without notice (depending on the provider) and they give providers yet another avenue for charging “micropayments”. Micropayments sound good in theory, but they do add up. Amazon, for instance, wants you to send email through their messaging API and charges you per message. I haven’t paid for email per message since…well, never. They claim it increases reliability and makes it better than sending directly from your EC2 instance. I find that claim a little suspect, but it’s their jail and their rules. Another big issue: if you buy the theory that the cloud is a jail for your apps, then APIs are the bars. They can really lock you into a provider. I despise vendor lock-in almost more than anything. There are cloud abstraction layers (such as Delta Cloud), but honestly I’ve never used them, and really they just add another layer of complexity. Deploying your cloud app is not like dating; it’s more akin to marriage, and divorcing it is hard, so remember to do your homework.
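
For a sense of what that per-message model looks like in practice, here is a minimal sketch of sending mail through Amazon’s SES API with boto3. The addresses are hypothetical, and note that every call to this API shows up on the bill:

```python
# Minimal sketch: sending a single (billed-per-message) email via Amazon SES.
# Assumes the sender address or domain has already been verified in SES.
import boto3

ses = boto3.client("ses", region_name="us-east-1")

ses.send_email(
    Source="alerts@example.com",                     # hypothetical verified sender
    Destination={"ToAddresses": ["ops@example.com"]},
    Message={
        "Subject": {"Data": "Nightly log rotation complete"},
        "Body": {"Text": {"Data": "All logs encrypted and uploaded."}},
    },
)
```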

Of course there is also the whole security angle of APIs that you have to consider. Is the transport encrypted? Is the data reliable and untainted? Are you sure you are pulling the correct data? These considerations cannot be overlooked, even in a cloud environment where you are encouraged to “trust the system.” Buyer should always beware.
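
A hedged sketch of what those checks can look like in code – enforcing TLS validation and verifying a checksum before trusting what an API hands back. The URL and expected digest are hypothetical stand-ins:

```python
# Minimal sketch: don't blindly trust API responses.
# Enforce TLS certificate validation and verify payload integrity.
import hashlib
import requests

EXPECTED_SHA256 = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"  # hypothetical

# verify=True (the default) makes requests validate the server's certificate.
resp = requests.get("https://api.example.com/v1/report", timeout=10, verify=True)
resp.raise_for_status()

digest = hashlib.sha256(resp.content).hexdigest()
if digest != EXPECTED_SHA256:
    raise ValueError("payload checksum mismatch - refusing to trust this data")
```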

5) Firewalls Are Dead… Well, Sorta
Real firewalls in the cloud are a great idea; most reputable providers at least have basic packet filtering available. But wouldn’t it be great to have a full-on firewall up there protecting your data? It is possible! Check Point, Cisco, and probably many others have full firewall instances (some with IPS) available for you to deploy. I think it’s a good idea and all, but I struggle to see how many people will actually use it. I mean, people hate firewalls as it is, for some strange reason (I blame willful ignorance). But now not only do you have to pay for the firewall license, you also have to pay for the CPU time to actually run it. Obviously we’re talking about a public cloud here; if you already have your own private cloud, you just need the license. Regardless of where you have your cloud, you should probably have a firewall to give you tighter control.
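
As an example of the basic packet filtering most providers expose, here is a minimal sketch of an EC2 security group rule via boto3. The group ID and office network CIDR are hypothetical:

```python
# Minimal sketch: EC2 security groups as basic packet filtering.
# Inbound is deny-by-default; explicitly allow SSH from one trusted network.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # hypothetical security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",   # hypothetical office network
            "Description": "SSH from the office only",
        }],
    }],
)
```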

6) Redundancy
One of the ways the cloud sells itself is on its instant super-redundancy and availability. As we’ve learned, even the large cloud providers are susceptible to downtime. As I discussed above in the Uptime/Downtime section, downtime just happens. The “more or less instant redundancy” marketing line is somewhat true: you can absolutely load balance your apps across multiple Amazon EC2 instances in multiple availability zones. But this isn’t some magic feature you just get – it costs extra. Don’t be fooled by that sort of marketing trick.
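
A minimal sketch of that cross-zone setup using boto3 and Amazon’s classic load balancer API. The balancer name, zones, and instance IDs are hypothetical – and each piece here is billed separately, which is exactly the “it costs extra” point:

```python
# Minimal sketch: load balancing across multiple availability zones.
# The load balancer and every instance behind it are billed separately.
import boto3

elb = boto3.client("elb", region_name="us-east-1")

elb.create_load_balancer(
    LoadBalancerName="webapp-lb",   # hypothetical
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
    AvailabilityZones=["us-east-1a", "us-east-1b"],   # spread across zones
)

# Register instances that live in different zones behind the balancer.
elb.register_instances_with_load_balancer(
    LoadBalancerName="webapp-lb",
    Instances=[{"InstanceId": "i-0aaa11112222bbbb3"},   # hypothetical IDs
               {"InstanceId": "i-0ccc44445555dddd6"}],
)
```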

As I wrote this section, I began thinking about the abstraction layers discussed in the API section and started to wonder: is it possible to build an application that is hosted and then load balanced across multiple cloud providers? I bet it is, but now my brain hurts (and I suspect if I did that, my wallet would be hurting too). Anyone doing that out there?

7) Encrypt Early, Encrypt Often
Before Amazon introduced the ability to encrypt in their storage offering (S3), I wrote a tool called logsup that allows me to automatically rotate (through logrotate), encrypt (through GPG), and upload (to S3) old log files. It takes some metadata and writes it up to Amazon’s SimpleDB service so I can easily search and figure out what data is in the encrypted log files. Of course I thought I was really clever when I wrote it, but then four days later Amazon introduced their encryption feature, which has better key management than GPG. Eventually I’ll rewrite logsup to take advantage of that, but until then I will keep stubbornly using it.
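
What follows is a hedged Python re-creation of the core logsup loop, not the actual tool: encrypt a rotated log with GPG, upload it to S3, and record a searchable “receipt”. The real logsup writes its receipt to SimpleDB; to keep this sketch short it stores the receipt as S3 object metadata instead. All paths, bucket names, and key IDs are hypothetical:

```python
# Hedged re-creation of the logsup idea: encrypt a rotated log with GPG,
# upload it to S3, and record a "receipt" describing what it contains.
# (logsup proper writes its receipt to SimpleDB; this sketch uses S3
# object metadata instead. All names below are hypothetical.)
import subprocess
import boto3

LOG = "/var/log/app/access.log.1"   # produced by logrotate
BUCKET = "example-log-archive"      # hypothetical bucket
KEY_ID = "ops@example.com"          # hypothetical GPG recipient key

# Encrypt BEFORE the data ever leaves our control; writes LOG + ".gpg".
subprocess.run(["gpg", "--encrypt", "--recipient", KEY_ID, LOG], check=True)

s3 = boto3.client("s3")
with open(LOG + ".gpg", "rb") as f:
    s3.put_object(
        Bucket=BUCKET,
        Key="access/access.log.1.gpg",
        Body=f,
        # The "receipt": where it came from and, abstractly, what's inside.
        Metadata={"source-host": "web01", "content": "http-access-logs"},
    )
```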

There are two primary lessons to take away from my logsup adventure. First, you should always encrypt sensitive data before it leaves your control. Second, you should always write a receipt for that data so you know where it came from and, at least abstractly, what type of data it contains. This allows some peace of mind that your data is safe and that you will be able to find it later when you need it most.

Depending on the deployment, encryption also offers some protection against snooping tenants when you’re using cloud storage or other less-private storage. It is not a replacement for strong access control or broader security precautions, but it can provide a decent layer of protection against basic prying eyes.

8) Cloud Is Cheap!
There are a number of different types of cloud service (see cheatsheet), and the whole “cloud is cheap” myth only holds up for a few of them. Cloud can be very cheap when you’re discussing Software as a Service (SaaS); e.g., Google’s Apps for Business is only around $5 per user per month, or $50 per user per year. You as an independent person or company cannot run a mail server for any number of users for less than that cost per user – the hardware alone would set you back more – so it makes very good financial sense to run your email in the cloud. Whether it makes good common sense is a different story, but I think it is becoming more generally accepted as a best practice to outsource your email, even if only for the cost benefit.
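
A back-of-the-envelope sketch of that math. Every figure below is an illustrative assumption, not a quote from any provider:

```python
# Back-of-the-envelope: hosted email vs. running your own mail server.
# Every figure here is an illustrative assumption, not a real quote.
users = 25
saas_per_user_year = 50           # e.g. Google Apps for Business pricing

server_hardware = 2000            # one modest server, amortized over 3 years
admin_hours_per_month = 10        # patching, spam filtering, backups
admin_hourly_rate = 75

saas_annual = users * saas_per_user_year
self_hosted_annual = (server_hardware / 3
                      + admin_hours_per_month * 12 * admin_hourly_rate)

print(f"SaaS:        ${saas_annual:,.0f}/year")     # -> $1,250/year
print(f"Self-hosted: ${self_hosted_annual:,.0f}/year")  # -> ~$9,667/year
```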

The story gets a lot murkier when you move away from software into infrastructure or platforms as services. Depending on your needs and usage, this can be far more expensive than running your own equipment, or much cheaper – again, it just depends on the needs. If you want to build a redundant platform or infrastructure with off-the-shelf hardware and Linux, prepare to pay for the privilege. It really depends, though; I’ve seen analyses where it is cheaper to do it yourself, so as with all advice, your mileage may vary.

9) Logs In The Cloud
There is a very persistent myth that you can’t get proper logging for your cloud applications, and this is patently untrue. An EC2 instance is just an operating system tweaked a little bit to run on Amazon’s infrastructure. There is nothing magical about it; it is the same as if you were running it on a VMware cluster, and you can get your logs from there just fine, right? Right? Of course you can – your application and OS will log the same as if you were hosting them locally. You could even put a log collection server in the cloud if you were so inclined, or use something like Loggly or Splunk Storm and have your log analysis up there too.
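
As a minimal sketch of that idea – an application on a cloud instance shipping its logs to a central collector exactly as it would on-premises. The collector hostname is hypothetical:

```python
# Minimal sketch: an app on a cloud instance logging to a remote collector,
# just as it would on-premises. The collector hostname is hypothetical.
import logging
import logging.handlers

logger = logging.getLogger("webapp")
logger.setLevel(logging.INFO)

# Forward over syslog (UDP 514 here; prefer TLS-wrapped TCP in production).
handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
handler.setFormatter(logging.Formatter("webapp: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("user login succeeded for account 42")
```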

When you start discussing SaaS or PaaS, the story gets a little darker, as you are not necessarily buying access to the logs – you are outsourcing completely, and the providers simply don’t provide that same level of visibility. I guess that is their call; you just need to be prepared. As we discussed in the Control Panels section, the type of visibility you get will depend on how well the control panel is architected. A lot of providers will give you access to logs for your specific instance (if only to cut down on support calls), but others will not. It is simply a matter of asking the right questions and, again, doing your homework.

10) Service Level Agreements (SLA)
When you are choosing a cloud provider, be sure you actually read the SLA. This is basically the agreement that spells out your interactions and expectations when dealing with your provider. This is the document that tells you how much uptime to expect (they all say 99.999% uptime; they are almost all deceitful) and, more importantly, what sort of compensation you will get if they violate their SLA. Expect a lot of lawyer-speak here, so if you are putting something really critical in the cloud, have your lawyer read it over. You usually won’t have a lot of negotiation room, but at least you’ll be able to plan for the possible risks with a clear head. Typically an SLA will link out to a document describing the security precautions the provider takes to protect your data. This can be crucially important to have so you can effectively add in technology to fill the gaps, though these documents tend to be a bit vague.
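
To put those uptime claims in perspective, here is a quick sketch converting SLA percentages into the downtime they actually allow per year:

```python
# Quick arithmetic: what an SLA uptime percentage actually permits per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in (99.9, 99.95, 99.99, 99.999):
    allowed = MINUTES_PER_YEAR * (1 - nines / 100)
    print(f"{nines}% uptime -> {allowed:.1f} minutes of downtime per year")
```

At 99.999% that works out to roughly five minutes a year, which is why the claim deserves skepticism – our 30-hour storm outage blew through several decades’ worth of that allowance in one weekend.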

While this list wasn’t entirely security focused, the intent was to help guide folks looking into cloud deployments for their organizations and to better prepare them for the differences in securing those environments. Hopefully it met those goals and more. Please send any feedback on this list to [email protected].
