Ten Things I’ve Learned About Cloud Security

This is not a Top 10 list – it is a list of 10 things I’ve learned along the way. Top 10 lists imply some universal knowledge of the “top” things possible in a given field. Top 10 attractive women, top 10 guitar players, top 10 whatever – they all have one thing in common: they are all ten things the author thinks are the best. I don’t like to pretend I know everything, so this list is in no particular order. This particular list is about cloud security, a big topic that interests me greatly and one there is no way I can cover fully in a blog post. As a result I will be presenting on this topic in a few places, including BSides Cleveland.

Anyway, cloud security is tough for a lot of reasons, not least because you, like me, probably only understand the basics of what you interface with in the cloud – the controls the cloud provider allows you to see. That lack of visibility into the underlying management introduces plenty of security-related challenges. Having said that, let’s explore:

1) Control Panels
Control panels are simultaneously the best and worst aspect of a given cloud provider’s offerings. They can enable you to do really great things or handicap you by not allowing enough fine-grained control. They can enhance the security of your slice of the cloud infrastructure and then cut it off at the knees, sometimes both in the same feature. If a control is very granular and allows heavy customization, you can make spectacular infrastructure decisions while just as easily forgetting to make some necessary security adjustments. If the controls aren’t granular enough – i.e. the provider made those decisions for you – then your abilities are limited. In general, control panels are a double-edged sword…and a balancing act…usually performed while juggling razor-sharp ninja stars. Not an easy job.

2) Uptime/Downtime
This is a problem, but not one specific to the cloud – it is a problem specific to computers. You will have downtime no matter where you host your services or what you do to prevent it. (Author’s Note: I have spent a large portion of my company’s overall budget trying to avoid downtime. It still happens; it’s just better mitigated.) Some will argue uptime is worse in the cloud than if you hosted everything yourself, but depending on who you are this may or may not be true. It just depends on how much trouble you want to go through to keep critical assets up – or rather, how much you want to spend to achieve a good uptime ratio. In the public cloud the cost is spread around, so it is naturally a bit cheaper. If you are doing it yourself, you are footing the entire cost. It’s a simple equation, really: how much downtime can you afford? Be careful here – the cloud is not always cheaper than doing it yourself; check out the Cloud Is Cheap section.

Side note: While I was editing this post and getting its accompanying presentation ready, Amazon Web Services had their big storm-related outage, and one of our apps was in the wrong zone at the wrong time, bringing it down for about 30 hours total. Luckily it was a weekend, so no one was using it. Still, there is no greater feeling of helplessness than when your service is down and completely out of your control. I’m like this whenever my phone or data center provider has problems too, so I’ve gotten used to it. A bottle of Pepto and lots of patience are required for any sort of cloud endeavor.

3) Access Control
There is a “myth” that you have no concept of access control in the cloud. In most cases, at least with the reputable providers, you do have a decent ACL system. In Amazon you can set up roles and assign folks to groups – not half bad. The problem comes in when you actually MEAN access control. With very few exceptions you are running on shared resources in the cloud, not dedicated equipment. If you were under the impression it wasn’t shared, perhaps we need to revisit the definitions of cloud computing (see cheatsheet). In theory, this sharing could cause some problems. All cloud providers use some sort of virtualization – which one, from which vendor, with which tech is completely irrelevant – and there is at least some risk of someone breaking out of the virtualized jail and seeing your data or performing some other malicious activity. This is an important risk, one to mitigate at minimum with encryption both in transit and at rest. Honestly, you should be doing this in any virtualized environment; it just makes for very good practice. Dare I say, it should be a best practice.
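As for Amazon’s roles and groups mentioned above, here is a minimal sketch using today’s boto3 SDK (which postdates this post); the group, policy, and user names are hypothetical examples:

```python
import boto3

iam = boto3.client("iam")

# Create a group and grant it a read-only canned policy.
iam.create_group(GroupName="log-readers")
iam.attach_group_policy(
    GroupName="log-readers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Put users in groups rather than handing out one-off permissions.
iam.add_user_to_group(GroupName="log-readers", UserName="alice")
```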

4) API (Good and Evil)
I have a love/hate relationship with APIs (Application Programming Interfaces). I love them because the good ones can make so many things easy to do. I hate them because they can change without notice (depending on the provider) and they give providers yet another avenue for charging “micro payments”. Micro payments sound good in theory, but they add up. Amazon, for instance, wants you to send email through their messaging API and charges you per message. I haven’t paid for email per message since…well, never. They claim it increases reliability and makes things better than sending directly from your EC2 instance. I find that claim a little suspect, but it’s their jail and their rules. Another big issue: if you buy the theory that the cloud is a jail for your apps, then APIs are the bars. They can really lock you into a provider, and I despise vendor lock-in almost more than anything. There are cloud abstraction layers (such as Delta Cloud), but honestly I’ve never used them, and really they just add another layer of complexity. Deploying your cloud app is not like dating – it’s more akin to marriage, and divorce is hard, so remember to do your homework.
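For what it’s worth, that per-message email API is presumably what became Amazon SES. Sending through it is only a few lines with today’s boto3 SDK – the addresses below are hypothetical, and SES requires you to verify senders before it will deliver anything:

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Every call here is billed per message.
ses.send_email(
    Source="alerts@example.com",
    Destination={"ToAddresses": ["ops@example.com"]},
    Message={
        "Subject": {"Data": "Nightly log rotation complete"},
        "Body": {"Text": {"Data": "All log archives uploaded successfully."}},
    },
)
```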

Of course there is also the whole security angle of APIs that you have to consider. Is the transport encrypted? Is the data reliable and untainted? Are you sure you are pulling the correct data? These considerations cannot be overlooked, even in a cloud environment where you are encouraged to “trust the system.” Buyer should always beware.
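Those checks don’t have to be elaborate. A minimal sketch in Python covering the basics – TLS certificate validation plus an integrity check – where the URL and the checksum header name are hypothetical:

```python
import hashlib

import requests

# verify=True (the default) rejects invalid or mismatched certificates.
resp = requests.get("https://api.example.com/v1/report", verify=True, timeout=30)
resp.raise_for_status()

# If the provider publishes a checksum, compare before trusting the data.
expected = resp.headers.get("X-Content-Sha256")  # hypothetical header
actual = hashlib.sha256(resp.content).hexdigest()
if expected and actual != expected:
    raise ValueError("API response failed integrity check")
```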

5) Firewalls Are Dead… Well, Sorta
Real firewalls in the cloud are a great idea, and most reputable providers at least have basic packet filtering available. But wouldn’t it be great to have a full-on firewall up there protecting your data? It is possible! Check Point, Cisco, and probably many others have full firewall instances (some with IPS) available for you to deploy. I think it’s a good idea and all, but I struggle to see how many people will actually use it. I mean, people hate firewalls as it is, for some strange reason (I blame willful ignorance). But now not only do you have to pay for the firewall license, you also have to pay for the CPU time to actually run it. Obviously we’re talking about a public cloud here; if you already have your own private cloud, you just need the license. Regardless of where your cloud lives, you should probably have a firewall to give you tighter control.
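On Amazon, that basic packet filtering takes the form of security groups. A quick sketch with today’s boto3 SDK – the group ID and the office network range are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical placeholder
    IpPermissions=[
        # Allow HTTPS from anywhere...
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        # ...but restrict SSH to the office network.
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},
    ],
)
```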

6) Redundancy
One of the ways the cloud sells itself is on its instant super-redundancy and availability. As we’ve learned, even the large cloud providers are susceptible to downtime. As I discussed in the uptime/downtime section, downtime just happens. The “more or less instant redundancy” marketing line is somewhat true: you can absolutely load balance your apps across multiple Amazon EC2 instances in multiple availability zones. But this isn’t some magic feature you just get – it costs extra. Don’t be fooled by that sort of marketing trick.
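For the curious, the multi-zone setup looks roughly like this with today’s boto3 SDK – a sketch with hypothetical names, and remember that every zone and every balanced instance shows up on the bill:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Spread the app across two availability zones behind one balancer.
elb.create_load_balancer(
    LoadBalancerName="my-app-lb",
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 8080}],
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```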

As I wrote this section I began thinking about the abstraction layers discussed in the API section and started to wonder: is it possible to build an application that is hosted on, and load balanced across, multiple cloud providers? I bet it is, but now my brain hurts (and I suspect if I did that my wallet would be hurting too). Anyone doing that out there?

7) Encrypt Early, Encrypt Often
Before Amazon introduced the ability to encrypt in their storage offering (S3), I wrote a tool called logsup that would automatically rotate (through logrotate), encrypt (through GPG), and upload (to S3) old log files. It takes some metadata and writes it up to Amazon’s SimpleDB service so I can easily search and figure out what data is in the encrypted log files. Of course I thought I was really clever when I wrote it – then four days later Amazon introduced their encryption feature, which has better key management than my GPG setup. Eventually I’ll rewrite logsup to take advantage of that, but until then I will keep stubbornly using it.
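logsup itself isn’t reproduced here, but a minimal sketch of the same rotate-encrypt-upload-receipt idea – using today’s boto3 SDK, with hypothetical file, bucket, and domain names – goes something like this:

```python
import subprocess

import boto3

LOG = "/var/log/app.log.1"          # already rotated by logrotate
BUCKET = "example-log-archive"      # hypothetical bucket
RECIPIENT = "ops@example.com"       # hypothetical GPG key

# Encrypt the rotated log with GPG before it leaves our control.
subprocess.run(["gpg", "--encrypt", "--recipient", RECIPIENT, LOG], check=True)

# Upload the ciphertext to S3.
s3 = boto3.client("s3")
s3.upload_file(LOG + ".gpg", BUCKET, "app.log.1.gpg")

# Write a receipt to SimpleDB so the archive is searchable later.
sdb = boto3.client("sdb")
sdb.put_attributes(
    DomainName="log-receipts",
    ItemName="app.log.1.gpg",
    Attributes=[{"Name": "source", "Value": "webserver01"},
                {"Name": "type", "Value": "application log"}],
)
```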

There are two primary lessons to take away from my logsup adventure. First, you should always encrypt sensitive data before it leaves your control. Second, you should always write a receipt for that data so you know where it came from and, at least abstractly, what type of data it contains. This will allow some peace of mind that your data is safe and that you will be able to find it later when you need it most.

Depending on the deployment, encryption also offers some protection against snooping tenants when you’re using cloud storage or other shared, less private storage. It is not a replacement for strong access control or broader security precautions, but it can provide a decent layer of protection against basic prying eyes.
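If you’d rather lean on Amazon’s server-side encryption feature mentioned above, it’s a one-parameter change at upload time. A small sketch, with hypothetical bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

with open("report.csv", "rb") as f:
    s3.put_object(
        Bucket="example-data",          # hypothetical bucket
        Key="reports/report.csv",
        Body=f,
        ServerSideEncryption="AES256",  # Amazon manages the keys
    )
```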

8) Cloud Is Cheap!
There are a number of different types of cloud service (see cheatsheet), and the whole “cloud is cheap” myth only holds up for a few of them. Cloud can be very cheap when you’re discussing Software as a Service (SaaS); e.g. Google Apps for Business is only around $5 per user per month, or $50 per user per year. As an independent person or company, you cannot run a mail server for any number of users for less than that cost per user. The hardware alone would set you back more, so it makes very good financial sense to run your email in the cloud. Whether it makes good common sense is a different story, but I think it is becoming more generally accepted as a best practice to outsource your email, even if only for the cost benefit.

The story gets a lot murkier when you move away from software into infrastructure or platforms as services. Depending on your needs and usage, this can be way more expensive than running your own stuff or much cheaper – it just depends. If you want to build a redundant platform or infrastructure with off-the-shelf hardware and Linux, prepare to pay for the privilege. Then again, I’ve seen analyses where it is cheaper to do it yourself, so as with all advice, your mileage may vary.
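A quick back-of-the-envelope comparison shows why the answer swings both ways. Every number below is made up purely for illustration – plug in your own:

```python
# Hosted email versus running it yourself, per year (illustrative numbers).
users = 50
saas_per_user_year = 50                  # e.g. hosted email at $50/user/year
saas_yearly = users * saas_per_user_year

hardware = 6000                          # server hardware, amortized over 3 years
admin_hours_per_month = 10               # care and feeding of the mail server
hourly_rate = 75
self_hosted_yearly = hardware / 3 + admin_hours_per_month * 12 * hourly_rate

print(f"SaaS:        ${saas_yearly:,.0f}/year")
print(f"Self-hosted: ${self_hosted_yearly:,.0f}/year")
```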

9) Logs In The Cloud
There is a very persistent myth that you can’t get proper logging for your cloud applications, and it is patently untrue. An EC2 instance is just an operating system tweaked a little bit to run on Amazon’s infrastructure. There is nothing magical about it – it is the same as if you were running it on a VMware cluster, and you can get your logs from there just fine, right? Right? Of course you can: your application and OS will log the same as if you were hosting them locally. You could even put a log collection server in the cloud if you were so inclined, or use something like Loggly or Splunk Storm and have your log analysis up there too.
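Shipping those logs to a central collector is nothing exotic either. A minimal sketch using Python’s standard logging module – the collector hostname is a hypothetical placeholder:

```python
import logging
import logging.handlers

# Forward application logs to a central syslog collector (UDP by default).
handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("request handler started")
```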

When you start discussing SaaS or PaaS, the story gets a little darker, as you are not necessarily buying access to the logs – you are outsourcing completely, so the providers simply don’t provide that same level of visibility. I guess that is their call; you just need to be prepared. As we discussed in the control panels section, the type of visibility you get will depend on how well the control panel is architected. A lot of providers will give you access to logs for your specific instance (if only to cut down on support calls), but others do not. It is simply a matter of asking the right questions and, again, doing your homework.

10) Service Level Agreements (SLA)
When you are choosing a cloud provider, be sure you actually read their SLA. This is the agreement that spells out your interactions and expectations when dealing with your provider. It will tell you how much uptime to expect (they all say 99.999%; they are almost all deceitful) and, more importantly, what sort of compensation you will get if they violate their SLA. Expect a lot of lawyer-speak here, so if you are putting something really critical in the cloud, have your lawyer read it over. You usually won’t have a lot of negotiation room, but at least you’ll be able to plan for the possible risks with a clear head. Typically an SLA will link out to a document describing the security precautions the provider takes to protect your data. This can be crucially important to have so you can effectively add in tech to fill the gaps, though these documents tend to be a bit vague.
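For perspective on what those nines actually promise, the arithmetic is simple:

```python
# How much downtime per year each uptime percentage actually allows.
minutes_per_year = 365 * 24 * 60

for nines in (99.0, 99.9, 99.99, 99.999):
    allowed = minutes_per_year * (1 - nines / 100)
    print(f"{nines}% uptime allows {allowed:,.1f} minutes of downtime per year")
```

Five nines works out to barely five minutes a year – compare that to the 30-hour outage in the side note above and you can see why the compensation clause matters.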

While this list wasn’t entirely security focused, the intent was to help guide folks looking into cloud deployments for their organizations and to better prepare them for the differences in securing those environments. Hopefully it met those goals and more. Please send any feedback on this list to [email protected].


