Top Six Ruby on Rails Deployment Methods in AWS: Pros & Cons

I’ll examine various deployment choices in detail, walk through a thorough analysis and then provide recommendations

Setting up a deployment process in the cloud involves a variety of choices, and most likely you're prepared to make some tradeoffs. But getting a clear view across those potential tradeoffs can be difficult. Here are six popular deployment methods and advice for making the best choice for your organization's needs.

Let's assume you're choosing a deployment for a small startup with fewer than 20 developers that hosts a web app that's gaining traction and is expected to grow rapidly. Its requirements are as follows:

  • Autoscaling support to handle expected surges in demand
  • Maximizing developer efficiency by automating tedious tasks and improving dev flow
  • Encouraging mature processes for building a stable foundation as the codebase grows
  • Maintaining flexibility and agility to handle hotfixes of a relatively immature codebase
  • Relying on as few external sources as possible, because any one of them can cause a deployment to fail - imagine GitHub being down or a required plugin becoming unavailable

Narrowing the focus a bit more, let's assume the codebase uses Ruby on Rails, as is often the case. We'll examine various deployment choices in detail, walk through a thorough analysis and then provide recommendations for anyone who fits our sample client profile.

1. The Plain Vanilla AMI Method
This is the standard, well-tested approach recommended by Amazon OpsWorks. Each time a new node comes up, it starts fresh and runs all Chef recipes from scratch. To automate the process, cloud-init runs the bootstrap scripts, and code and environment updates are applied to nodes while they're running.
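
As a rough illustration (the package names, repository and service below are hypothetical, not taken from this article), this is the kind of full Chef run a plain vanilla node would converge on every single boot - which is exactly why bring-up is slow and depends on many external sources:

    # Hypothetical Chef recipe converged from scratch on every fresh instance.
    # System packages, Ruby gems and the application itself are all installed
    # at boot, so each step is another moving part that can fail.

    %w(git build-essential nginx).each do |pkg|
      package pkg                          # install OS packages on the new node
    end

    git '/var/www/myapp' do
      repository 'git@github.com:example/myapp.git'  # external dependency: an outage here blocks scaling
      revision   'master'
      action     :sync
    end

    execute 'bundle install' do
      cwd  '/var/www/myapp'
      user 'deploy'
    end

    service 'unicorn' do
      action [:enable, :start]             # start the Rails app server once everything is in place
    end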

Pros: This approach requires no AMI management. The process is straightforward, self-documenting and brings up a clean environment every time. Updates and patches are applied very quickly.

Cons: Bringing up new instances is extremely slow, there are many moving parts, and there's a high risk of failure.

Bottom Line: While this is a clean solution, the high failure rate and the amount of time needed to bring up instances make the Plain Vanilla AMI method impractical for a use case with autoscaling.

2. The Bake-Everything AMI Method
This deployment option is proven to work at Amazon Video and Netflix. It runs all Chef recipes once, fetches the codebase and then bakes the AMI and uses it. Every change - whether to code or environment - requires baking a new AMI and replacing the Auto Scaling group behind the ELB.
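
A minimal sketch of that bake-and-swap step, using the Ruby aws-sdk gem (the instance ID, resource names and instance type are placeholders, and error handling is omitted):

    # Hypothetical bake-and-swap sketch: create an AMI from a fully configured
    # instance, then point the Auto Scaling group behind the ELB at a new
    # launch configuration that uses it.
    require 'aws-sdk'

    ec2 = Aws::EC2::Client.new(region: 'us-east-1')
    asg = Aws::AutoScaling::Client.new(region: 'us-east-1')
    ts  = Time.now.to_i

    # 1. Bake the AMI from an instance that already has environment and code installed.
    image = ec2.create_image(instance_id: 'i-0abc12345', name: "myapp-#{ts}")
    ec2.wait_until(:image_available, image_ids: [image.image_id])

    # 2. Create a launch configuration pointing at the freshly baked AMI.
    asg.create_launch_configuration(
      launch_configuration_name: "myapp-lc-#{ts}",
      image_id:                  image.image_id,
      instance_type:             'm3.medium'
    )

    # 3. Swap the Auto Scaling group onto the new launch configuration; new
    #    instances now boot from the baked image with nothing left to install.
    asg.update_auto_scaling_group(
      auto_scaling_group_name:   'myapp-asg',
      launch_configuration_name: "myapp-lc-#{ts}"
    )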

Keep in mind that the environment and configuration management parts of the deployment still need automation using tools like Chef and Puppet. Lack of automation can otherwise make AMI management a nightmare, as one tends to lose track of how the environment actually looks within the AMI.

Pros: Provides the fastest bring-up, requires no installation at launch, and has the fewest moving parts, so error rates are very low.

Cons: Each code deployment requires baking a new AMI. This requires a lot of effort to ensure that the process is as fast as possible in order to avoid developer bottlenecks. This setup also makes it harder to deploy hotfixes.

Bottom Line: This is generally a best practice, but requires a certain level of codebase maturity and a high level of infrastructure sophistication. For example, Netflix has spent a lot of time speeding up the process of baking AMIs by using their Aminator project.

3. A Hybrid Method Using Chef to Handle Complete Deployment
This method strikes a balance between the Plain Vanilla AMI and the Bake-Everything AMI. An AMI is baked with the configuration and environment set up by Chef, but it does not check out the codebase or deploy the app; Chef does that once the node is brought up.
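
A minimal sketch of what that boot-time Chef run might look like (repository, paths and service names are hypothetical): because the environment is already baked into the AMI, the run list shrinks to fetching the code and restarting the app server.

    # Hypothetical boot-time recipe for the Chef hybrid: the AMI already carries
    # Ruby, gems and system packages, so Chef only has to deploy the application.
    deploy_revision '/var/www/myapp' do
      repo              'git@github.com:example/myapp.git'  # live dependency on the repository
      revision          'master'
      user              'deploy'
      migrate           true
      migration_command 'bundle exec rake db:migrate'
      notifies          :restart, 'service[unicorn]'
    end

    service 'unicorn' do
      supports restart: true
      action   :nothing                    # only restarted when a new revision is deployed
    end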

Pros: Since all packages are pre-installed, this method is significantly faster than using a Plain Vanilla AMI. Also, since the code is pulled once a node is commissioned, the ability to provide hotfixes is improved.

Cons: Because we're relying on Chef in production, there's a dependency on the repository, and pulling from the repository may fail.

Bottom Line: We consider this to be a medium-risk implementation due to its reliance on Chef.

4. A Hybrid Method Using Capistrano to Handle Code Deployment
This is similar to the hybrid Chef deployment approach, but with code deployed through Capistrano. Capistrano is a mature platform for deploying Rails code and includes several features and fail-safe mechanisms that make it better suited to this task than Chef. In particular, if a pull from the repository fails, Capistrano can fall back to an older revision from the releases it keeps on disk.
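
For illustration, a minimal Capistrano 3-style config/deploy.rb for this setup might look like the sketch below (application name, repository and restart mechanism are placeholders); the retained releases are what make falling back to an older revision possible.

    # Hypothetical config/deploy.rb for the Capistrano hybrid: the AMI carries
    # the environment, Capistrano ships the code and keeps timestamped releases
    # it can fall back to.
    lock '3.4.0'

    set :application,   'myapp'
    set :repo_url,      'git@github.com:example/myapp.git'
    set :deploy_to,     '/var/www/myapp'
    set :keep_releases, 5                                   # older releases double as the fail-safe
    set :linked_dirs,   %w(log tmp/pids tmp/sockets public/system)

    namespace :deploy do
      after :publishing, :restart do
        on roles(:app) do
          execute :touch, release_path.join('tmp/restart.txt')  # e.g., a Passenger-style restart
        end
      end
    end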

Pros: The same as for the Chef hybrid, except that Capistrano is more mature than Chef, especially in handling repository failures.

Cons: It requires two tools instead of one, which increases management overhead. In addition, because environment and code are handled by separate tools, the gap between them is wider and keeping the two in sync is more difficult.

Bottom Line: Capistrano is a better Rails solution for code deployment than Chef, and the ability to apply fixes quickly may make it the best solution.

5. The AMI-Bake and CRON-Based Chef-Client Method
This deployment method resembles the hybrids. However, changes propagate automatically because each instance runs chef-client on a cron schedule every N minutes, and new AMIs are baked only for major changes. It can support continuous deployment, but continuous deployment is an aggressive tactic that requires excellent continuous integration on the back end.
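
A minimal sketch of the idea, baked into the AMI as a Chef recipe (the interval and log path are arbitrary choices; the community chef-client cookbook provides a comparable cron recipe):

    # Hypothetical recipe: schedule chef-client from cron so running instances
    # pick up code and configuration changes every N minutes without a re-bake.
    cron 'chef-client' do
      minute  '*/15'                                               # N = 15 in this sketch
      user    'root'
      command 'chef-client --once >> /var/log/chef-cron.log 2>&1'
    end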

Pros: Allows continuous code deployment.

Cons: It's prone to errors if Continuous Integration is not stable. In addition, Chef re-bootstraps aren't reliable and may fail.

Bottom Line: Not recommended unless CI is solid.

6. The Cloud-Init and Docker Method
All indications are that Docker is the best choice for this use case. It comes close to a bake-everything solution while getting around bake-everything's biggest drawbacks. AMIs are baked once and rarely change after that. Both the environment and the app code live inside an LXC container, with each AMI hosting a single container. Deploying code simply means pushing a new container, which gives the deployment process its flexibility.
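
A rough sketch of what a per-instance deploy step could look like under this model (image name, registry and ports are placeholders): the baked AMI only needs Docker installed, and shipping new code means swapping the running container.

    #!/usr/bin/env ruby
    # Hypothetical container-swap script: pull the new image, replace the old
    # container, and the code plus its environment travel together.
    IMAGE     = 'registry.example.com/myapp:latest'
    CONTAINER = 'myapp'

    system("docker pull #{IMAGE}") or abort('image pull failed')

    # Stop and remove the previous release if one is running (failures ignored).
    system("docker stop #{CONTAINER}")
    system("docker rm #{CONTAINER}")

    # Start the new release, mapping the host port to the Rails app inside.
    system("docker run -d --name #{CONTAINER} -p 80:3000 #{IMAGE}") or
      abort('container start failed')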

Pros: Docker containers provide a history that lets you compare containers, which helps with the problem of undocumented steps in image creation. Code and environment are tied together. The repository structure of containers makes deployment faster than baking a new AMI does. Docker also helps to create a local environment similar to the production environment.

Cons: Docker is still in early phases of development and suffers from some growing pains, including a few bugs, a limited tools ecosystem, some app compatibility issues and a limited feature set.

Bottom Line: If you adopt this approach, you'll be doing considerable trailblazing. There's little information available, so comparing notes with other pioneers will be helpful.

While there are many options for deploying Ruby on Rails in AWS environments, there isn't a single best solution. Taking the time to review the options and tradeoffs can save headaches along the way. Talk to peers and experienced consultants about their experiences before making a final decision.

What are your thoughts on using these deployment methods?

More Stories By Ali Hussain

Ali Hussain is CTO & Co-Founder of Flux7 Labs. He has been designing scalable and distributed systems for the last decade and is an AWS Certified Solutions Architect, Associate Level, earning this recognition with a score of 95%.

He began his career at Intel as part of the performance modeling team for Intel’s Atom microprocessor where he focused on benchmarking, power usage and workload optimization. Ali spent four years focused on performance modeling at ARM, Inc. At ARM he optimized the latency and throughput characteristics of systems, modeled performance, and brought a data-driven methodology to performance analyses. Ali acquired his passion for distributed systems while earning his MS at the University of Illinois at Urbana-Champaign. His Bachelor of Science (High Honors) in Computer Engineering was obtained from the University of Texas at Austin.

His current interests at Flux7 are in enterprise migration and configuration management.
