Immutable Infrastructure with Ansible and Packer

by Marko Locher from Codeship

At Codeship we run immutable servers, which we internally call Checkbot. These are the machines responsible for running your tests, deploying your software, and reporting the results back to our web application. Of course, there are constant changes to the setup of these images: new software needs to be installed, packages upgraded, old software versions removed. Let’s see how we do that!

Vagrant and Packer Workflow

The software stack used for building and testing these images in our current workflow consists of Vagrant for development, Packer for the actual image generation, and a series of shell scripts for provisioning. This has worked fine for the last few years, but as our team grows and more people make changes to the scripts, it can easily get out of hand and become confusing. So we were looking for a lightweight tool to replace our shell scripts. As we didn’t want an agent running to watch over the host, most configuration management tools were not an acceptable solution.

Using Ansible

Ansible, with its YAML-based syntax and agentless model, fits quite nicely. We are still in the process of getting started, but the experience was so good I couldn’t wait to share my findings. Maybe this post can convince you to take a look at Ansible and get started with configuration management yourself.

Getting started with Ansible

According to their website, “Ansible is the simplest way to automate IT.” You could compare it to other configuration management systems like Puppet or Chef, but those are complicated to set up and require the installation of an agent on every node. Ansible is different. You simply install it on your machine, and every command you issue is run via SSH on your servers. There is nothing you need to install on your servers, and there are no running agents either.

# Ansible installation via pip
$ sudo pip install ansible
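
Once installed, you can already run ad-hoc commands against your servers over plain SSH. A quick sanity check might look like this (the inventory file path is just an example, not part of our setup):

# Ping every host listed in the inventory; no agent required
$ ansible all -i inventory -m ping

# Run an arbitrary command on all hosts
$ ansible all -i inventory -a "uptime"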

Something that took me a while to appreciate is the fact that Ansible playbooks (the counterpart to Chef cookbooks or Puppet modules) are plain YAML files. This makes certain aspects a bit harder, but it keeps the playbooks simple and easy to understand, even for somebody who doesn’t know a lot about Ansible. (Try writing complicated shell commands with multiple levels of quoting and you will see what I mean.) For a more thorough introduction, please see the Ansible homepage and don’t forget to check the fantastic docs available at http://docs.ansible.com.
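
To give you an idea, here is a minimal playbook sketch; the host group and the package are placeholders for illustration, not part of our actual setup:

---
# file: example.yml (a minimal playbook sketch, illustrative only)
- hosts: all
  sudo: yes
  tasks:
    - name: Ensure ntp is installed
      apt:
        pkg: ntp
        state: present

# Run it with:
# $ ansible-playbook -i inventory example.yml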

Building Immutable Infrastructure with Ansible

I started with the default integrations in Packer and Vagrant, which are straightforward to set up and require just a few lines of configuration.

Packer

{
    "provisioners": [
        {
            "type": "shell",
            "execute_command": "echo 'vagrant' | {{ .Vars }} sudo -E -S sh '{{ .Path }}'",
            "inline": [
                "sleep 30",
                "apt-add-repository ppa:rquillo/ansible",
                "/usr/bin/apt-get update",
                "/usr/bin/apt-get -y install ansible"
            ]
        },
        {
            "type": "ansible-local",
            "playbook_file": "../ansible/checkbot.yml",
            "role_paths": [
                "../ansible/roles/*"
            ]
        }
    ]
}

Vagrant

# Provisioning with ansible
config.vm.provision "ansible" do |ansible|
    ansible.inventory_path = "ansible/inventory"
    ansible.playbook = "ansible/checkbot.yml"
    ansible.sudo = true
end

But I decided to replace those with a couple of shell scripts to get more flexibility when calling Ansible. This also allows me to compensate for certain differences in the way Ansible is integrated with Packer and Vagrant, since removing any possible differences is key to avoiding subtle bugs in testing vs. production.
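
Such a wrapper can stay very small. Here is a sketch of what one might look like; the paths and defaults are illustrative, not our exact setup:

#!/bin/sh
# provision.sh: call ansible-playbook the same way from Packer and Vagrant,
# so both environments are provisioned identically (paths are placeholders)
set -e

PLAYBOOK=${1:-ansible/checkbot.yml}
INVENTORY=${2:-ansible/inventory}

ansible-playbook -i "$INVENTORY" "$PLAYBOOK" --sudo

As an example of the playbooks themselves, take our current code for creating an LXC container and configuring some basic settings. I’m sure that, even without any further explanation, you can quite easily figure out what each item is supposed to do.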

Config.j2

# Template used to create this container: /usr/share/lxc/templates/lxc-ubuntu
# Parameters passed to the template:
# For additional config options, please look at lxc.conf(5)

# Common configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf

# Container specific configuration
lxc.rootfs = /var/lib/lxc/{{lxc_container}}/rootfs
lxc.mount = /var/lib/lxc/{{lxc_container}}/fstab
lxc.utsname = {{lxc_container}}
lxc.arch = amd64

# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:11:f6:6c

# cgroup configuration
lxc.cgroup.memory.limit_in_bytes = {{lxc_memory_limit}}M

# Hooks
lxc.hook.pre-start = /var/lib/lxc/{{lxc_container}}/pre-start

config.yml

---
# file: host/defaults/main.yml

# LXC
lxc_container: codeship
lxc_memory_limit: 15360

lxc.yml

---
# file: host/tasks/lxc.yml

- name: LXC | Installation
  apt:
    pkg: "{{item}}"
    state: present
  with_items:
    - lxc
    - lxc-templates
    - debootstrap
    - bridge-utils
    - socat

- name: LXC | Check configuration
  command: lxc-checkconfig

- name: LXC | Create new container
  command: "lxc-create -n {{lxc_container}} -t ubuntu creates=/var/lib/lxc/{{lxc_container}}/"

- name: LXC | Create container configuration
  template: src=lxc/config.j2 dest=/var/lib/lxc/{{lxc_container}}/config

- name: LXC | Install pre-start hook
  template: src=lxc/pre-start.j2 dest=/var/lib/lxc/{{lxc_container}}/pre-start mode=0744 owner=root group=root

pre-start.j2

#!/bin/sh

# setup ssh access for the root user
mkdir -p /var/lib/lxc/{{lxc_container}}/rootfs/root/.ssh/
cp ~ubuntu/.ssh/id_rsa.pub /var/lib/lxc/{{lxc_container}}/rootfs/root/.ssh/authorized_keys

# setup ssh access for the rof user
if [ -d "/var/lib/lxc/{{lxc_container}}/rootfs/home/rof/" ]; then
  mkdir -p /var/lib/lxc/{{lxc_container}}/rootfs/home/rof/.ssh/
  cp ~ubuntu/.ssh/id_rsa.pub /var/lib/lxc/{{lxc_container}}/rootfs/home/rof/.ssh/authorized_keys
fi
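
These files follow the conventional Ansible role layout, with host/tasks/main.yml presumably including lxc.yml. The top-level playbook tying it all together might look roughly like this (a sketch with an assumed layout; the real checkbot.yml contains more than what is shown here):

---
# file: checkbot.yml (rough sketch of the top-level playbook, assumed layout)
- hosts: all
  sudo: yes
  roles:
    - host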

This is only the beginning and a small step toward configuring a whole build system for use by Codeship, but it shows the beauty of Ansible. It is extremely simple to understand. It provides a good abstraction of commonly needed patterns, like package installation, templates for configuration files, variables to be used by playbooks or configuration files, and a lot more. And it doesn’t require any software installation on the host except an SSH server, which is pretty standard anyway.

And in combination with Packer, we have an environment that lets us build our production system running on EC2 as simply as a box used for development with Vagrant. And that’s great, because it makes our team more productive.
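
In day-to-day use, the same playbook drives both paths. The commands below show the general idea; the Packer template filename is illustrative:

# Build the production image with Packer
$ packer build checkbot.json

# Bring up and provision a local development box with Vagrant
$ vagrant up
$ vagrant provision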


What’s possible with Ansible

Nevertheless, we are far from finished. I am just starting to learn what is possible with Ansible and what modules are available. Some of the items on my checklist for the next few months include:

  • running multiple playbooks in parallel to speed up provisioning
  • getting to know the module system a lot better, and possibly writing some modules myself
  • fine-tuning the output generated by Ansible
  • converting all the remaining shell scripts to playbooks, which is going to be the biggest part

What do YOU think about Ansible? If you have ideas or suggestions to improve our workflow, please let us know in the comments!
