OH: We Use Puppet, Chef and a Grain of Salt.

Choice is great, as long as you make a great choice.

In the configuration management tools space we are spoilt for choice. This wasn’t so a couple of years back. I would never want to go back to those hand-cranking days where deployment and release management tasks were simply a nightmare.

But with choices, an added responsibility falls upon our shoulders to actually make a choice and stick with it.

Recently I have been noticing a trend in various projects. I joined a startup team last year and, while grokking through the infrastructure codebase, noticed that we had some “manifests of puppet and chef recipes”. When I asked why this was so, I got a response along the lines of puppet rocks at doing “x” stuff and chef was awesome at doing “y” stuff.

In another company I found a mix of puppet and ansible code in the infrastructure repo. Again I asked, and got a response along the lines of ansible was good at doing “x” stuff while puppet aced “y” stuff.

A few days ago I came across a company that was using chef, puppet and salt! At that point I decided this has just got to stop, and here is the rant :)

One of the reasons we roll out infrastructure as code today is that it allows us to have a single source of truth that prescribes how a company’s infrastructure is provisioned and how our applications are deployed. The Oxford English Dictionary defines prescribe as “STATE AUTHORITATIVELY or as a rule that (an action or procedure) should be carried out.”

Having bits and pieces of infrastructure coded using various configuration management tools eventually leads to attrition, where the infrastructure code is reduced from its authoritative stance to a bunch of opinions.

The configuration management space is a broad category with various tools providing complementary features and you sometimes need to daisy-chain a few together to achieve the required results. However, there is a lot of overlap and the tools should be selected carefully ensuring functionality duplication is reduced to the bare minimum. Having a git repo that includes scripts for deploying nginx with puppet and chef for the same infrastructure is not what you want to do.

A few things to note when selecting configuration management tools:

  • Consider the features you need: You might need a tool that does provisioning, orchestration and scales well on public/private cloud infrastructure. The more features you can effectively nail down with one tool the better.

  • Consider ease of use and learning curve: High-five to you if everyone defaults to using the selected tool because it is easy to use or easy to learn. Selecting a tool along the path of least resistance bolsters productivity.

  • Consider the culture of the company: Each tool embraces a slightly different philosophy and paradigm. Select a tool that is in sync with the culture you are trying to elicit.

  • Consider the skills available in the company: Selecting a tool built around a language that developers and operations folk are comfortable with promotes convergence of ideas and practices amongst the two camps.

  • Consider the hiring challenge: Engineers come and go. Being able to easily source talent skilled in the selected tool should not be overlooked.

  • Consider the level of support required and available for your selected tool: The best tools are transparent and get out of the engineer’s way. The less time spent figuring out (a.k.a. googling for) solutions the better.

  • Consider your company’s future and the selected tool’s roadmap: The communities around some tools are very active and are possibly evolving in a way that aligns with your projected infrastructure trajectory.

I love hacking on new tools and my favourite toy changes all the time. However, when it comes to business infrastructure, let’s play fair – one game at a time! ^_^

If you like this post, sign up for my upcoming book “DevOps 101, An Introduction to DevOps Culture and Practice”.

2013 – DevOps Retrospective

Usain Bolt is probably my favourite athlete of all time. I mean, it’s hard not to like a guy who keeps winning gold medals. He makes the men’s 100m and 200m races look so easy that anybody could pull them off – in dreamland!

2013 has flown by really fast and I find myself constantly gasping for air as though I had just finished a 50m dash. “Where did the year go?” I keep asking myself. No doubt I had lots of personal projects running in tandem, but I reckon not many more than in previous years.

One thing that stood out for me this year was the explosion in the DevOps community. Right from the beginning of the year we were hit by the release of Docker, an open source project to pack, ship and run any application as a lightweight container, from Docker Inc., formerly known as dotCloud. The unveiling of Docker by Solomon Hykes at PyCon 2013 sparked a fire that got everyone re-thinking how we make application deployments truly ubiquitous. Several Docker-based projects emerged, including Dokku – a Docker-powered PaaS implementation.

Cloud-based projects have been on the increase. Many traditional shops moved their infrastructure into public or private clouds. AWS, leading the public cloud race, has continued to roll out new services to support cloud deployments. On the private cloud front, the OpenStack project gained a lot of traction. The Docker fever also caught on in the OpenStack cloud – the OpenStack Havana release introduced native support for Docker containers.

Docker, in collaboration with Red Hat, now runs on unmodified Linux kernels and all major distributions. RHEL 6.5, released in November, supports Docker, and the upcoming RHEL 7 release is billed to provide further integration.

Vagrant, the popular tool for managing development environments, expanded the virtualization platforms it supports. VMware plugins are available that significantly improve performance. A Vagrant AWS plugin also makes it easy to provide a seamless development workflow for developers on EC2 instances, and plugins are available for other service providers including Rackspace, DigitalOcean, Joyent and more. The current release of Vagrant also includes a Docker provisioner.

Configuration management tools have not been left behind. Puppet Labs and Chef (formerly known as Opscode) have both received rounds of funding. More recent tools like Ansible and Salt have become increasingly popular on customer sites.

Perhaps the biggest news is the DevOps revolution in the networking world. Software Defined Networking (SDN), which decouples the decision-making (control) plane from the forwarding plane on networking gear, makes it possible to automate the management of traditional networking devices. OpenFlow is the first communications protocol that implements the SDN architecture. Google’s Urs Hölzle stated last year that “the idea behind this advance is the most significant change in networking in the entire lifetime of Google.”

In November, Cisco launched the Nexus 9000 family of switches with embedded support for Puppet, Chef and the OpenStack networking plugin. This move signals that in the coming year, DevOps folk will most likely be swimming deeper in the networking arena.

This is just a brief overview of some of the developments in the DevOps Community that I came across this year. While we have more tools available to us, keeping up with the times is challenging. Hiring and addressing the skill-gap is becoming more and more critical as many companies seek to improve their software delivery process.

My upcoming book DevOps 101, An Introduction to DevOps Culture and Practice attempts to help professionals and companies bridge this growing gap. You can sign up here to be notified when the book is launched.

It will be interesting to watch how things unfold in 2014. One thing is certain though – we will not be hanging up our running shoes anytime soon.

Wake Up the Sun

Peering out through my son’s bedroom window, there was nothing out there but utter pitch darkness. Alas, the time was only 16:30. I sighed to myself – it was that time of the year again.

I never really liked winter. The temperature had dropped below freezing. You felt the chill in your knuckles, up your spine, under your feet, and the cold whistling air all over your face. I dreaded having to head out every morning and walk 400 yards to the train station on my commute to work. With my teeth chattering and body shivering as I trudged along the slippery snow path, I would contemplate whether my wife and I had made a good decision to migrate to England 6 years ago. I would often yearn for the sunny tropical climate I was used to all year round in my home country. But I always said to myself, “the winter will soon be over and summer will be back again”. Yes indeed – in only 3 months’ time.

I could hardly wait to run back home to my lovely wife and 3-year-old son every day. Their love and energy more than provided the warmth I needed to make it through yet another day. “Daddy, I want to go to the shops!” – the voice of my son cut through my self-loathing thoughts of despair, jolting me back into the present. I told him it was too late to go to the shops now, as the sun had gone down and it was time for bed.

My son, boisterous with energy, was always ready for an adventure in a heartbeat. As far as he was concerned, winter was simply fun time. He was not riddled with the many thoughts that shuttle through a parent’s mind on a school weeknight. I was just preoccupied with wrapping up the day and getting him into bed. Thankfully, there was one good thing about winter; the early fall of night always made it easier to convince him it was time to shut down for the day.

Thinking that the discussion was over and done with, I proceeded to draw the curtains and pick up the toys littered across the bedroom floor. Then he said, “Daddy, all we have to do is wake up the sun and then we can go to the shop”. In a familiar reflex manner, I prepped myself and thought how I might do the fatherly thing of seizing the opportunity to explain a bit of science to this curious toddler, breaking down how the celestial bodies played together, la-di-da-di-da. After all, I had been around long enough to know that his line of reasoning was misaligned and would need a bit of tweaking.

Then in a split-second his words hit me like a bolt of lightning – Wake up the sun!

In my son’s world there was nothing unfathomable about that idea; the sun could absolutely wake up and we could indeed hit the shops with the sun shining on us right at that moment. His paradigm had not been seared with earthly “experience”. Entertaining such a thought was normal, entirely possible and totally expected.

Several years of working as an Operations Engineer had taught me a lot about a related idea – the follow-the-sun model. I had supported various Linux systems in a 24-hour follow-the-sun model, where we would do a shift and hand over to a team in a different time zone to continue providing support to customers round the clock. I was pretty intimate with the practice of following the sun, perhaps too well. But the concept of waking up the sun was alien to me.

Waking up the sun is all about viewing the world right side up. Envisioning the world the way it should be. Bravely going after your dreams, knowing full well that your paradigm is indeed normal, entirely possible and firmly within reach.

That has been the mantra for the startup journey my wife and I embarked on over a year ago. Well, to be honest, mostly my wife and a little bit of me. You see, my wife is totally awesome. Yes, I said so myself. She has one challenge though, I have to admit – she is a first-class citizen of the species of people branded “Creatives”. Let me spell it out: this means she can do everything – from photography, visual design, sewing, basket-making, fund-raising, business planning, public speaking, authoring, script-writing, guitar-strumming, hair-making, coding, farming, interior decor, perfume-making and mixing awesome body creams and soaps, to a whole slew of other things too numerous to mention.

So after much deliberation, we finally settled on an industry that was overdue for a shake-up. It was time to wake up the sun. Driven by the pure desire to set the record straight, we recently launched our first startup – Sheatruth.com.

We are truly excited about Sheatruth and the potential it holds. The journey so far has been chock-full of challenges, but the desire to shine a light on the personal body care industry was too great a dream to let go, and too precious an opportunity to pass up. We firmly believe that people everywhere are worthy of true luxury, and we want to put the care back into personal care products.

I wish I could tell you that my son and I went ahead and did go shopping that winter night, but the fairy tale didn’t end that way, I’m afraid. However, my son taught me an important lesson I hope never to forget – rather than follow the sun, WAKE UP THE SUN!

I don’t know where you are in your journey, but we challenge you to look at the world around you again, this time with fresh eyes, the eyes of a three-year-old, and go wake up the sun.

Isn’t he just the cutest ever?

DevOps101 Coming Soon

I have ignored my blog for quite a while; not intentionally though. So much has been going on personally and professionally, that blogging has been totally squeezed out. But I am fixing that.

So many tools and approaches have grown in the DevOps community. In the coming days I will be talking about them, so do stay tuned.

One thing that has kept me busy is my upcoming book on DevOps – DevOps 101. It’s taken me quite a while to get it finished, but I am delighted that it will be out in a couple of weeks. I will be talking more about this in some upcoming posts, so do please check back.

That’s all I have to say for now. Thanks and have a great time!

Deploying Node.js, Nginx, Upstart, Monit and Redis on EC2 With Puppet and Vagrant

In a previous post I wrote about automating deployments on AWS cloud instances with Vagrant and Puppet. Today I will describe how we deploy Node.js with nginx, redis, upstart and monit on EC2 using Vagrant and Puppet.

Our developers are rewriting an existing application in Node.js and we needed a consistent environment to play with, one which would mirror the target production environment as much as possible. So we picked our technology stack as follows:

  • node.js – obviously
  • nginx – as a reverse proxy server. We are using nginx 1.5.0, which has websocket support. You could also use the stable nginx version that comes with your Linux distribution in combination with varnish or haproxy to handle websocket connection requests. If you are using nginx development builds, make sure you upgrade to nginx 1.5.0 or 1.4.1 to address the recent buffer overflow vulnerability.
  • upstart – to daemonize the node app
  • monit – to proactively check that the app is actually humming nicely.
  • redis – for handling session data

Our approach is simply to start with a basic configuration which works well and fine-tune as we go along. The description below doesn’t apply strictly to Node.js deployments; it could easily be adapted for other web apps/frameworks, e.g. python/django/gunicorn behind nginx.

Setup

First of all, clone the nodejs_deployment repo as follows –

clone the node.js deployment repo
$ git clone git@github.com:pidah/nodejs_deployment.git

Switch into the nodejs_deployment directory and have a look at the contents –

review the contents of nodejs_deployment
$ cd nodejs_deployment/
$ ls
README.md Vagrantfile app.js      package.json    puppet

Node.js app

app.js shown below is a very simple node application listening on port 3000. The package.json file contains the node app dependencies. This app only depends on express.

node.js app
$ cat app.js
var express = require("express");
var app = express();
app.use(express.logger());

app.get('/', function(request, response) {
  response.send('My awesome node app!');
});

var port = process.env.PORT || 3000;
app.listen(port, function() {
  console.log("Listening on " + port);
});

$ cat package.json
{
  "author": "Peter Idah",
  "name": "awesome-app",
  "version": "0.0.1",
  "dependencies": {
    "express": "~3.1.0"
  }
}

Vagrantfile

The Vagrantfile holds the vagrant configuration –

Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu_aws"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
  config.vm.synced_folder ".", "/vagrant", id: "vagrant-root"
  config.vm.provider :aws do |aws, override|
    aws.keypair_name = "development"
    override.ssh.private_key_path = "~/.ssh/development.pem"
    aws.instance_type = "t1.micro"
    aws.security_groups = "development"
    aws.ami = "ami-c5afc2ac"
    override.ssh.username = "ubuntu"
    aws.tags = {
      'Name' => 'Nodejs App',
     }
  end

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file  = "init.pp"
    puppet.options = ["--fileserverconfig=/vagrant/puppet/fileserver.conf"]
  end
end

The Vagrantfile will need to be updated with your AWS details. I am using a custom AMI with Puppet baked in. You then need to set up your AWS keys in your local ~/.profile file as follows –

Configuring AWS Credentials in ~/.profile
 export AWS_ACCESS_KEY="AKXXXXXXXXXXXXXXX"
 export AWS_SECRET_KEY="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

Then source the ~/.profile to make the variables available in the current shell session

source the ~/.profile file
 source ~/.profile

Every AWS EC2 instance has an associated SSH key pair. I keep my SSH private key in ~/.ssh/development.pem.
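
One thing worth noting: ssh will refuse to use a private key whose permissions are too open, so it is a good idea to lock the file down first (a one-off step, assuming the key path above) –

restrict permissions on the ssh private key
$ chmod 600 ~/.ssh/development.pem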

For more details on the Vagrantfile, have a look at automated deployments of ec2 instances with vagrant and puppet.

puppet configuration overview

The puppet configuration layout is shown below –

puppet manifests and files
$ ls puppet/manifests
init.pp       nginx.pp    redis.pp
monit.pp  nodejs.pp   upstart.pp

$ cat puppet/manifests/init.pp
include nodejs
include monit
include nginx
include upstart
include redis

$ ls puppet/files
app.conf  deploy_node.sh  monitrc     nginx.conf

$ cat puppet/fileserver.conf
# Puppet template files directory
[files]
    path /opt/node/puppet/files
    allow *

Each component is in a separate manifest file, and we include all of them in init.pp as shown above. puppet/fileserver.conf tells Puppet to serve files from our custom mount point /opt/node/puppet/files. More information on this is available on the Puppet Labs website. Let’s go through each manifest configuration file:

node.js config

The following shows the node.js puppet configuration files

node.js puppet configuration
$ cat puppet/manifests/nodejs.pp
class nodejs {

    file { "/opt/node":
        ensure  => "link",
        target  => "/vagrant",
        force   => true,
    }

    exec { "apt-get update":
        path => "/usr/bin",
        require => File["/opt/node"]
    }

    $nodejs_deps = [ "python-software-properties", "g++", "make", "git", "vim" ]
        package { $nodejs_deps:
        ensure => installed,
        require => Exec["apt-get update"],
    }

    file { "/tmp/deploy_node.sh":
        ensure  => present,
        mode    => '0775',
        source  => "puppet:///files/deploy_node.sh",
        require => Package[$nodejs_deps]
    }

    exec { "install_node":
        command => "/bin/bash /tmp/deploy_node.sh",
        path => "/usr/bin:/usr/local/bin:/bin:/usr/sbin:/sbin",
        timeout => 0,
        unless => "ls /usr/local/bin/node ",
        require => File["/tmp/deploy_node.sh"]
    }

    exec { "npm_install":
        cwd => "/opt/node",
        command => "npm install",
        path => "/usr/bin:/usr/local/bin:/bin:/usr/sbin:/sbin",
        require => Exec["install_node"]
    }

}

$ cat puppet/files/deploy_node.sh
#!/bin/bash -x
version=0.10.5
mkdir /tmp/nodejs && cd $_
wget -N http://nodejs.org/dist/v${version}/node-v${version}.tar.gz
tar xzvf node-v${version}.tar.gz && cd node-v${version}
./configure
make install

The app is deployed to /opt/node, which is sym-linked to /vagrant on the EC2 instance. The node.js package provided by Ubuntu is really old, so we decided to build node.js from source as shown in the deploy_node.sh file above. The make build takes longer than 5 minutes, which is the default timeout for the puppet exec resource, so I set timeout => 0 in exec {"install_node":} above to prevent a timeout. Subsequent vagrant provision runs just do a quick check (the unless => "ls /usr/local/bin/node" guard) to confirm that node is already installed. Alternatively, you could use the node.js packages available at https://launchpad.net/~chris-lea/+archive/node.js/ – a sketch of that approach follows.
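
This is a minimal sketch of that alternative, assuming the chris-lea PPA still publishes a package named nodejs; you could run these commands by hand, or wrap them in an exec/package pair similar to the add_nginx_repo exec in the nginx manifest further down –

install node.js from the chris-lea PPA instead of building from source
$ sudo add-apt-repository ppa:chris-lea/node.js --yes
$ sudo apt-get update
$ sudo apt-get install -y nodejs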

upstart config

upstart configuration file
$ cat puppet/manifests/upstart.pp
class upstart {

    file { "/etc/init/app.conf":
        ensure  => file,
        source  => "puppet:///files/app.conf",
        require => Class["Nodejs"],
    }

    service { 'app':
        ensure => running,
        provider => 'upstart',
        require => File['/etc/init/app.conf'],
    }

}

  $ cat puppet/files/app.conf
description "node.js app server"
author      "Peter Idah"

env PROGRAM_NAME="awesome-app"
env FULL_PATH="/opt/node"
env FILE_NAME="app.js"
env NODE_PATH="/usr/local/bin/node"

start on startup
stop on shutdown

script

    echo $$ > /var/run/$PROGRAM_NAME.pid
    cd $FULL_PATH
    exec $NODE_PATH $FULL_PATH/$FILE_NAME >> $FULL_PATH/node_app.log 2>&1
end script

pre-start script
    # Date format same as (new Date()).toISOString() for consistency
    echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> $FULL_PATH/node_app.log
end script

pre-stop script
    rm /var/run/$PROGRAM_NAME.pid
    echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> $FULL_PATH/node_app.log
end script

The upstart.pp manifest above ensures the app is running, with the configuration sourced from /etc/init/app.conf. This file daemonizes the application and specifies the process ID file location, the log file and the start/stop control scripts.
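
Once the instance is provisioned you can also drive the job by hand with the standard Upstart commands on the instance; a quick sanity check might look like this –

check the upstart job by hand
$ sudo status app
$ sudo restart app
$ tail /opt/node/node_app.log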

monit config

monit config file
$ cat puppet/manifests/monit.pp
class monit {

    package { "monit":
        ensure  => latest,
        require => Class["Nodejs"],
    }

    file { '/etc/monit/monitrc':
        ensure  => present,
        mode    => '0600',
        owner   => 'root',
        group   => 'root',
        source  => "puppet:///files/monitrc",
        notify  => Service['monit'],
        require => Package["monit"],
    }

    service { 'monit':
        ensure     => running,
        enable     => true,
        hasrestart => true,
        require    => File['/etc/monit/monitrc'],
        subscribe  => File['/etc/monit/monitrc'],
        notify     => Service['app'],
    }

}

$ cat puppet/files/monitrc
#!monit
set logfile /opt/node/monit_app.log

check process nodejs with pidfile "/var/run/awesome-app.pid"
    start program = "/sbin/start app"
    stop program  = "/sbin/stop app"
    if failed port 3000 protocol HTTP
        request /
        with timeout 2 seconds
        then restart
    if cpu > 80% for 10 cycles then restart
    if 3 restarts within 10 cycles then timeout

The monit.pp manifest above ensures the latest monit package is installed and running. It will also trigger a restart if it detects changes to the /etc/monit/monitrc config file. The monitrc config file sets the location of the monit log file, checks the process ID and specifies the path to the start/stop scripts for the node app. Then it gets a bit more interesting with a few rules – in the first rule, if a request to the root of the node app listening on port 3000 fails to respond within 2 seconds, monit will restart the application. The second rule checks for CPU usage above 80% for 10 cycles and triggers a restart of the app. The third rule stops monitoring (times out) if there are 3 restarts within 10 cycles. There are several monit rules you can add, but that’s the general idea.
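
To confirm that monit has picked up the nodejs check once puppet has run, you can query it directly on the instance using the standard monit sub-commands –

query monit on the instance
$ sudo monit summary
$ sudo monit status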

nginx config

nginx config file
$ cat puppet/manifests/nginx.pp
class nginx {

    exec { "add_nginx_repo":
            command => "add-apt-repository ppa:nginx/development --yes && apt-get update ",
            path => "/usr/bin",
            require => Class[Nodejs]
    }

    exec { "install_nginx":
        command => "/usr/bin/apt-get install nginx -y --force-yes",
        path => "/usr/bin:/usr/local/bin:/bin:/usr/sbin:/sbin",
        unless => "ls /usr/sbin/nginx ",
        require => Exec["add_nginx_repo"]
    }
    service { 'nginx':
        ensure     => running,
        enable     => true,
        hasrestart => true,
        require    => Exec['install_nginx'],
    }

    file { "/etc/nginx/nginx.conf":
        ensure  => present,
        mode    => '0644',
        source  => "puppet:///files/nginx.conf",
        notify => Service['nginx'],
        require => Exec["install_nginx"],
    }

}

$ cat puppet/files/nginx.conf
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;



events {
        worker_connections 768;
}

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;
        gzip_disable "msie6";


        include /etc/nginx/conf.d/*.conf;

upstream nodejs_backend {
    server 127.0.0.1:3000;
}

# the nginx server instance

    server {
        listen 80 default;
        server_name 127.0.0.1;
        access_log /opt/node/nginx_app.log;

    # pass the request to the node.js server with the correct headers
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://nodejs_backend/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        }
    }
}

The nginx.pp manifest above installs nginx 1.5.0 from the nginx ppa development repo, ensures the service is running as specified in nginx.conf and will trigger a restart if changes are detected in nginx.conf. The nginx.conf file above specifies that the web server should listen on port 80 and hand requests to the upstream nodejs_backend server listening on port 3000. The following three lines are required for websocket connections in nginx version 1.4+

nginx config file
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
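
A quick way to verify the nginx side on the instance is to test the configuration and hit the proxy on port 80; these are standard nginx and curl commands, nothing specific to this setup –

verify the nginx config and the proxy
$ sudo nginx -t
$ curl -i http://localhost/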

redis config

The redis config file below ensures that the redis-server is installed and running

redis config file
class redis {

    package { "redis-server":
            ensure  => installed,
            require => Class["Nodejs"],
    }

    service { 'redis-server':
        ensure     => running,
        enable     => true,
        hasrestart => true,
        require    => Package['redis-server'],
    }

}
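
If you want to confirm redis is actually up once puppet has run, the redis-cli client that ships with redis-server gives you a one-line check –

check that redis is responding
$ redis-cli ping
PONG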

Deploying the instance

As described in the README.md file, the next step is to launch the instance as follows

launching the instance
$ vagrant up --provider=aws

You can then use your regular vagrant commands as usual, e.g. to SSH into your instance

login to the instance
$ vagrant ssh

You can get the public DNS name of the instance with the vagrant ssh-config command:

get the public dns name of ec2 instance
$ vagrant ssh-config

Host nodejs
  HostName ec2-184-73-111-79.compute-1.amazonaws.com
  User ubuntu
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile "/Users/Peter/.ssh/development.pem"
  IdentitiesOnly yes
  LogLevel FATAL

You can then access the app from your browser, e.g. http://ec2-184-73-111-79.compute-1.amazonaws.com, or using curl as shown below:

testing the app with curl
$ curl http://ec2-184-73-111-79.compute-1.amazonaws.com
My awesome node app!$ 

That’s all folks. Thanks for dropping by!

Automated Deployment of AWS EC2 Instances With Vagrant and Puppet

The DevOps landscape is continuously evolving at an ever-increasing pace. Some of the tools and approaches I had gotten accustomed to have just gotten better, providing me with more options to play with, which is always a good thing.

One of the tools we use in our delivery process is Vagrant by Mitchell Hashimoto. It allows us to provide all our developers with an isolated environment to deploy and test applications on their local machines repeatedly and consistently.

By default, Vagrant runs on VirtualBox virtual machines. But recently, Vagrant has been extended to run on AWS EC2 as well as other environments. This move has already impacted our tool chain, and there are a number of other interesting features, like parallel provisioning on AWS, on the horizon.

So, to get started, you can get the latest version of vagrant (1.2.2 as of this post) at http://downloads.vagrantup.com/ for your operating system.

Then install the vagrant-aws plugin as follows –

Install vagrant-aws plugin
$ vagrant plugin install vagrant-aws

The next step is to set up your local environment with AWS credentials. Many projects will likely commit their Vagrant configuration to a version control repository, so it’s best to reference your local environment for the required keys rather than hard-coding them. To do so, add the following to your local user’s ~/.profile file –

Configuring AWS Credentials on your local machine
 export AWS_ACCESS_KEY="AKXXXXXXXXXXXXXXX"
 export AWS_SECRET_KEY="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

Then we need to source the ~/.profile to make the variables available in the current shell session

source the ~/.profile file
 source ~/.profile

Every AWS EC2 instance has an associated SSH key pair. I keep my SSH private key in ~/.ssh/development.pem.

Now that we have our local environment in shape, we need to set up our vagrant configuration file – the Vagrantfile. Copy this example Vagrantfile to your project root and modify it with your details accordingly –

Example Vagrantfile for an AWS EC2 instance
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu_aws"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
  config.vm.synced_folder "../.", "/vagrant", id: "vagrant-root"
  config.vm.provider :aws do |aws, override|
    aws.keypair_name = "development"
    override.ssh.private_key_path = "~/.ssh/development.pem"
    aws.instance_type = "t1.micro"
    aws.security_groups = "development"
    aws.ami = "ami-c5afc2ac"
    override.ssh.username = "ubuntu"
    aws.tags = {
      'Name' => 'Web App',
     }
  end

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file  = "init.pp"
    puppet.options = ["--fileserverconfig=/vagrant/puppet/fileserver.conf"]
  end
end

Then run the following to launch the instance from your project root –

launch EC2 instance
$ vagrant up --provider=aws

And there you have it! You can SSH into the instance with the following –

ssh into the EC2 instance
$ vagrant ssh
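
A couple of other standard vagrant sub-commands are handy with the AWS provider: re-running puppet against a live instance after you change the manifests, and tearing the instance down when you are done –

re-provision and tear down the instance
$ vagrant provision
$ vagrant destroy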

Enjoy!

Hi!

Hi! My name is Peter Idah. I am a Linux DevOps Engineer currently based in London.

I’m setting up this blog for a few reasons – to help me keep track of things I am working on, things I am learning and to share this experience with the hope that the information will be handy to someone else.

I will try to keep the site updated as much as possible, so if you don’t find anything interesting today, please come back again.

Thanks for dropping by !