Puppetenvsh MCollective Agent

There is no shortage of ways to set up Puppet and to manage how code is deployed. Like many people, I'm using git to store my puppet code. Perhaps less commonly, I have multiple puppetmasters. For me these solve two problems: resilience, in case one master needs to be taken offline, and geographic diversity, which means I can target puppet runs at the nearest master and save a little time during each run. This does, however, raise a different problem: how to keep the masters in sync so that each serves the same content. My answer is puppetenvsh, an MCollective agent which is triggered from a git post-receive hook and updates all the puppet environments on all masters concurrently.

The plugin

Puppetenvsh compares the contents of your puppet dynamic environments directory against the branches in your puppet git repository and keeps the two in sync. It exposes agent actions to add a new directory environment when a branch is created in git, remove one when its branch is deleted, and update environments when branches change. There's also an update-all action that does all of the above in one go, ensuring your directory environments match what's stored in git.

It supports modules kept directly in the main puppet repository, modules that live in git submodules, and external modules managed by librarian-puppet.
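If you take the librarian-puppet route, the external modules are declared in a Puppetfile at the top of the repository. A minimal sketch (the module names here are purely illustrative):

forge "https://forge.puppetlabs.com"

mod "puppetlabs/stdlib"
mod "puppetlabs/firewall"

librarian-puppet will then resolve these, along with their dependencies, into the environment's modules directory whenever the environment is updated.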

For example, updating all environments on all puppetmasters immediately:

mco puppetenvsh update-all -C profiles::puppetmaster

Updating a single development branch on a particular master for testing:

mco puppetenvsh update develop_myfeature -I /master1/

Setting up MCollective to run puppetenvsh

First up, you'll need to install it. I have pre-built packages for Sabayon in the packages.sihnon.net repository and ebuilds for Gentoo in my gentoo-overlay. These are one-size-fits-all packages, so they should be installed on both your puppetmasters and any machines you run the mco client from.

Packages for Red Hat or Debian systems can easily be built using the mco plugin package command, which builds separate agent, client and common packages. The agent (and common) packages should be installed on your puppetmasters, and the client package on whichever machines you run mco from.
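For example, building the packages from a checkout of the plugin source (the directory name here is illustrative):

cd mcollective-puppetenvsh
mco plugin package .

This should produce separate mcollective-puppetenvsh-agent, -client and -common packages in the format native to the build machine.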

Once installed, the plugin may need some configuration on your puppetmaster machines. The README lists all the available parameters and what they do, but the simple ones you might need straight off are basedir and use_librarian. These tell puppetenvsh where your directory environments are located (if somewhere other than /etc/puppet/environments) and whether or not to use librarian-puppet to manage your modules.
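If you manage server.cfg by hand instead of with puppet, the same two settings look like this (the basedir value is just an example):

plugin.puppetenvsh.basedir = /home/puppet/environments
plugin.puppetenvsh.use_librarian = true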

If you’re using the puppetlabs-mcollective module to manage your mcollective configuration, you might want something like the following:

mcollective::plugin {
    'puppetenvsh':
        package => true;
}

mcollective::server::setting {
    'plugin.puppetenvsh.basedir':
        value => '/home/puppet/environments';

    'plugin.puppetenvsh.use_librarian':
        value => 'true';
}

One final gotcha: RHEL/CentOS 6 doesn't ship a working version of librarian-puppet which runs under ruby 1.8 and can download modules from the forge. The plugin therefore supports sourcing an alternate ruby193 environment before executing librarian, such as the one provided by the SCL. If you need to make use of this, also set the use_ruby193 setting to true and, if needed, tweak the value of ruby193_env to point at the appropriate script to source your alternate ruby environment.
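For example, with the SCL ruby193 collection installed in its default location (the path below is an assumption about your system):

plugin.puppetenvsh.use_ruby193 = true
plugin.puppetenvsh.ruby193_env = /opt/rh/ruby193/enable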

After restarting your mcollective daemons you should be all set; send a test command to make sure everything is working:

mco puppetenvsh list

Setting up Git to trigger updates

Once you've got mcollective able to update the dynamic environments, the next step is to set up a git repository hook to trigger this for you automatically. The first thing you'll need is to allow the user your git service runs as to send mcollective commands; at work I use Stash, which runs under the stash account.

I re-use my puppet SSL infrastructure for mcollective. Each user/tool has its own keypair, generated using the puppet cert generate command and manually copied into the right place. The public key is then added to git and pushed out to all mcollective nodes so they can receive commands. So I need to generate a new keypair for the git user:

sudo puppet cert generate stash

This generates the following files on the puppetmaster:

  • /var/lib/puppet/ssl/ca/signed/stash.pem
  • /var/lib/puppet/ssl/public_keys/stash.pem
  • /var/lib/puppet/ssl/private_keys/stash.pem

First up, we need to add the public key to the directory that's pushed out to all mcollective servers, so they can verify the commands sent by the stash user. This is the directory in puppet pointed at by the ssl_client_certs parameter passed to ::mcollective.

sudo cp /var/lib/puppet/ssl/public_keys/stash.pem ~/git/puppet/modules/site_mcollective/files/client_certs/stash.pem

Next we need to define an mcollective user for stash in puppet on the stash machine. This will configure the mcollective client config file and push the public key for us into the stash user’s home directory. For security, we’ll manually install the private key outside of puppet:

mcollective::user {
    'stash':
        certificate => 'puppet:///modules/site_mcollective/client_certs/stash.pem';
}

Now we do a puppet run on the stash machine to create the configuration files and directories, and copy the private key into place:

sudo puppet agent --test --tags site_mcollective
sudo scp puppetmaster:/var/lib/puppet/ssl/private_keys/stash.pem ~stash/.mcollective.d/credentials/private_keys/stash.pem

For security I use the actionpolicy authorisation plugin in mcollective to control which users can execute which commands. The stash user should most definitely be locked down to only execute puppetenvsh commands! I configure this with the following:

mcollective::actionpolicy::rule {
    'stash-puppetenvsh-update':
        agent    => 'puppetenvsh',
        callerid => 'stash',
        actions  => 'update-all',
        classes  => 'profile::puppet::master';
}
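Behind the scenes this becomes a line in an actionpolicy policy file on each master. Written by hand it would look something like this (fields are tab-separated in the real file; the cert= prefix is how the SSL security plugin identifies callers, and the default-deny first line is an assumption about your wider policy):

# /etc/mcollective/policies/puppetenvsh.policy
policy default deny
allow   cert=stash   update-all   *   profile::puppet::master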

Finally, we set up a git post-receive hook to execute the mcollective action after new changes have been pushed to Stash:

#!/bin/bash
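# Trigger an environment update on every puppetmaster after each push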
mco puppetenvsh update-all -C profile::puppet::master

Where next?

Now that code is pushed automatically from git straight to the puppet masters, it would be very easy for a bug (such as a mistakenly committed syntax error) to start affecting production machines during their puppet runs. It's a good idea to set up testing and validation to catch these problems before they happen.

Another git pre-receive hook could be added to run static analysis tools (such as puppet parser validate) and prevent any commits containing broken code from being accepted into git.
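Here's a minimal sketch of what such a hook might look like, assuming puppet is available on the git server's PATH; treat it as a starting point rather than a finished implementation:

#!/bin/bash
# Illustrative pre-receive hook: reject pushes containing manifests that
# fail `puppet parser validate`.
zero=0000000000000000000000000000000000000000
empty_tree=4b825dc642cb6eb9a060e54bf8d69288fbee4904  # git's well-known empty tree object
while read oldrev newrev refname; do
    [ "$newrev" = "$zero" ] && continue              # branch deletion: nothing to validate
    [ "$oldrev" = "$zero" ] && oldrev="$empty_tree"  # new branch: diff against the empty tree
    for file in $(git diff --name-only --diff-filter=ACM "$oldrev" "$newrev" -- '*.pp'); do
        tmpfile=$(mktemp --suffix=.pp)
        git show "$newrev:$file" > "$tmpfile"        # extract the pushed version of the manifest
        if ! puppet parser validate "$tmpfile"; then
            echo "Rejecting $refname: syntax error in $file" >&2
            rm -f "$tmpfile"
            exit 1
        fi
        rm -f "$tmpfile"
    done
done
exit 0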

Static analysis will catch obvious typos but won't help much with semantic issues, such as missing dependencies or unset variables. Moving up to the next level involves runtime testing. Fortunately we have tools such as rspec-puppet and Jenkins that can help with this, but they are definitely a topic for another post.

Alternatives

r10k has become very popular in the puppet community recently. It's a slightly more heavyweight tool which can do more complicated things, such as pulling in hieradata from a source separate from your main “control” repository and managing modules directly without involving external tools. It differs from puppetenvsh in the following main ways:

  • It's a tool that runs locally on a single puppet master, so you need to hook it into some kind of orchestration tool in order to trigger runs externally or to manage multiple machines at the same time.
  • It doesn't support git submodules, so it needs you to have already gone through the process of externalising your modules.
  • While it can pull modules from the Puppet Forge, it doesn't (currently) handle dependencies for you, so you need to add them to the Puppetfile manually (see the sketch below).
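For instance, because r10k performs no dependency resolution, a Puppetfile for it must spell the dependencies out by hand (the module names are illustrative; puppetlabs/apache depends on stdlib and concat):

mod "puppetlabs/apache"
# apache depends on these, but r10k won't pull them in for you:
mod "puppetlabs/stdlib"
mod "puppetlabs/concat"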
