Using puppet on Sabayon Linux

I like puppet and I like Sabayon, but out of the box they don’t play nicely together. Sabayon is a Gentoo derivative, so to puppet it looks like a Gentoo system, which causes it to use the Gentoo providers for package and service resources. Unlike a stock Gentoo install, though, Sabayon hosts use systemd and a binary package manager (entropy). While entropy is compatible with portage, I don’t want to compile things from source on my Sabayon boxes.

I’ve put together a puppet module which adds support for Sabayon by doing the following things:

  • Overriding the operatingsystem fact to distinguish between Gentoo and Sabayon (osfamily still describes it as Gentoo)
  • Adding an init fact which reports whether systemd is running
  • Subclassing the Systemd service provider to make it the default when the init fact reports systemd is in use
  • Adding a has_entropy fact that checks whether equo is available
  • Adding an Entropy package provider that’s the default for systems where has_entropy is true (and when operatingsystem is Sabayon, but that’s mostly a hack to override the portage default)
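To illustrate the fact-based approach, a has_entropy-style fact boils down to checking whether the equo binary exists on the PATH. The helper below is a minimal sketch of that logic in plain Ruby (the names are illustrative, not the module’s actual code); in the module itself the check would be wrapped in Facter’s DSL, as shown in the trailing comment.

```ruby
# Minimal sketch (illustrative names) of the logic behind a has_entropy fact:
# true when the given command exists as an executable somewhere on PATH.
def command_available?(cmd)
  ENV.fetch('PATH', '').split(File::PATH_SEPARATOR).any? do |dir|
    candidate = File.join(dir, cmd)
    File.file?(candidate) && File.executable?(candidate)
  end
end

# Inside the module this would be registered with Facter, roughly:
#   Facter.add(:has_entropy) do
#     setcode { command_available?('equo') }
#   end
```

Keeping the check this cheap matters because facts are evaluated on every puppet run.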

To use it, just add the module to your environment, and pluginsync will do all the hard work.

Known issue: the entropy provider doesn’t behave well when it tries to install a package with a new license that isn’t already listed in license.accept. Equo presents a prompt to accept the license in this case, which fails since there’s no stdin to read the answer from. The prompt is then reprinted in a tight loop while puppet captures the output, filling up /tmp. I’ve filed a bug report to change the equo behaviour. Once this issue is resolved I’ll package the module up properly and submit it to the forge, but in the meantime it’s available if anyone finds it useful.
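Until the equo behaviour changes, one workaround is to pre-accept licenses before puppet runs, so equo never needs to prompt. A hedged sketch of that idea (the license name is illustrative, and on a real host the file lives in entropy’s configuration tree rather than the current directory):

```shell
#!/bin/sh
# Idempotently append a license name to license.accept so equo won't prompt.
# Both LICENSE and the file location here are illustrative.
LICENSE='NVIDIA-r2'
FILE='license.accept'   # on a real host: under entropy's config directory
grep -qx "$LICENSE" "$FILE" 2>/dev/null || echo "$LICENSE" >> "$FILE"
```

The grep guard keeps the script safe to run repeatedly, e.g. from puppet itself ahead of the package resource.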

Puppetenvsh Mcollective Agent

There is no shortage of ways to set up Puppet and to manage how code is deployed. Like many people, I’m using git to store my puppet code. Perhaps less commonly, I have multiple puppetmasters. For me these solve two problems: resilience, in case one master needs to be taken offline, and geographic diversity, which means I can target puppet runs at the nearest master and save a bit of time during runs. This does, however, raise a different problem: how to keep the masters in sync so that each serves the same content. My answer is puppetenvsh, an MCollective agent which is triggered via a git pre-receive hook and updates all the puppet environments on all masters concurrently.
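To make the flow concrete, the hook side can be as small as a single mco call. This is a sketch under assumptions, not the actual hook: the puppetenvsh action name and the timeout are guesses at the interface, and a real hook would likely do more with the pushed refs than discard them.

```shell
#!/bin/sh
# Hypothetical pre-receive hook sketch: after a push, ask every master to
# refresh its puppet environments via the puppetenvsh MCollective agent.
# The action name ('update') and the timeout are assumptions.
cat > /dev/null   # consume the ref lines git feeds the hook on stdin
mco rpc puppetenvsh update --timeout 60
```

Because MCollective fans the request out to all matching nodes at once, every master updates concurrently rather than one after another.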
