Configure puppet to manage a cluster

In part 1 of this discussion, Puppet was installed and tested in a VirtualBox test scenario. Now, in part 2, let's apply Puppet to a Linux cluster. This article covers configuring Mongrel, installing Puppet on the client nodes with autosigned certs, and adding the Puppet dependencies to our kickstart file.

Install Mongrel:

We will need a more robust web service than the standard WEBrick, so let's install Mongrel along with Puppet. Mongrel (or Passenger) is an alternative you will want when deploying Puppet to more than a few nodes.

yum install puppet puppet-server rubygem-mongrel httpd mod_ssl

Confirm these ports are open: 8140 (the Apache front end) and 18140-18143 (the mongrel back ends)
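A quick sketch for checking whether anything is already bound to those ports (using the standard net-tools netstat; a port reported "free" is available for the new services):

```shell
# Check the Apache front-end port and the four mongrel back-end ports;
# "in use" means some process is already listening there
for port in 8140 18140 18141 18142 18143; do
    if netstat -tln 2>/dev/null | grep -q ":${port} "; then
        echo "port ${port}: in use"
    else
        echo "port ${port}: free"
    fi
done
```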

Uncomment the PUPPETMASTER_PORTS line in /etc/sysconfig/puppetmaster:

PUPPETMASTER_PORTS=( 18140 18141 18142 18143 )

Configure Mongrel with Apache:

# /etc/httpd/conf.d/puppet.conf:
Listen 8140

# One BalancerMember per mongrel back end, matching PUPPETMASTER_PORTS
<Proxy balancer://puppetmaster>
        BalancerMember http://127.0.0.1:18140
        BalancerMember http://127.0.0.1:18141
        BalancerMember http://127.0.0.1:18142
        BalancerMember http://127.0.0.1:18143
</Proxy>

# Modify the fully qualified domain name for your server in SSLCertificateFile and SSLCertificateKeyFile

<VirtualHost *:8140>
        SSLEngine On
        SSLCipherSuite SSLv2:-LOW:-EXPORT:RC4+RSA
        SSLCertificateFile /var/lib/puppet/ssl/certs/
        SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/
        SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem
        SSLCACertificateFile /var/lib/puppet/ssl/ca/ca_crt.pem
        SSLCARevocationFile /var/lib/puppet/ssl/ca/ca_crl.pem
        SSLVerifyClient optional
        SSLVerifyDepth 1
        SSLOptions +StdEnvVars

        RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
        RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

        # Balancer status page; excluded from proxying below
        <Location /balancer-manager>
                SetHandler balancer-manager
                Order allow,deny
                Allow from all
        </Location>

        ProxyPass /balancer-manager !
        ProxyPass / balancer://puppetmaster/
        ProxyPassReverse / balancer://puppetmaster/
        ProxyPreserveHost On

        ErrorLog /var/log/httpd/balancer_error_log
        CustomLog /var/log/httpd/balancer_access_log combined
        CustomLog /var/log/httpd/balancer_ssl_requests "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
</VirtualHost>

When you restart httpd and puppetmaster, you should see multiple processes:

puppet    4996     1  0 20:28 ?        00:00:00 /usr/bin/ruby /usr/sbin/puppetmasterd --servertype=mongrel --masterport=18140 --pidfile=/var/run/puppet/
puppet    5043     1  0 20:28 ?        00:00:00 /usr/bin/ruby /usr/sbin/puppetmasterd --servertype=mongrel --masterport=18141 --pidfile=/var/run/puppet/
puppet    5090     1  0 20:28 ?        00:00:00 /usr/bin/ruby /usr/sbin/puppetmasterd --servertype=mongrel --masterport=18142 --pidfile=/var/run/puppet/
puppet    5137     1  0 20:28 ?        00:00:00 /usr/bin/ruby /usr/sbin/puppetmasterd --servertype=mongrel --masterport=18143 --pidfile=/var/run/puppet/
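That listing can be reproduced at any time with ps; bracketing the first letter of the pattern is a common trick that keeps the matching process itself out of the output:

```shell
# List the mongrel puppetmasterd workers (one per balanced port)
ps -ef | awk '/[p]uppetmasterd/'
```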

Add the puppet server name to the admin node's line in /etc/hosts on the puppetmaster and all clients: admin-node puppet

Not having /etc/hosts exactly correct seems to cause long delays. I tried changing the puppet server name to puppet-anything and it wouldn't work, so I settled on the exact word "puppet", relying on /etc/hosts to pick up the correct puppet server regardless of DNS.
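For example, assuming the admin node's IP is 10.1.1.1 (a placeholder), the line on the puppetmaster and every client would look like:

```
# /etc/hosts (10.1.1.1 is a placeholder for your admin node's IP)
10.1.1.1    admin-node puppet
```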

Set the puppetmaster to autosign certs (a great feature; key autosigning goes quickly). Note that '*' signs any request, which is only acceptable on a trusted cluster network:

echo '*' > /etc/puppet/autosign.conf
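If signing absolutely everything is too open, autosign.conf also accepts hostname globs; a sketch, assuming a hypothetical cluster domain:

```
# /etc/puppet/autosign.conf -- sign only the cluster's own hosts
# (cluster.example.com is a placeholder domain)
*.cluster.example.com
```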

Install Puppet on the clients; in this example, using pdsh against the group "nodes":

pdsh -g nodes "yum -y install puppet"

pdsh -g nodes "puppetd --waitforcert 30 --test"

Then check on the puppetmaster:

 ls -laR /var/lib/puppet/ssl/ | grep pem

You should see your nodes' certificates added, and also evidence of this in /var/log/.
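Signed client certificates land under the master's ssldir (which defaults to /var/lib/puppet/ssl); a quick way to count them:

```shell
# Count signed client certificates on the puppetmaster;
# prints 0 if none have been signed yet (or the directory is absent)
ls /var/lib/puppet/ssl/ca/signed/ 2>/dev/null | wc -l
```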

Add Puppet's rpm dependencies to the kickstart file:

# puppet.ks:
/sbin/chkconfig --level 345 puppet on
/bin/echo "$PUPPETIP puppet" >> /etc/hosts
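In context, a minimal sketch of how these pieces fit into the kickstart file ($PUPPETIP is assumed to be substituted by whatever templating generates the kickstart):

```
%packages
# pulls in ruby and facter as rpm dependencies
puppet

%post
/sbin/chkconfig --level 345 puppet on
/bin/echo "$PUPPETIP puppet" >> /etc/hosts
```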

Next article: managing user accounts, passwords, and ssh keys with Puppet