Puppet: a Centralized Configuration Management System
Puppet is a configuration management tool with a typical client/server (C/S) architecture; since one server manages many clients, it is better described as a star topology. All Puppet clients communicate with the same Puppet server. By default, each client connects to the server every 30 minutes, downloads the latest configuration, and configures the machine strictly according to it. After the run completes, the client sends a report back to the server; if an error occurs, that is reported as well. The data flow of a typical Puppet deployment follows this pattern.
Stability: the biggest difference between Puppet and ad-hoc manual administration is that Puppet configurations are stable and idempotent, so you can apply them repeatedly. Once you update a configuration file, Puppet modifies the machine to match it, by default every 30 minutes, keeping the system state consistent with what the configuration declares. For example, if your configuration requires the ssh service to be enabled but the service has been stopped, Puppet detects the deviation and starts it again, bringing the system state back in line with the configuration. Puppet acts like a magician that makes a drifting system converge to the state the Puppet configuration file describes.
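The convergence idea can be sketched with a minimal manifest (the service name "sshd" is assumed, as on RHEL): the same manifest can be applied any number of times, and Puppet only acts when the actual state deviates from the declared one.

```shell
# Write a minimal manifest that declares the desired state of sshd.
cat > /tmp/ssh.pp <<'EOF'
service { "sshd":
  ensure => running,   # start it if stopped; do nothing if already running
  enable => true,      # keep it enabled at boot
}
EOF
# puppet apply /tmp/ssh.pp   # run (and re-run) to converge the host
```

Running `puppet apply` twice in a row changes nothing the second time, which is exactly the stability property described above.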
You can use Puppet to manage a server's entire lifecycle, from provisioning to retirement. Unlike one-shot provisioning tools such as Sun's JumpStart or Red Hat's Kickstart, Puppet can keep servers in their desired state for years: configure them correctly once, and you no longer have to worry about them. Typically, Puppet users only need to install Puppet on a machine and let it run; Puppet does the rest.
How Puppet works, step by step:
1. The client Puppetd initiates an authentication request to the Master or uses a signed certificate.
2. The master confirms that the client is authorized.
3. The client Puppetd calls Facter. Facter detects some host variables, such as host name, memory size, and IP address. Puppetd sends the information to the server through an SSL connection.
4. The puppet master on the server looks up the client's host name, finds the corresponding node definition in the manifests, and parses that part (only the code the node references is parsed; the facts sent by Facter are available as variables). Parsing happens in stages: first the syntax is checked, and if there is a syntax error, an error is reported; otherwise parsing continues. The result is compiled into an intermediate "pseudo code" (the catalog), which is then sent to the client.
5. The client receives and runs the pseudo code.
6. While applying the catalog, if it references file content, the client requests the files from the file server.
7. The client determines whether a Report is configured. If the Report is configured, the execution result is sent to the server.
8. The server writes the execution result of the client to the log and sends it to the reporting system.
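The eight steps above can also be exercised on demand from a client; a hedged sketch using the lab host name defined below (a live, signed master is assumed):

```shell
# One-off foreground run against the master; --test combines onetime,
# verbose and no-daemonize, and --noop only reports what would change
# without modifying the system.
puppet agent --test --noop --server vm1.example.com
```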
Detailed configuration process:
System environment: RHEL 6.5, selinux and iptables disabled
Server: 172.25.254.1 vm1.example.com puppet master
Client: 172.25.254.2 vm2.example.com puppet agent
Client: 172.25.254.3 vm3.example.com puppet agent
Important: name resolution and time synchronization are required between the server and all clients; otherwise certificate verification fails.
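A pre-flight sketch of that requirement: each node must resolve the others' names (via DNS or /etc/hosts) and keep its clock in sync (e.g. with ntpdate), or the SSL handshake fails with certificate errors. The hosts-file fragment below mirrors the lab addresses above.

```shell
# Sample /etc/hosts entries for the lab; every node needs all three.
cat > /tmp/hosts.sample <<'EOF'
172.25.254.1 vm1.example.com
172.25.254.2 vm2.example.com
172.25.254.3 vm3.example.com
EOF
grep -c 'example.com' /tmp/hosts.sample   # one line per node
```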
Server:
Puppetmaster installation:
A. if the host can access the Internet
# yum localinstall -y rubygems-1.3.7-1.el6.noarch.rpm
Add the following entries to the yum Repository:
[puppet]
name=puppet
baseurl=http://yum.puppetlabs.com/el/6Server/products/x86_64/
gpgcheck=0
[ruby]
name=ruby
baseurl=http://yum.puppetlabs.com/el/6Server/dependencies/x86_64/
gpgcheck=0
# yum install -y puppet-server
B. If the host cannot access the Internet
Download the following packages in advance:
[root@vm1 update]# ls
facter-2.4.4-1.el6.x86_64.rpm             ruby-augeas-0.4.1-3.el6.x86_64.rpm
hiera-1.3.4-1.el6.noarch.rpm              rubygem-json-1.5.5-3.el6.x86_64.rpm
puppet-3.8.1-1.el6.noarch.rpm             rubygems-1.3.7-5.el6.noarch.rpm
puppet-dashboard-1.2.23-1.el6.noarch.rpm  ruby-shadow-2.2.0-2.el6.x86_64.rpm
puppet-server-3.8.1-1.el6.noarch.rpm
[root@vm1 update]# yum localinstall -y puppet-server-3.8.1-1.el6.noarch.rpm puppet-3.8.1-1.el6.noarch.rpm facter-2.4.4-1.el6.x86_64.rpm hiera-1.3.4-1.el6.noarch.rpm rubygem-json-1.5.5-3.el6.x86_64.rpm ruby*
The /etc/puppet configuration directory:
The organizational structure is as follows:
|-- puppet.conf       # master configuration file; run puppet --genconfig for details
|-- fileserver.conf   # file server configuration
|-- auth.conf         # authentication configuration
|-- autosign.conf     # automatic certificate-signing configuration
|-- tagmail.conf      # mail configuration (sends error reports)
|-- manifests         # manifest directory (puppet first reads the .pp files here, starting with site.pp)
|   |-- nodes
|   |   `-- puppetclient.pp
|   |-- site.pp       # defines puppet-wide variables and default configuration
|   `-- modules.pp    # loads class modules (e.g. include syslog)
`-- modules           # module definitions
    `-- syslog        # the syslog module as an example
        |-- files
        |-- manifests
        |   `-- init.pp    # class configuration
        `-- templates      # module template directory
            `-- syslog.erb # ERB template
The first code Puppet executes is /etc/puppet/manifests/site.pp, so this file must exist, and all other code is reached through it.
[root@vm1 puppet]# touch /etc/puppet/manifests/site.pp   # the puppet master will not start without this file; its contents are defined later
[root@vm1 puppet]# service puppetmaster start            # start the puppet master
[root@vm1 puppet]# netstat -antlp | grep ruby
tcp   0   0 0.0.0.0:8140   0.0.0.0:*   LISTEN   1863/ruby
Client:
You only need to install puppet. The installation method is the same as that on the server side:
A.
# yum install -y puppet
B.
[root@vm2 ~]# yum localinstall -y puppet-3.8.1-1.el6.noarch.rpm facter-2.4.4-1.el6.x86_64.rpm hiera-1.3.4-1.el6.noarch.rpm ruby*
Connect the puppet client to the puppet master:
[root@vm2 ~]# puppet agent --server vm1.example.com --no-daemonize --verbose
Info: Creating a new SSL key for vm2.example.com
Info: Caching certificate for ca
Info: Creating a new SSL certificate request for vm2.example.com
Info: Certificate Request fingerprint (SHA256):
5C:72:77:D8:27:DF:5A:DF:34:EF:25:97:5A:CF:25:29:9F:58:83:A2:61:57:D9:20:7B:1E:C0:36:75:9D:FB:FC
The client sends a certificate verification request to the master and waits for the master to sign and return the certificate.
The --server parameter specifies the name or address of the puppet master to connect to; by default, the agent connects to the host named "puppet".
To change the default, edit the PUPPET_SERVER=puppet option in /etc/sysconfig/puppet.
The --no-daemonize parameter keeps the puppet client running in the foreground.
The --verbose parameter makes the client log verbosely.
On the master side:
[root@vm1 puppet]# puppet cert list   # list all certificates waiting to be signed
"vm2.example.com" (SHA256) CD:BD:13:D0:B8:46:07:F2:B7:AE:00:C4:E6:E9:E1:A4:92:F6:A4:F1:AB:F7:FF:8D:BE:B0:B7:90:E1:7B:A8:C0
[root@vm1 puppet]# puppet cert sign vm2.example.com   # sign the certificate
Signed certificate request for vm2.example.com
Removing file Puppet::SSL::CertificateRequest vm2.example.com at '/var/lib/puppet/ssl/ca/requests/vm2.example.com.pem'
To sign all pending certificates at once, run:
[root@vm1 puppet]# puppet cert sign --all
[root@vm1 puppet]# puppet cert clean vm2.example.com   # revoke and delete a signed certificate
Within about two minutes of the certificate being signed, the agent prints:
Info: Caching certificate for vm2.example.com
Starting Puppet client version 3.0.0
Info: Caching certificate_revocation_list for ca
Info: Retrieving plugin
Info: Caching catalog for vm2.example.com
Info: Applying configuration version '20140901'
Finished catalog run in 0.13 seconds
Automatic Verification:
On the server side, edit puppet.conf:
[root@vm1 puppet]# vim /etc/puppet/puppet.conf
[main]
    autosign = true
    # sign all client requests automatically
Then create the autosign.conf file in /etc/puppet with the following content:
# vim /etc/puppet/autosign.conf
*.example.com   # allow all hosts in the example.com domain
[root@vm1 puppet]# service puppetmaster reload
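How the autosign.conf wildcard matches can be illustrated with a shell glob (a stand-in for Puppet's own matching, shown here for illustration only):

```shell
# Classify a host name the way the *.example.com wildcard would.
match() {
  case "$1" in
    *.example.com) echo "autosign"    ;;
    *)             echo "manual sign" ;;
  esac
}
match vm2.example.com
match evil.example.org
```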
On the client side, simply run:
# puppet agent    (or: # service puppet start)
In practice the client's host name is sometimes changed, in which case the certificate must be regenerated:
1) On the server: puppet cert clean vm2.example.com   # the old client host name to remove
2) On the client: rm -fr /var/lib/puppet/ssl/*
3) On the client: puppet agent --server vm1.example.com --no-daemonize --verbose
Puppet resource definitions
The following resources are defined in /etc/puppet/manifests/site.pp. If no node is specified, they apply to every authenticated client.
1. Create a file
[root@vm1 puppet]# vim /etc/puppet/fileserver.conf   # add the following lines:
[files]
  path /etc/puppet/files
  allow *.example.com
[root@vm1 puppet]# service puppetmaster reload   # reload the service
[root@vm1 manifests]# vim site.pp
file { "/mnt/testfile":                  # create /mnt/testfile on the client
  source => "puppet:///files/passwd",    # source: /etc/puppet/files/passwd on the server
  # source => "/etc/passwd",             # alternative source: /etc/passwd on the client (a resource may only set source once)
}
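When both locations should be tried, Puppet accepts an array of sources and uses the first one that exists, which avoids the invalid duplicate-parameter form; a sketch:

```shell
# Manifest fragment using an array source: first match wins.
cat > /tmp/testfile.pp <<'EOF'
file { "/mnt/testfile":
  source => ["puppet:///files/passwd", "/etc/passwd"],
}
EOF
```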
2. Software Package Definition
Package {"httpd": ensure => present; # Install httpd
"Vsftpd": ensure => absent # uninstall vsftpd
}
3. Service Definition
Service {"httpd": ensure => running; # Start httpd
"Vsftpd": ensure => stopped # disable vsftpd
}
4. Group Definition
Group {"wonder": gid => 600}
5. User Definition
User {"wonder": # create a wonder user
Uid = & gt; 600,
Gid = & gt; 600,
Home => "/home/wonder ",
Shell => "/bin/bash ",
Password => westos
}
File {"/home/wonder ":
Owner => wonder,
Group => wonder,
Mode = & gt; 700,
Ensure => directory
}
File {"/home/wonder/. bash_profile ":
Source => "/etc/skel/. bash_profile ",
Owner => wonder,
Group => wonder
}
File {"/home/wonder/. bashrc ":
Source => "/etc/skel/. bashrc ",
Owner => wonder,
Group => wonder
}
User {"test": uid => 900, # create a test user
Home => "/home/test ",
Shell => "/bin/bash ",
Provider => useradd,
Managehome => true,
Ensure => present
}
Exec {"echo westos | passwd -- stdin test ":
Path => "/usr/bin:/usr/sbin:/bin ",
Onlyif => "id test"
}
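The onlyif guard is what keeps the exec resource from firing on every run: the password command only runs when the check command (`id test`) exits 0, i.e. once the user exists. A shell sketch of that guard logic:

```shell
# Run the action only when the user exists, mirroring exec + onlyif.
guard_then_run() {
  if id "$1" >/dev/null 2>&1; then
    echo "setting password for $1"
  else
    echo "skipping: no such user $1"
  fi
}
guard_then_run "$(id -un)"        # the current user always exists
guard_then_run no-such-user-xyz   # a name that should not exist
```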
6. Mount the File System
Mount {"/mnt": # the nfs service must be enabled on the 172.25.254.252 host.
Device => "172.25.254.252:/var/ftp/pub ",
Fstype => "nfs ",
Options => "defaults ",
Ensure => absent
}
Automatically mount the file system and synchronize the fstab file. If you need to unmount the file system, change it to absent.
7. crontab task
cron { "echo":   # write the time to /tmp/echo every 10 minutes between 02:00 and 04:00
  command => "/bin/echo `/bin/date` > /tmp/echo",   # backticks assumed; the published text had plain quotes, which would echo the literal string
  user    => root,
  hour    => ['2-4'],
  minute  => '*/10',
}
# The job is generated under /var/spool/cron on the client.
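For reference, the crontab line such a resource produces follows the standard field order; a sketch of the assumed layout:

```shell
# Assemble a crontab line (field order: minute hour day-of-month month
# day-of-week command) from the resource's minute and hour values.
minute='*/10'; hour='2-4'
printf '%s %s * * * %s\n' "$minute" "$hour" "/bin/echo \`/bin/date\` > /tmp/echo"
```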
Definition of different nodes:
1. Edit site. pp on puppetmaster
[root@vm1 puppet]# vim /etc/puppet/manifests/site.pp   # add:
import "nodes/*.pp"
2. Create a node File
[root@vm1 puppet]# vim /etc/puppet/manifests/nodes/vm2.pp
node 'vm2' {
  package { "httpd": ensure => present }
}
[root@vm1 puppet]# vim /etc/puppet/manifests/nodes/vm3.pp
node 'vm3' {
  user { "test":
    uid        => 900,
    home       => "/home/test",
    shell      => "/bin/bash",
    provider   => useradd,
    managehome => true,
    ensure     => present,
  }
  exec { "echo westos | passwd --stdin test":
    path   => "/usr/bin:/usr/sbin:/bin",
    onlyif => "id test",
  }
}
Writing a module:
[root@vm1 puppet]# mkdir -p /etc/puppet/modules/httpd/{files,manifests,templates}
[root@vm1 puppet]# cd /etc/puppet/modules/httpd/manifests
[root@vm1 manifests]# vim install.pp
class httpd::install {
  package { "httpd":
    ensure => present,
  }
}
[root@vm1 manifests]# vim config.pp
class httpd::config {
  file { "/etc/httpd/conf/httpd.conf":
    ensure  => present,
    source  => "puppet:///modules/httpd/httpd.conf",
    # the actual file lives in /etc/puppet/modules/httpd/files/httpd.conf
    require => Class["httpd::install"],
    notify  => Class["httpd::service"],
  }
}
[root@vm1 manifests]# vim service.pp
class httpd::service {
  service { "httpd":
    ensure  => running,
    require => Class["httpd::install", "httpd::config"],
  }
  file { "/var/www/html/index.html":          # add a web home page
    source => "puppet:///files/index.html",
  }
}
[root@vm1 manifests]# vim init.pp
class httpd {
  include httpd::install, httpd::config, httpd::service
}
[root@vm1 manifests]# vim /etc/puppet/manifests/nodes/vm2.pp
node 'vm2' {
  include httpd
}
[root@vm1 manifests]# service puppetmaster reload
Template application (adding a virtual host configuration):
Templates are stored in the templates directory and end with .erb.
[root@vm1 manifests]# vim /etc/puppet/modules/httpd/manifests/init.pp   # add the following:
define httpd::vhost ($domainname) {
  # file { "/etc/httpd/conf/httpd.conf":
  #   content => template("httpd/httpd.conf.erb"),
  # }
  file { "/etc/httpd/conf.d/${domainname}_vhost.conf":
    content => template("httpd/httpd_vhost.conf.erb"),
    require => Class["httpd::install"],
    notify  => Class["httpd::service"],
  }
  file { "/var/www/$domainname":
    ensure => directory,
  }
  file { "/var/www/$domainname/index.html":
    content => $domainname,
  }
}
[root@vm1 manifests]# vim /etc/puppet/modules/httpd/templates/httpd_vhost.conf.erb
<VirtualHost *:80>
    ServerName <%= domainname %>
    DocumentRoot /var/www/<%= domainname %>
    ErrorLog logs/<%= domainname %>_error.log
    CustomLog logs/<%= domainname %>_access.log common
</VirtualHost>
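What template() does with these placeholders can be mimicked with a toy substitution (real ERB evaluates Ruby; this covers only the simple-variable case used here):

```shell
# Replace the ERB placeholder with the value of domainname.
domainname=server2.example.com
line='ServerName <%= domainname %>'
echo "${line//<%= domainname %>/$domainname}"
```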
[root@vm1 manifests]# vi /etc/puppet/manifests/nodes/vm2.pp
node 'vm2' {
  include httpd
  httpd::vhost { 'server2.example.com':
    domainname => "server2.example.com",
  }
}
Puppet Dashboard installation (for managing Puppet through the web)
Dependency:
* Ruby 1.8.7
* RubyGems
* Rake >= 0.8.3
* MySQL server 5.x
* Ruby-MySQL bindings 2.7.x or 2.8.x
Required packages: puppet-dashboard-1.2.12-1.el6.noarch.rpm and rubygem-rake-0.8.7-2.1.el6.noarch.rpm
[root@vm1 manifests]# yum localinstall -y puppet-dashboard-1.2.12-1.el6.noarch.rpm rubygem-rake-0.8.7-2.1.el6.noarch.rpm ruby-mysql-2.8.2-1.el6.x86_64.rpm
[root@vm1 manifests]# yum install -y mysql-server
[root@vm1 manifests]# /etc/init.d/mysqld start
Configure the MySQL database:
mysql> create database dashboard_production character set utf8;
Query OK, 1 row affected (0.00 sec)
mysql> create user 'dashboard'@'localhost' identified by 'westos';
Query OK, 0 rows affected (0.01 sec)
mysql> grant all privileges on dashboard_production.* to 'dashboard'@'localhost';
Query OK, 0 rows affected (0.00 sec)
# cd /usr/share/puppet-dashboard/
[root@vm1 puppet-dashboard]# vim config/database.yml   # keep only the production section
production:
  database: dashboard_production
  username: dashboard
  password: westos
  encoding: utf8
  adapter: mysql
[root@vm1 puppet-dashboard]# rake RAILS_ENV=production db:migrate
# creates the databases and tables the dashboard needs
The default time zone of puppet-dashboard is wrong for this setup and needs to be changed:
[root@vm1 puppet-dashboard]# vim /usr/share/puppet-dashboard/config/settings.yml
time_zone: 'Beijing'
Start the service:
[root@vm1 puppet-dashboard]# service puppet-dashboard start
Starting Puppet Dashboard: => Booting WEBrick
=> Rails 2.3.14 application starting on http://0.0.0.0:3000
[ OK ]
[root@vm1 puppet-dashboard]# chmod 0666 /usr/share/puppet-dashboard/log/production.log
[root@vm1 puppet-dashboard]# service puppet-dashboard-workers start
Real-time report collection:
On the server:
[root@vm1 ~]# vim /etc/puppet/puppet.conf
[main]
# add the following two lines
reports = http
reporturl = http://172.25.254.1:3000/reports
[root@vm1 ~]# service puppetmaster reload
On the client:
[root@vm2 ~]# vim /etc/puppet/puppet.conf   # add the following lines
[agent]
report = true
[root@vm2 ~]# service puppet reload
Once the client has Puppet installed and authentication is complete, the setup works. But how do we make the client synchronize with the server automatically, how often does it do so by default, and how can we change the interval? This requires configuring the client:
(1) Configure the puppet parameters:
[root@vm2 ~]# vim /etc/sysconfig/puppet
PUPPET_SERVER=puppet.example.com   # address of the puppet master
PUPPET_PORT=8140                   # puppet listening port
PUPPET_LOG=/var/log/puppet.log     # local puppet log
# PUPPET_EXTRA_OPTS=--waitforcert=500   # how long to wait for certificate signing; left unchanged here
(2) By default the client synchronizes with the server every half hour; this interval can be changed:
[root@vm2 ~]# vim /etc/puppet/puppet.conf
[agent]
runinterval = 60   # synchronize with the server every 60 seconds
[root@vm2 ~]# service puppet reload
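A quick sanity check on that change: the default runinterval is 1800 seconds (30 minutes), so dropping it to 60 multiplies the number of agent check-ins and the load on the master accordingly.

```shell
# Compare agent check-ins per day at the two intervals.
default=1800; custom=60
echo "runs per day at default: $((86400 / default))"
echo "runs per day at 60s:     $((86400 / custom))"
```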
Optimization: Puppet's built-in WEBrick HTTP server can be replaced with Nginx + Passenger to handle HTTPS requests and to load-balance the puppet master; see the separate article on Nginx + Passenger load balancing for Puppet.