Puppet automated high-availability cluster deployment

Source: Internet
Author: User
Tags: nginx, server


As a company's application demand grows, so does its number of servers, and with more servers a single puppetmaster starts to respond slowly, comes under heavy load, and parses requests slowly. Is there a way to optimize this? You can configure the puppetmaster to listen on multiple ports and put an Nginx web proxy in front of it, which increases the puppetmaster's capacity by at least a factor of ten.

I. Install and configure the Mongrel service:

Puppet's multi-port configuration requires the mongrel server type, which is not installed by default. Run the following commands on the puppetmaster server (the first one installs the EPEL repository for the corresponding Red Hat/CentOS release, which provides the package):

rpm -Uvh http://mirrors.sohu.com/Fedora-epel/5/x86_64/epel-release-5-4.noarch.rpm

yum install -y rubygem-mongrel

Edit /etc/sysconfig/puppetmaster (vi /etc/sysconfig/puppetmaster) and add the following two lines at the end, commenting out any existing lines that set the same options. They enable multiple ports and the mongrel server type:

PUPPETMASTER_PORTS=( 18140 18141 18142 18143 18144 )

PUPPETMASTER_EXTRA_OPTS="--servertype=mongrel --ssl_client_header=HTTP_X_SSL_SUBJECT"
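Once puppetmaster has been restarted with these settings (the restart step follows the Nginx configuration below), it is worth confirming that one Mongrel worker is actually listening on each of the five ports. A quick check, assuming net-tools (netstat) is installed:

    # Each of the ports 18140-18144 should show a LISTEN entry owned by a puppetmasterd/ruby process
    netstat -tlnp | grep ':1814[0-4]'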

II. Install and configure the Nginx server:

cd /usr/src && wget -c http://nginx.org/download/nginx-1.2.6.tar.gz && tar xzf nginx-1.2.6.tar.gz && cd nginx-1.2.6 && ./configure --prefix=/usr/local/nginx --with-http_ssl_module && make && make install
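Before writing the proxy configuration, it is worth confirming that the SSL module was really compiled in. A quick check, assuming the --prefix used above:

    # The printed configure arguments should include --with-http_ssl_module
    /usr/local/nginx/sbin/nginx -V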

The relevant part of the nginx.conf configuration file:

upstream puppetmaster {
    server 127.0.0.1:18140;
    server 127.0.0.1:18141;
    server 127.0.0.1:18142;
    server 127.0.0.1:18143;
    server 127.0.0.1:18144;
}

server {
    listen 8140;
    root /etc/puppet;
    ssl on;
    ssl_session_timeout 5m;

    # The puppetmaster server certificate paths

    ssl_certificate        /var/lib/puppet/ssl/certs/192-9-117-162-app.com.pem;
    ssl_certificate_key    /var/lib/puppet/ssl/private_keys/192-9-117-162-app.com.pem;

    ssl_client_certificate /var/lib/puppet/ssl/ca/ca_crt.pem;
    ssl_crl                /var/lib/puppet/ssl/ca/ca_crl.pem;
    ssl_verify_client      optional;

    # File sections
    location /production/file_content/files/ {
        types {}
        default_type application/x-raw;
        # Used mainly for pushing files; alias defines the on-disk path
        alias /etc/puppet/files/;
    }

    # Module files sections
    location ~ /production/file_content/modules/.+/ {
        root /etc/puppet/modules;
        types {}
        default_type application/x-raw;
        rewrite ^/production/file_content/modules/(.+)/(.+)$ /$1/files/$2 break;
    }

    location / {
        # Everything else goes to the load-balanced puppetmaster backends
        proxy_pass http://puppetmaster;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Client-Verify $ssl_client_verify;
        proxy_set_header X-SSL-Subject $ssl_client_s_dn;
        proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
        proxy_buffer_size 10m;
        proxy_buffers 1024 10m;
        proxy_busy_buffers_size 10m;
        proxy_temp_file_write_size 10m;
        proxy_read_timeout 120;
    }
}
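The two file_content locations let Nginx serve Puppet file resources straight from disk instead of handing those requests to the Ruby backends. They can be exercised by hand with curl; a sketch, where the host name and the test file /etc/puppet/files/test.txt are only examples:

    # Fetch a file source through the files/ location (host name and agent certname are placeholders)
    curl -k --cert /var/lib/puppet/ssl/certs/agent1.example.com.pem \
         --key /var/lib/puppet/ssl/private_keys/agent1.example.com.pem \
         https://puppetmaster.example.com:8140/production/file_content/files/test.txt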

Restart the puppetmaster service (/etc/init.d/puppetmaster restart), restart the Nginx web server, and test from a client.
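A straightforward client-side test is a manual agent run against the Nginx front end on port 8140; a sketch, where the server name is only an example and should match the certificate configured above:

    # Trigger a one-off catalog run against the Nginx-fronted master
    puppet agent --test --server puppetmaster.example.com

    # On very old agents the equivalent command is:
    # puppetd --test --server puppetmaster.example.com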

III. Deploying multiple Puppet master nodes:

To configure a cluster of multiple masters, you can share the certificates of master1 (192.168.33.10) and mount them on the other masters over NFS. The NFS configuration on 192.168.33.10 is as follows:

Contents of /etc/exports (vi /etc/exports):

/var/lib/puppet/ *(no_root_squash,rw,sync)
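After editing /etc/exports the share still has to be published before master2 can mount it. A minimal sketch, assuming the stock CentOS init scripts (on CentOS 6 the portmap service is called rpcbind):

    # Make sure the NFS services are running, then re-read /etc/exports
    service portmap start
    service nfs restart
    exportfs -ra

    # Confirm that the directory is exported
    showmount -e localhost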

Run the following command on master2 (192.168.33.11) to mount master1's SSL directory at the same local path:

mount -t nfs 192.168.33.10:/var/lib/puppet/ssl /var/lib/puppet/ssl
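To make the mount survive a reboot of master2, it can also be recorded in /etc/fstab; a sketch using the same paths as above:

    # /etc/fstab entry on master2 (single line)
    192.168.33.10:/var/lib/puppet/ssl  /var/lib/puppet/ssl  nfs  defaults  0 0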

Then restart the puppetmaster service on master2.

For master2 to provide the multi-port service as well, Mongrel must be installed on it too:

yum install -y rubygem-mongrel

At the same time, modify puppet.conf on master2 (/etc/puppet/puppet.conf): add bindaddress = 0.0.0.0 to the [main] section so that the master listens on all interfaces, and give /etc/sysconfig/puppetmaster the same PUPPETMASTER_PORTS and PUPPETMASTER_EXTRA_OPTS lines as on master1.
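For reference, a minimal sketch of the relevant fragment of master2's puppet.conf (only the bindaddress value comes from the text above; everything else in your [main] section stays as it is):

    [main]
        # listen on all interfaces so that master1's Nginx can reach master2's Mongrel ports
        bindaddress = 0.0.0.0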

master2's ports can then be added to the upstream block of master1's Nginx. The final upstream configuration in master1's nginx.conf is as follows:

upstream puppetmaster {
    server 127.0.0.1:18140;
    server 127.0.0.1:18141;
    server 127.0.0.1:18142;
    server 127.0.0.1:18143;
    server 127.0.0.1:18144;
    # added 2014-10-10
    server 192.168.33.11:18140;
    server 192.168.33.11:18141;
    server 192.168.33.11:18142;
    server 192.168.33.11:18143;
    server 192.168.33.11:18144;
}
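After extending the upstream block, validate and reload Nginx on master1 rather than restarting it; a sketch, assuming the source-build prefix from section II:

    # Syntax-check the edited configuration
    /usr/local/nginx/sbin/nginx -t

    # Reload the workers without dropping in-flight agent connections
    /usr/local/nginx/sbin/nginx -s reload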


It is not difficult to build a keepalived high-availability cluster. For more exciting articles, please stay tuned!

Puppet Learning Series:

Puppet Learning 1: Installation and simple instance applications
Puppet 2: Simple module configuration and application
Research on three backup and recovery solutions for the Puppet agent
Register your Puppet node in a safer way
Deep understanding of Puppet syntax and working mechanism through SSH configuration
Puppet uses Nginx multiple ports for load balancing
Puppet centralized configuration management system details
C/S mode instance of Puppet in CentOS (5 and 6)
