OpenStack Controller HA Test Environment Build Record (III): Configuring HAProxy


Back up haproxy.cfg, then edit it:
# /etc/haproxy/haproxy.cfg

global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    maxconn 4000
    pidfile /var/run/haproxy.pid
    user haproxy

defaults
    log global
    maxconn 4000
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout check 10s

listen dashboard_cluster
    bind 10.0.0.10:443
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller2 10.0.0.12:443 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.13:443 check inter 2000 rise 2 fall 5
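Before handing the service over to the cluster, it is worth validating the file; haproxy's -c flag only parses the configuration and reports errors, it does not start the proxy:
# haproxy -c -f /etc/haproxy/haproxy.cfg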


Edit haproxy.cfg on each node. Since this is only a HAProxy test, it load-balances the dashboard alone; in fact, the dashboard is not even installed on these nodes.


Because the cluster's HAProxy resource must be able to bind to the VIP even on a node that does not currently hold the address, the kernel parameters of each node need to be modified:
# echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
# sysctl -p
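To confirm the parameter took effect, read it back; a value of 1 lets processes bind addresses that are not (yet) assigned to a local interface, which is what allows HAProxy to bind the VIP on the standby node:
# sysctl net.ipv4.ip_nonlocal_bind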


Add a HAProxy service resource to the cluster:
# crm configure primitive haproxy lsb:haproxy op monitor interval="30s"
Here "lsb:haproxy" stands for the HAProxy service (an LSB init script).

ERROR: lsb:haproxy: got no meta-data, does this RA exist?
ERROR: lsb:haproxy: got no meta-data, does this RA exist?
ERROR: lsb:haproxy: no such resource agent
Still want to commit (y/n)? n

It seems the crm command does not recognize the "haproxy" service. Check which LSB services crm can currently see:
# crm ra list lsb
netconsole  network

netconsole and network live in the /etc/rc.d/init.d directory and are the only service scripts present there by default on CentOS 7, so crm evidently enumerates LSB agents from that directory. Create a service script for haproxy there (on each node):
# vi /etc/rc.d/init.d/haproxy

The contents are as follows:
#!/bin/bash
#
# haproxy: LSB-style init script that simply delegates to the systemd unit.

case "$1" in
start)
    systemctl start haproxy.service
    ;;
stop)
    systemctl stop haproxy.service
    ;;
status)
    systemctl status -l haproxy.service
    ;;
restart)
    systemctl restart haproxy.service
    ;;
*)
    echo "Usage: $0 {start|stop|status|restart}"
    ;;
esac


Remember to grant execute permission:
# chmod 755 /etc/rc.d/init.d/haproxy
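Pacemaker drives LSB resources through the script's exit codes, so it is worth verifying them by hand; per the LSB spec, status should return 0 while the service is running and 3 when it is stopped:
# service haproxy status
# echo $?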


Confirm that crm now recognizes "haproxy":
# crm ra list lsb
haproxy  netconsole  network
haproxy is now listed, and the service haproxy status command works as well, so try creating the HAProxy service resource again.
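The same primitive definition as before should now commit cleanly (repeated here for clarity):
# crm configure primitive haproxy lsb:haproxy op monitor interval="30s"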


To view resource status:
# crm_mon
Last updated: Tue Dec 8 11:28:35 2015
Last change: Tue Dec 8 11:28:28 2015
Stack: corosync
Current DC: controller2 (167772172) - partition with quorum
Version: 1.1.12-a14efad
2 Nodes configured
2 Resources configured


Online: [ controller2 controller3 ]

myvip (ocf::heartbeat:IPaddr2): Started controller2
haproxy (lsb:haproxy): Started controller3

The haproxy resource currently runs on controller3; checking the HAProxy service status on controller3 shows it is active:
# systemctl status -l haproxy.service
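As an optional failover test, put controller3 in standby; the cluster should move haproxy to controller2. Remember to bring the node back online afterwards:
# crm node standby controller3
# crm_mon -1
# crm node online controller3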


Define a colocation constraint so that HAProxy and the VIP always run on the same node:
# crm configure colocation haproxy-with-public-ips inf: haproxy myvip

Define an ordering constraint so the VIP is taken over before HAProxy starts:
# crm configure order haproxy-after-vip mandatory: myvip haproxy
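To double-check the two constraints, dump the current cluster configuration (crm configure show prints all defined resources and constraints):
# crm configure show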

--------------------------------------------------------------------------------------------
Deploying HAProxy instances on the OpenStack controller nodes has become common practice. An odd number of instances (3, 5, and so on) is preferred, so that the cluster managing them can keep quorum when nodes fail.

The full haproxy.cfg example from the official site is as follows:
global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    maxconn 4000
    pidfile /var/run/haproxy.pid
    user haproxy

defaults
    log global
    maxconn 4000
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout check 10s

listen dashboard_cluster
    bind <virtual ip>:443
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller1 10.0.0.1:443 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:443 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:443 check inter 2000 rise 2 fall 5

listen galera_cluster
    bind <virtual ip>:3306
    balance source
    option httpchk
    server controller1 10.0.0.4:3306 check port 9200 inter 2000 rise 2 fall 5
    server controller2 10.0.0.5:3306 backup check port 9200 inter 2000 rise 2 fall 5
    server controller3 10.0.0.6:3306 backup check port 9200 inter 2000 rise 2 fall 5
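The port 9200 health check in the Galera section assumes an HTTP listener on each database node that reports Galera health, typically the clustercheck script wrapped in xinetd; a minimal sketch of such an xinetd service follows (the service name mysqlchk and the script path are assumptions, adjust them to the actual setup):

service mysqlchk
{
    # answers HAProxy's httpchk on port 9200 with 200/503
    disable        = no
    flags          = REUSE
    socket_type    = stream
    port           = 9200
    wait           = no
    user           = nobody
    server         = /usr/bin/clustercheck
    log_on_failure += USERID
    only_from      = 0.0.0.0/0
    per_source     = UNLIMITED
}

The backup keywords on controller2 and controller3 keep all MySQL traffic on a single writer node unless that node's check fails.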

listen glance_api_cluster
    bind <virtual ip>:9292
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller1 10.0.0.1:9292 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:9292 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:9292 check inter 2000 rise 2 fall 5

listen glance_registry_cluster
    bind <virtual ip>:9191
    balance source
    option tcpka
    option tcplog
    server controller1 10.0.0.1:9191 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:9191 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:9191 check inter 2000 rise 2 fall 5

listen keystone_admin_cluster
    bind <virtual ip>:35357
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller1 10.0.0.1:35357 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:35357 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:35357 check inter 2000 rise 2 fall 5

listen keystone_public_internal_cluster
    bind <virtual ip>:5000
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller1 10.0.0.1:5000 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:5000 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:5000 check inter 2000 rise 2 fall 5

listen nova_ec2_api_cluster
    bind <virtual ip>:8773
    balance source
    option tcpka
    option tcplog
    server controller1 10.0.0.1:8773 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:8773 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:8773 check inter 2000 rise 2 fall 5

listen nova_compute_api_cluster
    bind <virtual ip>:8774
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller1 10.0.0.1:8774 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:8774 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:8774 check inter 2000 rise 2 fall 5

listen nova_metadata_api_cluster
    bind <virtual ip>:8775
    balance source
    option tcpka
    option tcplog
    server controller1 10.0.0.1:8775 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:8775 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:8775 check inter 2000 rise 2 fall 5

listen cinder_api_cluster
    bind <virtual ip>:8776
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller1 10.0.0.1:8776 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:8776 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:8776 check inter 2000 rise 2 fall 5

listen ceilometer_api_cluster
    bind <virtual ip>:8777
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller1 10.0.0.1:8777 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:8777 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:8777 check inter 2000 rise 2 fall 5

listen spice_cluster
    bind <virtual ip>:6080
    balance source
    option tcpka
    option tcplog
    server controller1 10.0.0.1:6080 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:6080 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:6080 check inter 2000 rise 2 fall 5

listen neutron_api_cluster
    bind <virtual ip>:9696
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller1 10.0.0.1:9696 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:9696 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:9696 check inter 2000 rise 2 fall 5

listen swift_proxy_cluster
    bind <virtual ip>:8080
    balance source
    option tcplog
    option tcpka
    server controller1 10.0.0.1:8080 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:8080 check inter 2000 rise 2 fall 5
    server controller3 10.0.0.3:8080 check inter 2000 rise 2 fall 5
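Once HAProxy is running and holds the VIP, each frontend can be exercised end to end through it, for example the dashboard (replace <virtual ip> with the real address; -k skips verification of the self-signed test certificate):
# curl -k https://<virtual ip>/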
