Understanding OpenStack Swift (1): OpenStack + three-node Swift cluster + HAProxy + UCARP installation and configuration


This series of articles studies OpenStack Swift, covering environment setup, principles and architecture, monitoring, and performance.

(1) OpenStack + Three-node Swift cluster + HAProxy + UCARP installation and configuration

(2) Swift Principles and architecture

(3) Swift monitoring

(4) Swift performance

The system to be implemented has the following characteristics:

    • Three peer physical nodes, with all Swift services deployed on every node
    • Open-source UCARP controls a VIP, which is bound to the NIC of one of the three physical nodes
    • Open-source HAProxy provides load balancing
    • Swift TempURL is enabled

1. Swift cluster installation and configuration

Swift (the OpenStack Kilo release) is installed following the community's official documentation; the process is straightforward, so only the key points are listed here:

    • Each physical node runs the Ubuntu operating system.
    • There are three physical nodes, and all Swift services are deployed on each of them, so the three nodes are peers.
    • Only one network is used. In practice, the network used for the Swift cluster's data replication could be separated out.
    • Two disks are added per node, formatted with the XFS file system, and used as Swift's data disks (a disk-preparation sketch follows this list).
    • Swift uses Keystone for user authentication.
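
For reference, the data disks can be prepared as in the standard Swift installation guide. A minimal sketch, assuming the two disks appear as /dev/sdb and /dev/sdc (the device names are assumptions, not from the original setup):

mkfs.xfs /dev/sdb                # format each data disk with XFS
mkfs.xfs /dev/sdc
mkdir -p /srv/node/sdb /srv/node/sdc
# fstab entries so the mounts survive a reboot (options per the install guide)
echo "/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2" >> /etc/fstab
echo "/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2" >> /etc/fstab
mount /srv/node/sdb && mount /srv/node/sdc
chown -R swift:swift /srv/node   # the Swift services run as the swift user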

Use the following configuration in admin-openrc.sh:

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=***
export OS_AUTH_URL=http://controller:35357/v3
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2
export OS_AUTH_VERSION=3
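
These variables are loaded into the shell before using the CLI; a quick sanity check afterwards:

source admin-openrc.sh
swift stat        # should print the account's statistics if authentication works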

For common operations:

root@swift1:~/s1# swift upload conatiner10 aa
aa
root@swift1:~/s1# swift list conatiner10
aa
root@swift1:~/s1# swift download conatiner10 aa
aa [... 334.404 MB/s]
root@swift1:~/s1# swift delete conatiner10 aa
aa
root@swift1:~/s1# swift list conatiner10
root@swift1:~/s1#
2. HAProxy Installation and Configuration

Install HAProxy on each node, and then modify the configuration file:

root@swift1:~/s1# vi /etc/haproxy/haproxy.cfg

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend localnodes
    bind *:1002                     # HAProxy listens on port 1002
    mode http
    default_backend swift-cluster
    maxconn 2000
    option forwardfor

backend swift-cluster
    mode http
    balance roundrobin              # use the round-robin strategy
    option httpchk HEAD /healthcheck HTTP/1.0
    option forwardfor               # with mode "http", forwardfor preserves the original source IP in the X-Forwarded-For header
    server proxy1 9.115.251.235:8080 weight 5 check inter 5s    # node 1
    server proxy2 9.115.251.233:8080 weight 5 check inter 5s    # node 2
    server proxy3 9.115.251.234:8080 weight 5 check inter 5s    # node 3

Then run /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg to start the HAProxy process.
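
To check that HAProxy is actually forwarding requests to the Swift proxies, you can query Swift's healthcheck middleware (part of the default proxy pipeline) through the frontend; a quick sketch:

curl -i http://127.0.0.1:1002/healthcheck
# expected: HTTP/1.1 200 OK with the body "OK"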

3. Ucarp Configuration and Installation

UCARP allows multiple hosts to share a virtual IP address for automatic failover: when one of the hosts goes down, another host automatically takes over the service. UCARP is the Linux implementation of the CARP protocol (Common Address Redundancy Protocol), which first appeared on OpenBSD and has since been ported to several other Unix platforms; its official website is http://www.ucarp.org/project/ucarp. CARP is characterized by very low overhead, encrypted exchange of information between hosts, and no need for any additional network link between the redundant hosts.

Install UCARP on the three nodes, and create three shell scripts on each node (note that the IP addresses must be adapted per node):

root@swift1:/etc/ucarp# cat master.sh        # starts the ucarp process, specifying 9.115.251.238 as the VIP
#!/bin/bash
/usr/sbin/ucarp -i eth0 -v 10 -p gw22 -a 9.115.251.238 -u /etc/ucarp/master-up.sh -d /etc/ucarp/master-down.sh -s 9.115.251.235 -P -B

root@swift1:/etc/ucarp# cat master-up.sh     # script run when ucarp makes this node the VIP master
#!/bin/bash
GATEWAY=9.115.251.1
/sbin/ip addr add 9.115.251.238/24 dev eth0
/bin/hostname swiftproxy
/sbin/route add default gw $GATEWAY
service httpd start

root@swift1:/etc/ucarp# cat master-down.sh   # script run when this node is no longer the VIP master
#!/bin/bash
GATEWAY=9.115.251.1
/sbin/ip addr del 9.115.251.238/24 dev eth0
/bin/hostname swift1
/sbin/route add default gw $GATEWAY
service httpd stop

In simple terms, UCARP is similar to a simplified version of VRRP. It elects one of the three servers as the master node that holds the VIP; the other two act as standby nodes, and when the master fails one of them is promoted to take over the VIP and continue the service. The scripts are also fairly simple: they add or remove the VIP on the physical NIC and adjust the hostname and default gateway.

Finally, run master.sh on each node to start the ucarp process.
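
A simple failover test (a sketch using the addresses configured above): stop the ucarp process on the current master and watch the VIP move to another node.

# on the current master node
pkill ucarp
# on each node, check where the VIP now lives
ip addr show eth0 | grep 9.115.251.238
# from a client, the VIP should answer again after a short gap
ping 9.115.251.238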

4. Configuring Glance to Use Swift

(1) Create the OpenStack endpoint, using the VIP managed by UCARP and the port managed by HAProxy:

root@swift1:~/s1# openstack endpoint show 1f107e61c4024f0a9655fa7276a09c61
+--------------+-------------------------------------------------+
| Field        | Value                                           |
+--------------+-------------------------------------------------+
| adminurl     | http://9.115.251.238:1002                       |
| enabled      | True                                            |
| id           | 1f107e61c4024f0a9655fa7276a09c61                |
| internalurl  | http://9.115.251.238:1002/v1/AUTH_%(tenant_id)s |
| publicurl    | http://9.115.251.238:1002/v1/AUTH_%(tenant_id)s |
| region       | RegionOne                                       |
| service_id   | 3281409aec0c4628a3360bf9403e45e8                |
| service_name | swift                                           |
| service_type | object-store                                    |
+--------------+-------------------------------------------------+

(2) Configuring the Glance API

The Glance V2 API is used (glance-registry is not used with V2), together with the Keystone V3 API. Modify the /etc/glance/glance-api.conf file:

swift_store_auth_version = 3
swift_store_auth_address = http://controller:35357/v3/
swift_store_user = glance
swift_store_key = 1111

One remaining doubt: glance here is the service account rather than an end-user account, and according to some articles you need to configure reseller_prefix on the proxy node, but that option was left unconfigured in this environment.

(3) Next, you can use the glance CLI to save images to Swift.
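
For example, a minimal sketch with the Kilo-era glance CLI (the CirrOS image file name is an assumption; the container name depends on swift_store_container):

glance image-create --name cirros --disk-format qcow2 \
  --container-format bare --file cirros-0.3.4-x86_64-disk.img
# the image objects should now be visible in the Swift container used by Glance
swift list glance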

5. Swift Tempurl

This is a relatively simple Swift feature, but it took quite a while to get working because of problems with the various documents (inconsistent, incomplete, out of date).

(1) Configuration

Modify the /etc/swift/proxy-server.conf file to add tempurl to the middleware in the main pipeline. Note that it must be placed before the auth middleware, because the middleware are invoked in pipeline order. Then enable the HTTP methods that temp URLs may use.

[pipeline:main]
pipeline = ... tempurl authtoken keystoneauth container-quotas account-quotas slo dlo proxy-logging proxy-server

[filter:tempurl]
use = egg:swift#tempurl
# The methods allowed with Temp URLs.
methods = GET HEAD PUT POST DELETE

In addition, you need to make sure that the [filter:authtoken] section sets delay_auth_decision = true.

(2) Add the Temp-URL-Key metadata to the account, setting it to a secret key

root@swift1:~/s1# swift post -m "temp-url-key:1111"    # set the key
root@swift1:~/s1# swift stat                           # view the account metadata
        Account: AUTH_dea8b51d28bf41599e63464828102759   (used in step (3) below)
     Containers: 5
        Objects: 11
          Bytes: 416894908
Containers in policy "policy-0": 5
   Objects in policy "policy-0": 11
     Bytes in policy "policy-0": 416894908
           Meta Temp-Url-Key: 1111

(3) Generate the temp URL. Note that the part after AUTH_ is not the account name (such as "admin") but the project ID. You can also use the swift stat command to look up this value in the Account field.

root@swift1:~/s1# swift tempurl GET 3600 /v1/AUTH_dea8b51d28bf41599e63464828102759/container1/1 1111
/v1/AUTH_dea8b51d28bf41599e63464828102759/container1/1?temp_url_sig=fc9f80211aa5c6262f62ca4d57db65b25f1cef7a&temp_url_expires=1447087996
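
The signature is just an HMAC-SHA1 over the method, the expiry time, and the object path, so it can also be computed by hand; a minimal sketch with openssl, using the values from the example above:

expires=1447087996
path=/v1/AUTH_dea8b51d28bf41599e63464828102759/container1/1
key=1111
printf "GET\n%s\n%s" "$expires" "$path" | openssl dgst -sha1 -hmac "$key"
# the hex digest should equal the temp_url_sig above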

(4) Use the temp URL. You need to keep the URL intact (quote it on the command line), or you will get a 401 error.

" http://9.115.251.238:1002/v1/AUTH_dea8b51d28bf41599e63464828102759/container1/1?temp_url_sig= fc9f80211aa5c6262f62ca4d57db65b25f1cef7a&temp_url_expires=1447087996"222222222222

It is also important that every node (preferably) uses the UTC time zone, and NTP must be used to keep the clocks consistent.
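
A quick consistency check on each node (assuming the ntp package is installed):

date -u     # all nodes should print (almost) the same UTC time
ntpq -p     # the NTP peers should be reachable and synchronized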

(5) Debugging when a 401 error occurs

By default, Swift logs are written to the /var/log/syslog file. Here are some debugging tips:

(a) Set the proxy server's log level to DEBUG:

[app:proxy-server]
# You can override the default log routing for this app here:
set log_name = proxy-server
set log_level = DEBUG

[filter:tempurl]
set log_name = tempurl
set log_level = DEBUG

(b) Setting the number of workers to 0 makes debugging easier (the default is 2):

workers = 0
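
After changing proxy-server.conf, restart the proxy service and follow the log; for example:

swift-init proxy-server restart
tail -f /var/log/syslog | grep -E 'proxy-server|tempurl'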

(c) Logger output can be added to /usr/lib/python2.7/dist-packages/swift/common/middleware/tempurl.py.

Then you can see the detailed logs for proxy server and Tempurl:

Nov  9 15:55:48 swift3 proxy-server: 9.115.251.219 9.115.251.233 09/Nov/2015/15/55/48 GET /v1/AUTH_dea8b51d28bf41599e63464828102759/container1/1%3Ftemp_url_expires%3D1447087996%26temp_url_sig%3Dfc9f80211aa5c6262f62ca4d57db65b25f1cef7a HTTP/1.0 200 - curl/7.35.0 - - 12 - tx9ce884232d5a48bb9b5d8-005640c204 - 0.0261 - - 1447084548.853318930 1447084548.879395962

Nov  9 15:55:48 swift3 tempurl: hmac_vals is ['fc9f80211aa5c6262f62ca4d57db65b25f1cef7a'] (txn: tx9ce884232d5a48bb9b5d8-005640c204)

If a non-UTC time zone is used, the two request timestamps at the end of the proxy-server log line above will not line up with the temp_url_expires value, which causes the 401 problem.

Swift itself will be analyzed in more detail in the next article.

Reference Documentation:

    • Implementing virtual IP failover with Ucarp
