Using GlusterFS as the backing store for Kubernetes PersistentVolumes / PersistentVolumeClaims: highly available RabbitMQ, MySQL, and Redis


How to set up a GlusterFS cluster is covered all over the internet, so I won't repeat it here.

We can use this to get single-instance high availability: even if a node goes down, the Kubernetes master will resurrect the dead Pod on any other available node, and with shared storage it comes back with its data.

Of course, I am running this in my own environment. Since GlusterFS traffic goes over the network, data transfer carries a certain performance penalty and the setup is demanding on the network; small-file performance is also not great.

This post records the single-instance HA setup for RabbitMQ; MySQL, MongoDB, Redis, and so on all follow the same pattern.

A volume named env-dev was created in advance.

Mount it on any client machine:

```shell
mount -t glusterfs 192.168.91.135:/env-dev /mnt/env/dev
```

Pre-create the directory we will need:

```shell
mkdir -p /mnt/env/dev/rabbitmq/mnesia
```
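To make the client mount survive reboots, an /etc/fstab entry along these lines could be added (a sketch matching the server IP and volume above; `backupvolfile-server` is optional and assumes the second Gluster node):

```
192.168.91.135:/env-dev  /mnt/env/dev  glusterfs  defaults,_netdev,backupvolfile-server=192.168.91.136  0 0
```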

  

Write the GlusterFS Endpoints and Service:

```yaml
# pv-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: env-dev
subsets:
- addresses:
  - ip: 192.168.91.135
  - ip: 192.168.91.136
  ports:
  - port: 49152
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
  namespace: env-dev
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP
```
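Because the glusterfs Service above has no Pod selector, Kubernetes will not manage its Endpoints: the Endpoints object is maintained by hand, and its port must line up with the Service's targetPort (here the Gluster brick port 49152). A minimal consistency check, with plain dicts standing in for the parsed manifests:

```python
# Dicts standing in for the two manifests in pv-ep.yaml above.
endpoints = {
    "subsets": [{
        "addresses": [{"ip": "192.168.91.135"}, {"ip": "192.168.91.136"}],
        "ports": [{"port": 49152, "protocol": "TCP"}],
    }]
}
service = {"spec": {"ports": [{"port": 49152, "targetPort": 49152, "protocol": "TCP"}]}}

def endpoints_match_service(ep, svc):
    """True if every Service targetPort is served by some Endpoints port."""
    ep_ports = {p["port"] for s in ep["subsets"] for p in s["ports"]}
    return all(p["targetPort"] in ep_ports for p in svc["spec"]["ports"])

print(endpoints_match_service(endpoints, service))  # → True
```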

Write the PV. Note that path here is the volume name plus the subdirectory inside it:

```yaml
# rabbitmq-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq-pv
  labels:
    type: glusterfs
spec:
  storageClassName: rabbitmq-dir
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs
    path: "env-dev/rabbitmq/mnesia"
    readOnly: false
```

Write the PVC:

```yaml
# rabbitmq-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbitmq-pvc
  namespace: env-dev
spec:
  storageClassName: rabbitmq-dir
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
```

Create the Endpoints, PV, and PVC:

```shell
kubectl apply -f pv-ep.yaml
kubectl apply -f rabbitmq-pv.yaml
kubectl apply -f rabbitmq-pvc.yaml
```

 

How it is used — the key parts are the volumeMounts / volumes sections that reference rabbitmq-pvc (shown in red in the original post):

```yaml
# ha-rabbitmq.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ha-rabbitmq
  namespace: env-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ha-rabbitmq
  template:
    metadata:
      labels:
        app: ha-rabbitmq
    spec:
      #hostNetwork: true
      hostname: ha-rabbitmq
      terminationGracePeriodSeconds: 60
      containers:
      - name: ha-rabbitmq
        image: 192.168.91.137:5000/rabbitmq:3.7.7-management-alpine
        securityContext:
          privileged: true
        env:
        - name: "RABBITMQ_DEFAULT_USER"
          value: "rabbit"
        - name: "RABBITMQ_DEFAULT_PASS"
          value: "rabbit"
        ports:
        - name: tcp
          containerPort: 5672
          hostPort: 5672
        - name: http
          containerPort: 15672
          hostPort: 15672
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 15672
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 15672
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        volumeMounts:
        - name: date
          mountPath: /etc/localtime
        - name: workdir
          mountPath: "/var/lib/rabbitmq/mnesia"
      volumes:
      - name: date
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: workdir
        persistentVolumeClaim:
          claimName: rabbitmq-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: ha-rabbitmq
  namespace: env-dev
  labels:
    app: ha-rabbitmq
spec:
  ports:
  - name: tcp
    port: 5672
    targetPort: 5672
  - name: http
    port: 15672
    targetPort: 15672
```
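With the probe settings above, it's worth knowing roughly how long an unhealthy container survives before the kubelet restarts it: the liveness probe first fires after initialDelaySeconds, and the container is killed after failureThreshold consecutive failures at periodSeconds intervals. A back-of-the-envelope helper (a conservative upper bound, assuming the probe fails from startup):

```python
def worst_case_restart_s(initial_delay, period, failure_threshold):
    """Approximate upper bound (seconds) from container start until the
    kubelet restarts a container whose liveness probe never succeeds."""
    return initial_delay + failure_threshold * period

# Values from the livenessProbe above: 20s delay, 10s period, 3 failures.
print(worst_case_restart_s(20, 10, 3))  # → 50
```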

 

Create the RabbitMQ Pod and Service:

```shell
kubectl create -f ha-rabbitmq.yaml
```

The Pod was scheduled onto the first node; take a look at the data files on the Gluster volume.

Create a virtual host in the management UI.

There is too much running in this environment to do a brutal power-off test, so simply delete the Pod and let the Deployment recreate it.

This time it was scheduled onto node 0.

Check the virtual host created a moment ago: still there — the data survived the Pod moving to another node.

 

HAProxy proxy

```
# haproxy.cfg
global
    chroot /usr/local
    daemon
    nbproc 1
    group nobody
    user nobody
    pidfile /haproxy.pid
    #ulimit-n 65536
    #spread-checks 5m
    #stats timeout 5m
    #stats maxconn 100

######## defaults ############
defaults
    mode tcp
    retries 3                 # after repeated failed connections the server is considered unavailable; tunable below
    option redispatch         # when the server for a serverId goes down, force-redirect to another healthy server
    option abortonclose       # under heavy load, abort requests that have been queued for too long
    maxconn 32000             # default maximum number of connections
    timeout connect 10s       # connection timeout
    timeout client 8h         # client timeout
    timeout server 8h         # server timeout
    timeout check 10s         # health-check timeout
    log 127.0.0.1 local0 err  # [err warning info debug]

######## MariaDB ############
listen mariadb
    bind 0.0.0.0:3306
    mode tcp
    balance leastconn
    server mariadb1 192.168.91.141:3306 check port 3306 inter 2s rise 1 fall 2 maxconn 1000
    server mariadb2 192.168.91.142:3306 check port 3306 inter 2s rise 1 fall 2 maxconn 1000
    server mariadb3 192.168.91.143:3306 check port 3306 inter 2s rise 1 fall 2 maxconn 1000

######## RabbitMQ ###########
listen rabbitmq
    bind 0.0.0.0:5672
    mode tcp
    balance leastconn
    server rabbitmq1 192.168.91.141:5672 check port 5672 inter 2s rise 1 fall 2 maxconn 1000
    server rabbitmq2 192.168.91.142:5672 check port 5672 inter 2s rise 1 fall 2 maxconn 1000
    server rabbitmq3 192.168.91.143:5672 check port 5672 inter 2s rise 1 fall 2 maxconn 1000

######## Redis ##############
listen redis
    bind 0.0.0.0:6379
    mode tcp
    balance leastconn
    server redis1 192.168.91.141:6379 check port 6379 inter 2s rise 1 fall 2 maxconn 1000
    server redis2 192.168.91.142:6379 check port 6379 inter 2s rise 1 fall 2 maxconn 1000
    server redis3 192.168.91.143:6379 check port 6379 inter 2s rise 1 fall 2 maxconn 1000
```
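The check parameters above determine how fast failover happens: with `inter 2s fall 2`, a dead backend is marked DOWN after two failed checks (about 4 s), and with `rise 1` it comes back after a single successful check (about 2 s). The arithmetic:

```python
def seconds_to_mark_down(inter_s, fall):
    """Approximate time for HAProxy to mark a backend DOWN: inter * fall."""
    return inter_s * fall

def seconds_to_mark_up(inter_s, rise):
    """Approximate time for HAProxy to bring a backend back UP: inter * rise."""
    return inter_s * rise

# Values from the server lines above: inter 2s, rise 1, fall 2.
print(seconds_to_mark_down(2, 2), seconds_to_mark_up(2, 1))  # → 4 2
```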

 

Nginx proxies the management UI.
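The original post's Nginx config is only shown as a screenshot, so here is a minimal sketch of what it could look like, assuming the management UI is exposed on the node IPs via hostPort 15672 as in the Deployment above (the server_name is hypothetical):

```nginx
upstream rabbitmq_ui {
    server 192.168.91.141:15672;
    server 192.168.91.142:15672;
    server 192.168.91.143:15672;
}

server {
    listen 80;
    server_name rabbitmq.example.com;  # hypothetical name, not from the original post

    location / {
        proxy_pass http://rabbitmq_ui;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```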

 
