Using GlusterFS as a Kubernetes PersistentVolume/PersistentVolumeClaim backing store: high-availability RabbitMQ, MySQL, and Redis

Source: Internet
Author: User
Tags: rabbitmq, haproxy, glusterfs



How to build a GlusterFS cluster is covered extensively online, so it is not repeated here.



This setup gives even a single-replica service high availability: if the node running the Pod goes down, the Kubernetes master reschedules the Pod onto another node, and the data survives because it lives on GlusterFS rather than on the node.



Of course, this is running in my own environment. Because GlusterFS transfers data over the network, there is some performance overhead, and the demands on the network are particularly high.



Small-file storage performance is also not great.



Below is a record of the RabbitMQ high-availability setup; MySQL, MongoDB, and Redis follow the same pattern.






The GlusterFS volume was created in advance and is named env-dev.
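For reference, a replicated volume like this can be created with the Gluster CLI. This is only a sketch: the brick paths and the replica count are assumptions, not taken from the original setup.

```shell
# Assumed brick paths; adjust to your servers' actual disk layout.
gluster volume create env-dev replica 2 \
  192.168.91.135:/data/brick1/env-dev \
  192.168.91.136:/data/brick1/env-dev
gluster volume start env-dev
gluster volume info env-dev
```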



Pick any client machine and mount the volume:


mount -t glusterfs 192.168.91.135:/env-dev /mnt/env/dev


Create the required directories in advance:


mkdir -p /mnt/env/dev/rabbitmq/mnesia
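To make the mount survive reboots, a matching /etc/fstab entry can be added; this is a sketch based on the mount command above (the _netdev option defers mounting until the network is up):

```
192.168.91.135:/env-dev  /mnt/env/dev  glusterfs  defaults,_netdev  0 0
```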





Write the GlusterFS Endpoints and matching Service:


# cat pv-ep.yaml 
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: env-dev
subsets:
- addresses:
  - ip: 192.168.91.135
  - ip: 192.168.91.136
  ports:
  - port: 49152
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
  namespace: env-dev
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP


Write the PV. Note that path here is the volume name followed by the subdirectory within the volume.


# cat rabbitmq-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq-pv
  labels:
    type: glusterfs
spec:
  storageClassName: rabbitmq-dir
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs
    path: "env-dev/rabbitmq/mnesia"
    readOnly: false


Write the PVC:


# cat rabbitmq-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbitmq-pvc
  namespace: env-dev
spec:
  storageClassName: rabbitmq-dir
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi


Create the Endpoints, PV, and PVC:


kubectl apply -f pv-ep.yaml
kubectl apply -f rabbitmq-pv.yaml
kubectl apply -f rabbitmq-pvc.yaml
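Before moving on, it is worth confirming that the PVC actually bound to the PV. Something like the following (using the names and namespace from the manifests above) should show the PVC in Bound status:

```shell
kubectl get endpoints glusterfs -n env-dev
kubectl get pv rabbitmq-pv
kubectl get pvc rabbitmq-pvc -n env-dev
```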











The lines highlighted in red in the original post show where the PVC is consumed in the Deployment:


 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ha-rabbitmq
  namespace: env-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ha-rabbitmq
  template:
    metadata:
      labels:
        app: ha-rabbitmq
    spec:
      #hostNetwork: true
      hostname: ha-rabbitmq
      terminationGracePeriodSeconds: 60
      containers:
      - name: ha-rabbitmq
        image: 192.168.91.137:5000/rabbitmq:3.7.7-management-alpine
        securityContext:
          privileged: true
        env:
        - name: "RABBITMQ_DEFAULT_USER"
          value: "rabbit"
        - name: "RABBITMQ_DEFAULT_PASS"
          value: "rabbit"
        ports:
        - name: tcp
          containerPort: 5672
          hostPort: 5672
        - name: http
          containerPort: 15672
          hostPort: 15672
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 15672
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 15672
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        volumeMounts:
        - name: date
          mountPath: /etc/localtime
        - name: workdir
          mountPath: "/var/lib/rabbitmq/mnesia"
      volumes:
      - name: date
        hostPath: 
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: workdir
        persistentVolumeClaim:
          claimName: rabbitmq-pvc
---

apiVersion: v1
kind: Service
metadata:
  name: ha-rabbitmq
  namespace: env-dev
  labels:
    app: ha-rabbitmq
spec:
  ports:
  - name: tcp
    port: 5672
    targetPort: 5672
  - name: http
    port: 15672
    targetPort: 15672





Create the RabbitMQ Pod and Service:


kubectl create -f ha-rabbitmq.yaml





The Pod was scheduled to the first node; the data files can be seen there.






Create a virtual host on the admin page






There is too much running in this environment to do a hard power-off of the node, so instead the Pod is deleted directly and allowed to be recreated.
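Assuming the app=ha-rabbitmq label from the Deployment above, deleting the Pod and watching where the replacement lands might look like this:

```shell
kubectl delete pod -l app=ha-rabbitmq -n env-dev
kubectl get pods -n env-dev -o wide   # check which node the new Pod is scheduled on
```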









This time the Pod was scheduled to node 0.



Look at the virtual host that you just created






Still alive.






HAProxy proxy configuration


# cat haproxy.cfg
global
chroot /usr/local
daemon
nbproc 1
group nobody
user nobody
pidfile /haproxy.pid
#ulimit-n 65536
#spread-checks 5m
#stats timeout 5m
#stats maxconn 100

######## default configuration ############
defaults
mode tcp
retries 3 # retry a failed connection up to 3 times before considering the server unavailable
option redispatch # when the server bound to a serverId dies, redirect to another healthy server
option abortonclose # under high load, abort requests that have waited in the queue too long
maxconn 32000 # default maximum number of connections
timeout connect 10s # connection timeout
timeout client 8h # client timeout
timeout server 8h # server timeout
timeout check 10s # health-check timeout
log 127.0.0.1 local0 err # log level: [err warning info debug]

######### MariaDB configuration ##################
listen mariadb
bind 0.0.0.0:3306
mode tcp
balance leastconn
server mariadb1 192.168.91.141:3306 check port 3306 inter 2s rise 1 fall 2 maxconn 1000
server mariadb2 192.168.91.142:3306 check port 3306 inter 2s rise 1 fall 2 maxconn 1000
server mariadb3 192.168.91.143:3306 check port 3306 inter 2s rise 1 fall 2 maxconn 1000

####### RabbitMqConfiguration ##################
listen rabbitmq
bind 0.0.0.0:5672
mode tcp
balance leastconn
server rabbitmq1 192.168.91.141:5672 check port 5672 inter 2s rise 1 fall 2 maxconn 1000
server rabbitmq2 192.168.91.142:5672 check port 5672 inter 2s rise 1 fall 2 maxconn 1000
server rabbitmq3 192.168.91.143:5672 check port 5672 inter 2s rise 1 fall 2 maxconn 1000

####### RedisConfig ##################
listen redis
bind 0.0.0.0:6379
mode tcp
balance leastconn
server redis1 192.168.91.141:6379 check port 6379 inter 2s rise 1 fall 2 maxconn 1000
server redis2 192.168.91.142:6379 check port 6379 inter 2s rise 1 fall 2 maxconn 1000
server redis3 192.168.91.143:6379 check port 6379 inter 2s rise 1 fall 2 maxconn 1000 
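A stats page makes it easy to see which backends HAProxy considers healthy. A minimal sketch in the same style as the config above; the port and credentials are assumptions:

```
######## stats page (optional) ##################
listen stats
bind 0.0.0.0:8888
mode http
stats enable
stats uri /stats
stats auth admin:admin
```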





Nginx proxies the administration page.
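A minimal nginx sketch for proxying the RabbitMQ management UI (port 15672 as exposed via hostPort above). The listen port and the choice of backend nodes are assumptions:

```nginx
upstream rabbitmq_mgmt {
    server 192.168.91.141:15672;
    server 192.168.91.142:15672;
    server 192.168.91.143:15672;
}
server {
    listen 80;
    location / {
        proxy_pass http://rabbitmq_mgmt;
        proxy_set_header Host $host;
    }
}
```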










