The mongos routers at the entry point of a MongoDB sharded cluster have no built-in failover mechanism. The official recommendation is to deploy a mongos alongside each application server, which means every application server ends up running its own mongos instance, and that is inconvenient to manage. Alternatively, LVS or HAProxy can provide failover across multiple mongos instances; just note that client affinity must be used, so that connections from the same client are always routed to the same mongos. The HAProxy configuration below does exactly that.
global
    chroot /data/app_platform/haproxy/share/
    log 127.0.0.1 local3 info
    daemon
    user haproxy
    group haproxy
    pidfile /var/run/haproxy.pid
    nbproc 1
    stats socket /tmp/haproxy level admin
    stats maxconn 20
    node master_loadbalance1
    description lb1
    maxconn 65536
    nosplice
    spread-checks 3

defaults
    log global
    mode tcp
    option abortonclose
    option allbackups
    option tcpka
    option redispatch
    retries 3
    timeout check 60s
    timeout connect 600s
    timeout queue 600s
    timeout server 600s
    timeout tarpit 60s
    timeout client 600s

frontend mongos_pool 0.0.0.0:28018
    mode tcp
    maxconn 32768
    no option dontlognull
    option tcplog
    log global
    option log-separate-errors
    default_backend mongos_pool

backend mongos_pool
    mode tcp
    balance source
    default-server inter 2s fastinter 1s downinter 5s slowstart 60s rise 2 fall 5 weight 30
    server gintama-xxx-mongos1 192.168.100.74:28018 check maxconn 2000
    server gintama-xxx-mongos2 192.168.100.75:28018 check maxconn 2000
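Because both mongos servers are declared with check, HAProxy probes them and takes a failed instance out of rotation automatically. Their runtime status can be inspected through the admin socket declared in the global section (stats socket /tmp/haproxy level admin). The following is a minimal sketch, assuming Python 3 on the HAProxy host and the standard CSV layout of HAProxy's "show stat" output:

import socket

def haproxy_stats(sock_path="/tmp/haproxy"):
    """Send "show stat" to the HAProxy admin socket and return the raw CSV."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(b"show stat\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

if __name__ == "__main__":
    for line in haproxy_stats().splitlines():
        fields = line.split(",")
        # Column 0 is the proxy name, column 1 the server name, column 17 the status (UP/DOWN).
        if len(fields) > 17 and fields[0] == "mongos_pool" and fields[1] not in ("FRONTEND", "BACKEND"):
            print(fields[1], fields[17])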
Note: use balance source, so that connections from the same client IP address always go to the same mongos instance.
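On the application side, the only change is the connection target: instead of listing individual mongos addresses, the driver connects to the HAProxy frontend on port 28018. A minimal sketch with pymongo, assuming 192.168.100.80 as a hypothetical address for the HAProxy node; replace it with the real frontend or VIP address:

from pymongo import MongoClient

# Connect to the HAProxy frontend instead of a specific mongos.
# 192.168.100.80 is a hypothetical address for the HAProxy node.
# With "balance source", all connections from this host's IP land on
# the same mongos, which keeps per-connection state consistent.
client = MongoClient("mongodb://192.168.100.80:28018/",
                     serverSelectionTimeoutMS=5000)
print(client.admin.command("ping"))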
This article is from the Linux SA John blog; please keep this source when reposting: http://john88wang.blog.51cto.com/2165294/1620384