Nginx reverse proxy, load balancing, Redis session sharing, keepalived high availability



Servers used:



One Nginx primary server and one Nginx standby server; keepalived handles the switchover when the primary goes down.



Two Tomcat servers, reverse-proxied and load-balanced by Nginx via proxy_pass; a larger server cluster could be built out here.



One Redis server, used to externalize and share sessions.



Nginx primary server: 192.168.50.133

Nginx standby server: 192.168.50.135

Tomcat project server 1: 192.168.50.137

Tomcat project server 2: 192.168.50.139

Redis server: 192.168.50.140



Note that you need to configure firewall rules for access between the machines, or shut down the firewall.



Common installation steps first:



Five servers need to be simulated in total, using VMware, all running 64-bit CentOS 6.5. The JDK is installed on all of them; I use JDK 1.8.



1. Install the VMware virtual machine, install the Linux system (64-bit CentOS 6.5 here), and install a command-line tool and a file-upload tool; I use SecureCRT and SecureFX. Installation tutorials are not repeated here; there are plenty online.



If you have a problem with this step, search for it: www.baidu.com.






2. Install the JDK on Linux:



Install the JDK: uninstall the OpenJDK version, upload and unpack the JDK, and configure the environment variables. Reference: http://jingyan.baidu.com/article/ab0b56308966acc15afa7d18.html



First, Nginx reverse proxy and load balancing:



Schema diagram:






Three servers are needed: one Nginx server, and two servers on which the project is actually deployed. 192.168.50.133 is the primary Nginx, and 192.168.50.137 and 192.168.50.139 are the two Tomcat servers.



First install Tomcat on the two servers: this is also simple, so not much to say.



Install Tomcat: upload and unpack it; in the bin directory, startup.sh starts it and shutdown.sh stops it.



Configure the firewall ports: edit with vim /etc/sysconfig/iptables and open port 8080, port 80, and whatever other common ports you need. Shutting down the firewall entirely is not recommended.
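The rule lines themselves were shown in a screenshot in the original. A sketch of the two entries to add to /etc/sysconfig/iptables (above the final REJECT rule):

```
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
```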






After editing, run service iptables restart to reload the firewall configuration.






If configuring the firewall is too much trouble for your own testing, turn it off: service iptables stop stops it until the next reboot (after a reboot it is active again). To disable it completely across reboots, also run chkconfig iptables off. Note that sometimes a service is up but access still fails; that may well be a firewall problem.



Start Tomcat and access 192.168.50.137:8080 and 192.168.50.139:8080; if the Tomcat home page opens, it is successful.



Then write a test project and deploy it to both Tomcats. In Eclipse, create a new web project named TestProject, create a new JSP page index.jsp under WebApp, and add the following content:
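The page content was shown as a screenshot in the original. A minimal sketch of an index.jsp that serves the purpose here, printing which server responded and the session ID (the later session-sharing test looks at exactly these two things); the per-server label is an assumption:

```jsp
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<html>
  <body>
    <%-- label this copy per server, e.g. 137server1 on the 137 machine --%>
    Server: 137server1<br/>
    SessionID: <%= session.getId() %>
  </body>
</html>
```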






In the project's web.xml, move <welcome-file>index.jsp</welcome-file> up so it is the first entry in the welcome-file list.



Then right-click and export it as a war package, TestProject.war, and upload the war package to the Tomcat webapps directory on both servers.






Then modify Tomcat's server.xml file in the Tomcat conf directory. You can use the notepad++ NppFTP plug-in to connect directly to Linux and edit the file with notepad++; when saving, remember to use UTF-8 without BOM.



Modify the Engine tab to add Jvmroute to identify which server the Nginx is accessing tomcat,137 server identified as 137server1,139 server identity 139server2






In both Tomcat server.xml files, add a Context tag inside the Host tag: <Context path="" docBase="TestProject"/>. path is the access path and docBase is the project name; this makes TestProject the project being served.






At this point, restart Tomcat and access 192.168.50.137:8080 and 192.168.50.139:8080: the index.jsp content is displayed. Access on the two servers appears as follows.



At this point, two Tomcat servers are completed.



Install Nginx on the Nginx host 192.168.50.133:



First install GCC using yum, then install pcre, zlib, and openssl:



yum install -y gcc
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl openssl-devel



Create a new nginx-src directory under /usr/local/, put nginx-1.8.0.tar.gz there, and unpack it:



tar -zxvf nginx-1.8.0.tar.gz



Enter the unpacked directory






Then execute the following commands in order:



./configure
make
make install



At this point Nginx is installed; the installation directory is /usr/local/nginx, and Nginx occupies port 80 by default.






In it, the sbin directory holds the Nginx executable, and conf/nginx.conf is the configuration file loaded by default.



Start Nginx:



./sbin/nginx



Close Nginx:



./sbin/nginx -s stop



After starting Nginx, access 192.168.50.133:80: the Nginx welcome page is shown.






At this point, the Nginx installation is complete.



3. Reverse proxy and load balanced configuration



Two project servers are available, 192.168.50.137 and 192.168.50.139, each running a Tomcat on port 8080, and Nginx runs on 192.168.50.133. Once Nginx is configured, accessing 192.168.50.133:80 reaches Nginx, which proxies the request to a random one of 192.168.50.137:8080 and 192.168.50.139:8080: that is Nginx's reverse proxy function. At the same time, forwarding requests through Nginx guarantees a single entry point, and spreading all requests across the two servers reduces the load pressure on either one. When there is a large volume of requests, you can build out a large number of servers and use Nginx on the portal proxy server to forward to them: that is the load balancing function.



The configuration goes in the nginx.conf file under the conf directory of the Nginx installation; the key parts are the upstream server list and the proxy_pass directive.
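The original showed the configuration as a screenshot. A minimal sketch of the relevant part, assuming a hypothetical upstream name and the two Tomcat addresses from this article:

```nginx
http {
    # pool of backend Tomcat servers; round-robin distribution by default
    upstream tomcat_cluster {
        server 192.168.50.137:8080;
        server 192.168.50.139:8080;
    }

    server {
        listen       80;
        server_name  192.168.50.133;

        location / {
            # forward every request to one of the upstream servers
            proxy_pass http://tomcat_cluster;
        }
    }
}
```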















Start the two Tomcats and restart Nginx:



Accessing 192.168.50.133:80 will randomly hit one of 192.168.50.137:8080 and 192.168.50.139:8080. (Problem: on each refresh of the Nginx server address the SessionID changes; the session is not shared.)



Nginx Polling Policy:



When Nginx load balances across multiple servers, the polling (round-robin) policy is used by default:



Common policies:



1. Polling



Each request is assigned to a different backend server in order; if a backend server goes down, it can be removed automatically.



2, Weight



Specifies the polling odds: weight is proportional to the access ratio and is used when backend server performance is uneven; the higher the weight, the more hits.



For example, with a polling probability of 2:1:



upstream bakend {
    server 192.168.0.14 weight=2;
    server 192.168.0.15 weight=1;
}



3. ip_hash



Each request is allocated according to the hash of the client IP, so each visitor has a fixed backend server; this can solve the session problem.



For example:



upstream bakend {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}



Other strategies can be looked up; Nginx has many other configurable items, such as static resource caching and redirection. Readers who want to go deeper should study further.



Nginx Configuration Detailed: http://blog.csdn.net/tjcyjd/article/details/50695922



A practical problem: solved, but not fully understood; recorded here.



192.168.50.133:80 sits behind an extranet mapping: the extranet address 55.125.55.55:5555 is mapped to 192.168.50.133:80. Accessing 55.125.55.55:5555 maps to 192.168.50.133:80, which then forwards to 192.168.50.137:8080 or 192.168.50.139:8080. But at that point images, JS, CSS, and other static files could not be accessed. This was solved in two ways.



<1>. Mapping non-80 ports



Have 55.125.55.55:5555 map to a non-80 port on 192.168.50.133, for example map 55.125.55.55:5555 to 192.168.50.133:5555, and then add the following to the Nginx configuration file. (I do not fully understand why this works.)
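The screenshot is missing from the original. A sketch of the idea is simply to make Nginx listen on the same port the extranet address maps to, keeping the port consistent in redirects and asset URLs; the upstream name is an assumption standing in for the article's backend list:

```nginx
server {
    # listen on the port that the extranet address 55.125.55.55:5555 maps to
    listen       5555;
    server_name  192.168.50.133;

    location / {
        proxy_pass http://tomcat_cluster;  # upstream defined elsewhere in nginx.conf
    }
}
```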









At this point, accessing 55.125.55.55:5555 maps to 192.168.50.133:5555 and then forwards to 192.168.50.137:8080 or 192.168.50.139:8080, and static files can be accessed.



<2> Use a domain name to forward using Nginx on an extranet server



The domain name test.baidubaidu.com is bound to 55.125.55.55; Nginx is then used on the 55.125.55.55 server:



...
location / {
    # add a check: if the domain is test.baidubaidu.com, forward to 192.168.50.133:80
    # (untested, but it seems to be written like this; $hostname is an nginx variable holding the host name)
    if ($hostname = "test.baidubaidu.com") {
        proxy_pass http://192.168.50.133:80;
    }
    #proxy_redirect off;   # used with non-80 ports
    # pass the user information received by the proxy server on to the real server (I do not fully understand this)
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    add_header Access-Control-Allow-Origin *;
}
...



That was the introduction to Nginx reverse proxy and load balancing. After this study, Nginx turns out to be truly profound; a single configuration file alone ...



Second, session sharing problem:



Because Nginx assigns requests randomly, suppose a user's login request is assigned to 192.168.50.137:8080; after the login operation, that server holds the user's login session. The user is then redirected to the home page or personal center, and if that request is assigned to 192.168.50.139:8080, this server has no session for the user, so the user appears logged out. Nginx load balancing therefore leads to the session sharing problem.



Workaround:



1. Nginx provides the ip_hash strategy, which hashes the user's IP so the user is always assigned to the same server; since the user always hits the same server, the session problem does not arise. This is one way to solve session sharing, also known as sticky sessions. However, if that Tomcat server dies, its sessions are lost with it. So the better solution is to externalize the session.



2. Store the session in Memcached or Redis: the session is extracted into a memory-level database, solving the session sharing problem, and reads are very fast as well.






In this example:






Redis Resolve session Sharing:



Build Redis on the Redis server 192.168.50.140; Redis defaults to port 6379.



Redis Build:



Redis depends on GCC, so install that first:



yum install -y gcc-c++



Download Redis; I use redis-3.2.1.tar.gz. Upload it to /usr/local/redis-src/ on Linux and unpack it.



Enter the unpacked directory redis-3.2.1 and run make to compile.



Install to directory/usr/local/redis



Perform:



make PREFIX=/usr/local/redis install



After the installation completes, copy the Redis configuration file to the installation directory. redis.conf is the Redis configuration file and lives in the Redis source directory; the port defaults to 6379.



To execute a command:



cp /usr/local/redis-src/redis-3.2.1/redis.conf /usr/local/redis/



Starting and shutting down Redis in the Redis installation directory:



Start:



./bin/redis-server ./redis.conf






This is called a foreground start: you must keep the current window open, and if you exit with Ctrl+C, Redis exits too. Not recommended.



So start it in the background instead:



First modify the daemonize value in redis.conf: open the file, see that the default is no, change it to daemonize yes, and start. You can also change the default Redis port 6379 to another value in the configuration file.
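The two settings mentioned, as they would appear in redis.conf:

```
daemonize yes    # run Redis as a background daemon instead of in the foreground
port 6379        # change this value to move Redis off the default port
```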






Close Redis:



./bin/redis-cli shutdown



At this point, the Redis server is completed.



Tomcat and Redis integration realize session sharing:



If the environment is Tomcat 7 + JDK 1.6:



In the Tomcat directory of all servers that need to share the session:



Add the following three jar packages to the lib directory; note that the versions had best match, otherwise errors are extremely likely. The set below tested OK:






In the conf directory, add to context.xml: configure the Redis service.






If the environment is Tomcat 7 + JDK 1.7 or 1.8:



In the Tomcat directory of all servers that need to share the session:



Add the following three jar packages to the lib directory; these tested OK:






In the conf directory, add to context.xml: configure the Redis service.
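The configuration screenshot is missing from the original. Assuming the commonly used tomcat-redis-session-manager jars (the exact class names depend on which jars you uploaded), the addition to conf/context.xml looks roughly like this:

```xml
<Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve" />
<Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
         host="192.168.50.140"
         port="6379"
         database="0"
         maxInactiveInterval="1800" />
<!-- host/port point at the Redis server from this article;
     maxInactiveInterval is in seconds, as explained later -->
```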






In my test the environment was JDK 1.8 + Tomcat 7. Add the jar packages to both the 137 and 139 Tomcats and configure them as follows:



Upload Jar Pack






Modify context.xml






Start the Redis service, restart all Tomcats, start Nginx, and refresh the Nginx page: the SessionID value shown on the two Tomcat pages stays the same. Shut down one Tomcat and the SessionID on the Nginx page is still unchanged, showing that the session is shared.



Problem:



Access may error out at this point, with Redis inaccessible. This is Redis's security mechanism: by default only 127.0.0.1 may access it. Find bind 127.0.0.1 in redis.conf and change this IP to the visitor's IP.



If there are multiple visitors, you can also comment out bind 127.0.0.1, then find protected-mode in the configuration file and change protected-mode yes to protected-mode no to turn off Redis's protected mode.
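In redis.conf the two changes look like this:

```
# bind 127.0.0.1      # commented out so clients other than localhost may connect
protected-mode no     # protected mode off; only do this on a trusted network
```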



Detailed reference to this: http://www.cnblogs.com/liusxg/p/5712493.html



Advice from an expert: two points to watch:



1. With this configuration, using Redis as the session database, any object put into the session must implement the java.io.Serializable interface (with Memcached, objects can be stored without implementing Serializable).



The reason is easy to understand: the tool classes Tomcat uses to put the session into Redis store it via JDK serialization, and session.setAttribute(String key, Object value) stores Object types.



For an object to go into Redis and come back out, it can only be serialized for storage and then deserialized on retrieval.



So any object that we store in the session must implement the serialization interface.



2. With this configuration, using Redis as the session store, the unit of the web application's session timeout becomes seconds rather than the original minutes.



The reason: the tool class Tomcat uses to put the session into Redis converts the Tomcat container's time when storing it,



and setting an expiry time in Redis is done in seconds (the expire command sets a Redis key's expiry time). So in the context.xml configuration file we need to set the session expiry time ourselves (a default of 60 would mean 60 seconds; use 1800 for 30 minutes). This is important.



Please take note.



context.xml configuration instructions:






Third, keepalived high availability:



Schema diagram:






(Never mind the mistakes in the diagram; it's ugly.)



Following the approach above, the line from the Nginx primary to the servers is already set up. In the same way, use the Nginx standby 192.168.50.135 to build Nginx, also proxying the 192.168.50.137 and 192.168.50.139 servers. Having done it once, this is easy.



Install Nginx on 192.168.50.135 and configure it the same way; no need to repeat. The standby Nginx configuration is as follows:



The configuration is the same as above; only the IP of the listening server differs.






So there are now effectively two Nginx setups proxying the same servers. Why two?



Suppose there is only one Nginx, and that Nginx server dies. What then?



So you need a backup nginx.



Under normal circumstances, the primary Nginx acts as the reverse proxy server. If that Nginx server dies, the system can switch to the backup machine immediately so users can still get through; once operations staff repair the primary Nginx server's failure, service can automatically switch back to the primary. keepalived monitors the two servers. In the normal state, the Nginx primary server IP (192.168.50.133) is bound to a keepalived-defined virtual IP (I set it to 192.168.50.88), and Nginx can be reached through this virtual IP; the standby (192.168.50.135) does nothing. At a short interval (set to 1 second) keepalived on the primary tells the standby, "don't worry, I'm still alive". If the host suddenly dies, then after a bit more than a second the standby has received no message, so it immediately takes over the host's role: keepalived binds the virtual IP to the standby, and the site keeps providing service.



If the host comes back to life (operations staff have fixed the fault), the standby receives the "I'm alive" message again, hands control back to the host, and the virtual IP is bound to the host again. That is roughly the process, as I understand it.






First install keepalived on both Nginx servers (primary and standby):



Download: the RPM installation is used; 32- and 64-bit packages differ, so don't pick the wrong one.



keepalived-1.2.7-3.el6.x86_64.rpm



openssl-1.0.1e-30.el6_6.4.x86_64.rpm



openssl-1.0.1e or above is required. If the version already satisfies this (installing Nginx already installed OpenSSL, and a yum install should be compliant), there is no need to install OpenSSL again. Use rpm -q openssl to view the current OpenSSL version. Mine is already 1.0.1e-48, so I did not install it.



Upload the two RPM packages to both Nginx servers, go to the upload directory, and run the following commands to install. --nodeps ignores dependency packages; of course it is best to have the dependencies in place. Remove --nodeps to see the error and which dependency packages are needed.



If you need to install OpenSSL



rpm -Uvh --nodeps ./openssl-1.0.1e-30.el6_6.4.x86_64.rpm



Install keepalived:



rpm -Uvh --nodeps ./keepalived-1.2.7-3.el6.x86_64.rpm



After installation, the file keepalived.conf in the /etc/keepalived/ directory is the core configuration file of the keepalived service:



Key point: for the keepalived configuration, configuring the top part of the configuration file as below is enough; the content further down the default file can be left unused, and I have not studied the other parts.



First configure keepalived on the host 192.168.50.133, as below:









keepalived configuration of the standby machine 192.168.50.135:



Notes on the standby configuration: state must be changed to BACKUP, priority must be lower than the master's, and virtual_router_id must have the same value as the master's.
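The configuration screenshots are missing from the original. A sketch of the core vrrp_instance section, with the master's values shown and the standby differences noted in comments; the interface name eth0 is an assumption:

```
vrrp_instance VI_1 {
    state MASTER            # on the standby: BACKUP
    interface eth0          # check the NIC name with ifconfig
    virtual_router_id 51    # must be identical on master and standby
    priority 100            # standby must be lower, e.g. 99
    advert_int 1            # heartbeat interval in seconds
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.88       # the virtual IP that clients access
    }
}
```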






At this point, the keepalived configuration is complete.



keepalived start and shutdown commands:



service keepalived start
service keepalived stop



Start the two Nginx instances, start the host's keepalived, and start the standby's keepalived service.



At this point the Nginx host is serving and the standby is idle; the virtual IP is 192.168.50.88. On both host and standby run the command



ip addr



Can be found:



Host: you can see that 192.168.50.133 carries the virtual IP 192.168.50.88. Entering 192.168.50.88 in the browser accesses the primary Nginx 192.168.50.133, which then forwards to the Tomcat servers.






Browser access to virtual ip:192.168.50.88, the effect is as follows






Standby: running ip addr shows that the standby Nginx has not bound the virtual IP.



The above is the initial state of the situation, but also the normal service situation.



Now test high availability. Suppose the host Nginx server hangs, simulated by shutting down the Nginx host or stopping its keepalived service. keepalived on the host is then dead and can no longer tell the standby it is alive; after more than a second without hearing from the host, the standby immediately takes over the virtual IP. You can also configure the switchover in the configuration file to send mail, so when the team receives the mail they know the host is dead and can troubleshoot it immediately.



Stop the keepalived service on the host with service keepalived stop, then view the virtual IP bindings.



Host hung: you can see the virtual IP is no longer bound to the host.






Standby: the virtual IP is now bound to the standby. Although the host hung, service switched to the standby (the gap between the fault and the switch is at most about a second); with the virtual IP bound to the standby, accessing the virtual IP reaches the standby Nginx, which forwards to the web servers. That is high availability.






Operations staff receive the mail and troubleshoot the host. After the fix (simulated by starting the keepalived service), the host tells the standby it is alive, and the standby hands control back to the host (switching back to the host's Nginx service):



Host keepalived service started, i.e. after the host is repaired: you can see the virtual IP is automatically bound to the host again.






Standby after the host comes back: management is handed back and the virtual IP switches to the host, so the standby no longer binds it. Apparently the switch back does not happen the instant the keepalived service starts; the service needs a moment, but this does no harm, and during that time the standby still binds the virtual IP.






This is the keepalived high availability simulation.



Note the problem:



When the host that hung recovers, Nginx must also be started on it; otherwise, even though the virtual IP switches back to the host, the host has no Nginx to do the forwarding. So add Nginx to the boot sequence.



Four, starting Nginx on boot:



In the Linux system, create an nginx file in the /etc/init.d/ directory using the following command (learn the vim commands yourself, haha):



vi /etc/init.d/nginx



Put the following into the file. Note: adjust the paths to your own. The nginxd value is the path of the Nginx binary used to start Nginx, nginx_config is the path of the Nginx configuration file nginx.conf, and nginx_pid is the path of nginx.pid; if installed my way, it is in logs inside the Nginx installation directory.



#!/bin/bash
# nginx    Startup script for the Nginx HTTP Server
# it is v.0.0.2 version.
# chkconfig: - 85 15
# description: nginx is a high-performance web and proxy server.
#              It has a lot of features, but it's not for everyone.
# processname: nginx
# pidfile: /usr/local/nginx/logs/nginx.pid
# config: /usr/local/nginx/conf/nginx.conf

nginxd=/usr/local/nginx/sbin/nginx
nginx_config=/usr/local/nginx/conf/nginx.conf
nginx_pid=/usr/local/nginx/logs/nginx.pid
RETVAL=0
prog="nginx"

# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 0
[ -x $nginxd ] || exit 0

# Start nginx daemons functions.
start() {
    if [ -e $nginx_pid ]; then
        echo "nginx already running...."
        exit 1
    fi
    echo -n $"Starting $prog: "
    daemon $nginxd -c ${nginx_config}
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch /var/lock/subsys/nginx
    return $RETVAL
}

# Stop nginx daemons functions.
stop() {
    echo -n $"Stopping $prog: "
    killproc $nginxd
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f /var/lock/subsys/nginx /var/run/nginx.pid
}

# Reload nginx service functions.
reload() {
    echo -n $"Reloading $prog: "
    #kill -HUP `cat ${nginx_pid}`
    killproc $nginxd -HUP
    RETVAL=$?
    echo
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    reload)
        reload
        ;;
    restart)
        stop
        start
        ;;
    status)
        status $prog
        RETVAL=$?
        ;;
    *)
        echo $"Usage: $prog {start|stop|restart|reload|status|help}"
        exit 1
esac
exit $RETVAL



Then set execute permission on the file with the following command, meaning all users can execute it:



chmod a+x /etc/init.d/nginx



Finally add Nginx to the rc.local file, so that Nginx starts by default on boot:



vi /etc/rc.local



Add to



/etc/init.d/nginx start






Save and exit; it takes effect on the next reboot, and Nginx starts at boot. Tested and correct.



Five, resolving the Nginx process and keepalived problem:



keepalived determines whether the server is down by detecting whether the keepalived process exists. If the keepalived process is alive but the nginx process is gone, keepalived will not switch to the standby, even though with Nginx hung no proxying can be done.



So keep testing whether nginx is still there; if not, stop keepalived too, so they die together.



Note: this only needs to be done on the host; the standby does not need to detect nginx, because it is basically the host that is serving.



Solution: write a script to monitor whether the nginx process exists; if nginx does not exist, kill the keepalived process.



Note: keepalived should not start at boot: if it did, and keepalived came up faster than Nginx, the script's detection would stop keepalived again. So only Nginx needs to start at boot; after the host starts, bring up the keepalived service manually.



Write the Nginx process detection script (check_nginx_dead.sh) on the main Nginx; create the script in the keepalived configuration file directory:



vi /etc/keepalived/check_nginx_dead.sh



Get the following content into the script file, as follows:



#!/bin/bash
# If there is no nginx in the process list, kill the keepalived process
A=`ps -C nginx --no-header | wc -l`   # count nginx processes and assign to variable A
if [ $A -eq 0 ]; then                 # if the count is zero, no nginx process exists
    service keepalived stop           # then end the keepalived process
fi



Give the script execute permission, otherwise it cannot run. (This is where I was stuck for half an hour.)



chmod a+x /etc/keepalived/check_nginx_dead.sh



Test the script first:



Stop Nginx. At this point keepalived is still running, so no switchover happens, and the virtual IP cannot reach the web servers.






Then execute the script:



The script on the host detects that Nginx is gone and stops keepalived; the output shows it did stop, and the host no longer binds the virtual IP.






Standby: Successfully binding virtual IP






So we just need the script to execute continuously, i.e. keep detecting whether the nginx process is alive; if not, stop the host's keepalived directly and switch to the standby, guaranteeing access to the web servers.



Modify the keepalived configuration file keepalived.conf as follows to add the script detection. Only the relevant part needs to be added in the right place; the script then executes every two seconds, and once the host's Nginx is found dead, keepalived stops and the standby takes over:



! Configuration File for keepalived

# this is the global configuration
global_defs {
   # email recipients to notify on a switchover, one per line
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from alexand.cassen@firewall.loc
   # SMTP server address
   #smtp_server 192.168.200.1
   # SMTP connect timeout
   #smtp_connect_timeout 30
   # an identifier for the machine running keepalived
   router_id LVS_DEVEL
}

vrrp_script check_nginx_dead {
    # path of the monitoring script
    script "/etc/keepalived/check_nginx_dead.sh"
    # run the script every 2 seconds
    interval 2
}

vrrp_instance VI_1 {
    # BACKUP on the backup machine, MASTER on the master
    state MASTER
    # generally eth0; on Linux use the ifconfig command to check the current server's NIC name
    interface eth0
    # must be the same within the same instance (i.e. the same primary/standby group)
    virtual_router_id 51
    # MASTER's weight must be higher than BACKUP's: MASTER 100, BACKUP e.g. 99
    priority 100
    # interval between MASTER and BACKUP synchronization checks, in seconds; set to 1 second
    advert_int 1
    authentication {
        # authentication method between master and standby; PASS is plaintext password authentication
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_nginx_dead
    }
    virtual_ipaddress {
        192.168.50.88
    }
}

