Building a Kubernetes (k8s) 1.9 cluster by hand (part 3): authentication, authorization, and service discovery

Tags: etcd, k8s, asymmetric encryption

1. Understanding authentication and authorization

1.1 Why authentication?

To understand authentication, we should start from the problems it is meant to solve and the problems it is meant to prevent.
What problems does it prevent? Does it prevent someone from breaking into your cluster, getting root on your machines, and still keep the cluster safe? No. Once an attacker has root, they can do whatever they want.
In fact, network security is about how to defend under certain assumptions. One very important assumption is that the network (or IP) between two communicating nodes is untrusted: traffic may be eavesdropped on or tampered with by a third party. It is like passing a note to the girl you like at school: other classmates may read it along the way, and the content may even be changed from "I like you" to "I don't like you". Of course, this assumption is not arbitrary; it is drawn from the current state of network technology and from problems that actually occur. Kubernetes's authentication is built on the same assumption.

1.2 Sorting out the concepts

To solve the problems above, Kubernetes does not need to invent its own approach. These are network-security problems that every service faces, and the industry already has mature solutions. Let's look at those solutions and the related concepts.

    • Symmetric encryption / asymmetric encryption
      These two concepts come from cryptography and are not easy to grasp if you have never dealt with them before. For a vivid explanation, see the article "How to explain asymmetric encryption in plain words"; a minimal openssl sketch also follows this list.
    • SSL/TLS
      Once we understand symmetric and asymmetric encryption, we can look at SSL/TLS. A very good introductory article is "Overview of the operating mechanism of the SSL/TLS protocol".
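To make the asymmetric-encryption idea concrete, here is a minimal openssl sketch (the file names are illustrative and have nothing to do with the cluster setup): the public key can be handed to anyone, but only the holder of the private key can decrypt.

# generate a key pair: a private key and its public key
$ openssl genrsa -out demo-private.pem 2048
$ openssl rsa -in demo-private.pem -pubout -out demo-public.pem
# anyone with the public key can encrypt the note...
$ echo "I like you" > note.txt
$ openssl rsautl -encrypt -pubin -inkey demo-public.pem -in note.txt -out note.enc
# ...but only the private key holder can read it
$ openssl rsautl -decrypt -inkey demo-private.pem -in note.enc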
1.3 What is authorization?

The concept of authorization is much simpler: it describes what kind of user has what kind of permission, and roles are usually the link that ties the two together. A role has a set of permissions on one side and a group of users on the other, which establishes the relationship between users and permissions.

2. Authentication and authorization in Kubernetes

Almost all operations on a Kubernetes cluster go through the kube-apiserver component, which provides an HTTP RESTful API for clients both inside and outside the cluster. It is important to note that authentication and authorization only apply to the HTTPS form of the API: if a client connects to kube-apiserver over plain HTTP, no authentication or authorization is performed. So you can, for example, let components inside the cluster talk to each other over HTTP while access from outside the cluster goes through HTTPS; this adds security without adding too much complexity.
Access to the apiserver goes through three steps: the first two are authentication and authorization, and the third is admission control. Admission control can also improve security to some extent, but it mainly serves resource management.
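As a quick illustration (a hedged sketch that uses the admin client certificate we will generate in section 8 and the IP used throughout this article), an authenticated request to the secure port looks like this; the same API served over the insecure HTTP port would require no authentication or authorization at all:

$ curl --cacert /etc/kubernetes/ca/ca.pem \
       --cert /etc/kubernetes/ca/admin/admin.pem \
       --key /etc/kubernetes/ca/admin/admin-key.pem \
       https://192.168.1.102:6443/api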

2.1 Authentication in Kubernetes

Kubernetes provides a variety of authentication methods, such as client certificates, static tokens, static password files, service account tokens, and so on. You can enable one or more of them at the same time; passing any one of them counts as authenticated. Below are a few of the common authentication methods.

    • Client certificate authentication
      Client certificate authentication is also known as TLS mutual (two-way) authentication: the server and the client verify each other's certificates and, once both check out, negotiate the encryption scheme for the connection.
      To use this scheme, the api-server needs to be started with the --client-ca-file option.
    • Bootstrap token
      When there are very many worker (node) machines, manually configuring TLS authentication for each one is tedious. In that case we can use bootstrap token authentication: enable experimental-bootstrap-token-auth on the api-server, and once a client authenticates with a matching pre-defined token, the certificate information is automatically issued to the node. Of course, the bootstrap token is a general mechanism that can be used in other scenarios as well.
    • Service account token authentication
      In some cases we want to access the api-server from inside a pod, to fetch information about the cluster or even to modify it. For this, Kubernetes provides a special authentication method: the service account. Like pod, service, and deployment, a service account is a resource in the Kubernetes cluster, and users can create their own.
      A service account mainly carries three things: namespace, token, and CA. The namespace specifies the pod's namespace, the CA is used to validate the api-server's certificate, and the token is used as the authentication credential. They are all mounted into the pod's file system; a small sketch of what that looks like from inside a pod follows this list.
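From inside a running container, the service account credentials sit at the standard mount path, and the apiserver address comes from the injected environment variables (a sketch, run inside the pod):

# the service account credentials are mounted at a fixed path inside the container
$ ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt  namespace  token
# they can then be used to call the apiserver from within the pod
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
       -H "Authorization: Bearer ${TOKEN}" \
       https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api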
2.2 Authorization in Kubernetes

The role-based access control (RBAC) mechanism introduced in Kubernetes 1.6 allows cluster administrators to perform fine-grained access control over the resources available to a specific user or service account. In RBAC, permissions are associated with roles, and users gain those permissions by becoming members of the appropriate roles, which greatly simplifies permission management. In an organization, roles are created for the various jobs; users are assigned roles according to their responsibilities and qualifications, and can easily be moved from one role to another.
Kubernetes currently supports a series of authorization mechanisms, but thanks to the community's investment and preference, RBAC is the better choice. How RBAC concretely shows up in the Kubernetes system is something we will gradually explore during the deployment below.
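As a minimal, hedged sketch of the RBAC model (the names here are illustrative, not part of this deployment), a role that may read pods can be created and bound to a user with kubectl:

# a role carrying the "read pods" permissions in the default namespace
$ kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods -n default
# binding the role to a hypothetical user "alice" gives her exactly those permissions
$ kubectl create rolebinding alice-reads-pods --role=pod-reader --user=alice -n default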

2.3 Admission control in Kubernetes

Admission control is essentially a gatekeeper. When a request goes through the Kubernetes API, the order is: authentication and authorization first, then admission control, and only then does the operation reach the target object. The admission control code lives in the api-server and must be compiled into the binary to take effect.
When a request arrives at the cluster, every admission controller runs in a fixed order. If any admission controller rejects the request, the whole request fails immediately and the user is shown the corresponding error message.
Common admission controllers are listed below; a hedged example of the apiserver flag that enables them follows the list.

    • AlwaysAdmit: allows all requests.
    • AlwaysDeny: rejects all requests; mostly used in test environments.
    • ServiceAccount: automates service accounts and assists them; for example, if a pod has no serviceAccount attribute it adds the default one, and it makes sure the service account referenced by a pod always exists.
    • LimitRanger: observes all requests and makes sure they do not violate the constraints already defined in the namespace's LimitRange objects. If you use LimitRange objects in Kubernetes, you must enable this plugin.
    • NamespaceExists: observes all requests and rejects any request that refers to a namespace that does not exist.
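Which controllers actually run is decided by an apiserver flag. In Kubernetes 1.9 this looks roughly like the fragment below (a hedged example; the plugin list here is illustrative, check your own kube-apiserver.service for the real one):

# fragment of the kube-apiserver command line
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota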
3. Environment preparation

3.1 Stop the original Kubernetes services

Before we start, we need to stop the services of the basic cluster from the previous article, including the services, deployments, and pods we created, as well as all running Kubernetes components.

# delete services
$ kubectl delete services nginx-service
# delete deployments
$ kubectl delete deploy kubernetes-bootcamp
$ kubectl delete deploy nginx-deployment
# stop the services on the worker nodes
$ service kubelet stop && rm -fr /var/lib/kubelet/*
$ service kube-proxy stop && rm -fr /var/lib/kube-proxy/*
$ service kube-calico stop
# stop the services on the master node
$ service kube-calico stop
$ service kube-scheduler stop
$ service kube-controller-manager stop
$ service kube-apiserver stop
$ service etcd stop && rm -fr /var/lib/etcd/*
3.2 Build configuration (all nodes)

As with the basic environment, we need to generate all the configuration files, this time for the kubernetes-with-ca variant.

$ cd ~/kubernetes-starter
# edit the configuration as prompted in the file
$ vi config.properties
# generate the configuration
$ ./gen-config.sh with-ca
3.3 Installing Cfssl (all nodes)

cfssl is a very handy CA tool that we use to generate certificates and key files.
The installation is fairly simple, as follows:

# download
$ wget -q --show-progress --https-only --timestamping \
    https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
    https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# make the binaries executable
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
# move them into the bin directory
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
# verify
$ cfssl version
3.4 Generating the root certificate (master node)

The root certificate is the root of the certificate trust chain. The precondition for the components to communicate with each other is that there exists a certificate everyone trusts (the root certificate), and that every certificate they use has been issued by this root certificate.

# everything related to certificates goes here
$ mkdir -p /etc/kubernetes/ca
# prepare the configuration files for generating the certificate
$ cp ~/kubernetes-starter/target/ca/ca-config.json /etc/kubernetes/ca
$ cp ~/kubernetes-starter/target/ca/ca-csr.json /etc/kubernetes/ca
# generate the certificate and key
$ cd /etc/kubernetes/ca
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# the following files are produced (what we ultimately want are ca-key.pem and ca.pem: one key, one certificate)
$ ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
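As an optional sanity check, you can inspect the freshly generated root certificate with either cfssl or openssl:

$ cfssl certinfo -cert ca.pem
$ openssl x509 -in ca.pem -noout -text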
4. Retrofit etcd

4.1 Prepare the certificate

The etcd node needs to serve other components, which means it has to prove its identity to them, so it needs a server certificate for the service it listens on. When there are multiple etcd nodes, it also needs a client certificate to interact with the other nodes of the etcd cluster. Of course, you can use the same certificate for both client and server, because essentially there is no difference between them.

# etcd certificates go here
$ mkdir -p /etc/kubernetes/ca/etcd
# prepare the etcd certificate configuration
$ cp ~/kubernetes-starter/target/ca/etcd/etcd-csr.json /etc/kubernetes/ca/etcd/
$ cd /etc/kubernetes/ca/etcd/
# issue the etcd certificate with the root certificate (ca.pem)
$ cfssl gencert \
        -ca=/etc/kubernetes/ca/ca.pem \
        -ca-key=/etc/kubernetes/ca/ca-key.pem \
        -config=/etc/kubernetes/ca/ca-config.json \
        -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
# as before, three files are produced; etcd.csr is an intermediate certificate request file, what we ultimately want are etcd-key.pem and etcd.pem
$ ls
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem
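It is worth a quick (optional) check that the etcd listen addresses from etcd-csr.json really ended up in the certificate's Subject Alternative Names, otherwise clients will reject the connection:

$ openssl x509 -in etcd.pem -noout -text | grep -A 1 "Subject Alternative Name"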
4.2 Retrofit ETCD Services

We recommend comparing the new etcd configuration with the original one, so you know exactly what changed.
You can compare them with this command:

$ cd ~/kubernetes-starter/
$ vimdiff kubernetes-simple/master-node/etcd.service kubernetes-with-ca/master-node/etcd.service

Update ETCD Service:

$ cp ~/kubernetes-starter/target/master-node/etcd.service /lib/systemd/system/
$ systemctl daemon-reload
$ service etcd start
# verify the etcd service (replace the endpoint with your own)
$ ETCDCTL_API=3 etcdctl \
    --endpoints=https://192.168.1.102:2379 \
    --cacert=/etc/kubernetes/ca/ca.pem \
    --cert=/etc/kubernetes/ca/etcd/etcd.pem \
    --key=/etc/kubernetes/ca/etcd/etcd-key.pem \
    endpoint health
5. Retrofit api-server

5.1 Prepare the certificate

# api-server certificates go here; api-server is the core, so we call the folder kubernetes (you could call it apiserver, but then every related path needs to change too)
$ mkdir -p /etc/kubernetes/ca/kubernetes
# prepare the apiserver certificate configuration
$ cp ~/kubernetes-starter/target/ca/kubernetes/kubernetes-csr.json /etc/kubernetes/ca/kubernetes/
$ cd /etc/kubernetes/ca/kubernetes/
# issue the kubernetes certificate with the root certificate (ca.pem)
$ cfssl gencert \
        -ca=/etc/kubernetes/ca/ca.pem \
        -ca-key=/etc/kubernetes/ca/ca-key.pem \
        -config=/etc/kubernetes/ca/ca-config.json \
        -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
# as before, three files are produced; kubernetes.csr is an intermediate certificate request file, what we ultimately want are kubernetes-key.pem and kubernetes.pem
$ ls
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
5.2 Retrofit Api-server Services

View diff

$ cd ~/kubernetes-starter
$ vimdiff kubernetes-simple/master-node/kube-apiserver.service kubernetes-with-ca/master-node/kube-apiserver.service

Generate the token authentication file:

# generate a random token
$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
8afdf3c4eb7c74018452423c29433609
# write token.csv in the fixed format; replace the token with your own
$ echo "8afdf3c4eb7c74018452423c29433609,kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /etc/kubernetes/ca/kubernetes/token.csv

Update Api-server Service

$ cp ~/kubernetes-starter/target/master-node/kube-apiserver.service /lib/systemd/system/
$ systemctl daemon-reload
$ service kube-apiserver start
# check the logs
$ journalctl -f -u kube-apiserver
6. Retrofit Controller-manager

The controller-manager usually runs on the same machine as the api-server, so it can talk to the api-server over the non-secure (insecure) port and does not need its own certificate and private key. A hedged sketch of the typical flags follows.
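For reference, the flags involved typically look something like the fragment below (a hedged sketch, not the exact contents of the service file; see the diff in the next step): the insecure local port for talking to the api-server, plus the CA certificate and key so the controller-manager can sign the kubelet certificates and service account tokens used later.

# fragment of a typical kube-controller-manager command line in this setup
--master=http://127.0.0.1:8080 \
--root-ca-file=/etc/kubernetes/ca/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/ca/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ca/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ca/ca-key.pem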

6.1 Retrofit Controller-manager Services

View diff

$ cd ~/kubernetes-starter/
$ vimdiff kubernetes-simple/master-node/kube-controller-manager.service kubernetes-with-ca/master-node/kube-controller-manager.service

Update Controller-manager Service

$ cp ~/kubernetes-starter/target/master-node/kube-controller-manager.service /lib/systemd/system/
$ systemctl daemon-reload
$ service kube-controller-manager start
# check the logs
$ journalctl -f -u kube-controller-manager
7. Retrofit Scheduler

The scheduler usually runs on the same machine as the apiserver, so it can talk to the apiserver over the non-secure port and does not need its own certificate and private key.

7.1 Retrofit Scheduler Services

View diff
The comparison shows that the two files are identical, so nothing needs to change.

$ cd ~/kubernetes-starter/
$ vimdiff kubernetes-simple/master-node/kube-scheduler.service kubernetes-with-ca/master-node/kube-scheduler.service

Start the service

$ service kube-scheduler start
# check the logs
$ journalctl -f -u kube-scheduler
8. Retrofit kubectl

8.1 Prepare the certificate

# kubectl certificates go here; since kubectl acts as the system administrator, we name them admin
$ mkdir -p /etc/kubernetes/ca/admin
# prepare the admin certificate configuration - kubectl only needs a client certificate, so the hosts field in the certificate request can be empty
$ cp ~/kubernetes-starter/target/ca/admin/admin-csr.json /etc/kubernetes/ca/admin/
$ cd /etc/kubernetes/ca/admin/
# issue the admin certificate with the root certificate (ca.pem)
$ cfssl gencert \
        -ca=/etc/kubernetes/ca/ca.pem \
        -ca-key=/etc/kubernetes/ca/ca-key.pem \
        -config=/etc/kubernetes/ca/ca-config.json \
        -profile=kubernetes admin-csr.json | cfssljson -bare admin
# what we ultimately want are admin-key.pem and admin.pem
$ ls
admin.csr  admin-csr.json  admin-key.pem  admin.pem
8.2 Configuring Kubectl
# point kubectl at the apiserver address and certificate (replace the ip with your own)
$ kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/ca/ca.pem \
        --embed-certs=true \
        --server=https://192.168.1.102:6443
# set the client authentication parameters: the admin certificate and key
$ kubectl config set-credentials admin \
        --client-certificate=/etc/kubernetes/ca/admin/admin.pem \
        --embed-certs=true \
        --client-key=/etc/kubernetes/ca/admin/admin-key.pem
# associate the user with the cluster
$ kubectl config set-context kubernetes \
        --cluster=kubernetes --user=admin
# set the current context
$ kubectl config use-context kubernetes
# the result is a configuration file; take a look at its content
$ cat ~/.kube/config

Verify from the master node:

# use the freshly configured kubectl to check the component status
$ kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
9. Retrofit calico-node

9.1 Prepare the certificate

Later on we will see the calico certificate used in four places:

    • The calico/node Docker container uses the certificate to access etcd at run time
    • The CNI plugin, configured in the CNI configuration file, uses the certificate to access etcd
    • calicoctl uses the certificate to access etcd when operating on the cluster network (see the sketch after the signing commands below)
    • calico/kube-controllers uses the certificate to access etcd when synchronizing cluster network policy
# calico certificates go here
$ mkdir -p /etc/kubernetes/ca/calico
# prepare the calico certificate configuration - calico only needs a client certificate, so the hosts field in the certificate request can be empty
$ cp ~/kubernetes-starter/target/ca/calico/calico-csr.json /etc/kubernetes/ca/calico/
$ cd /etc/kubernetes/ca/calico/
# issue the calico certificate with the root certificate (ca.pem)
$ cfssl gencert \
    -ca=/etc/kubernetes/ca/ca.pem \
    -ca-key=/etc/kubernetes/ca/ca-key.pem \
    -config=/etc/kubernetes/ca/ca-config.json \
    -profile=kubernetes calico-csr.json | cfssljson -bare calico
# what we ultimately want are calico-key.pem and calico.pem
$ ls
calico.csr  calico-csr.json  calico-key.pem  calico.pem
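For the calicoctl case mentioned above, a hedged sketch of how the certificate is typically supplied is through environment variables exported before running commands such as calicoctl node status (variable names per the calicoctl etcd datastore configuration; double-check against your calicoctl version):

$ export ETCD_ENDPOINTS=https://192.168.1.102:2379
$ export ETCD_CA_CERT_FILE=/etc/kubernetes/ca/ca.pem
$ export ETCD_CERT_FILE=/etc/kubernetes/ca/calico/calico.pem
$ export ETCD_KEY_FILE=/etc/kubernetes/ca/calico/calico-key.pem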
9.2 Retrofit Calico Services

View diff

$ cd ~/kubernetes-starter
$ vimdiff kubernetes-simple/all-node/kube-calico.service kubernetes-with-ca/all-node/kube-calico.service

The diff shows that calico needs several certificate-related files:
/etc/kubernetes/ca/ca.pem
/etc/kubernetes/ca/calico/calico.pem
/etc/kubernetes/ca/calico/calico-key.pem
Since the calico service needs to run on every node, these files must be copied to every server.
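A hedged example of copying them (the hostname worker-1 is illustrative; repeat for every node):

$ ssh root@worker-1 "mkdir -p /etc/kubernetes/ca/calico"
$ scp /etc/kubernetes/ca/ca.pem root@worker-1:/etc/kubernetes/ca/
$ scp /etc/kubernetes/ca/calico/calico.pem /etc/kubernetes/ca/calico/calico-key.pem root@worker-1:/etc/kubernetes/ca/calico/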

Update Calico Service

$ cp ~/kubernetes-starter/target/all-node/kube-calico.service /lib/systemd/system/
$ systemctl daemon-reload
$ service kube-calico start
# verify calico (if you can see the list of the other nodes, it works)
$ calicoctl node status
10. Retrofit Kubelet

Here we let the kubelet use bootstrap token authentication, so its authentication method is different from the previous components: its certificate is not generated by hand. Instead, the worker node requests it from the api-server via TLS bootstrapping, and the controller-manager on the master node issues it automatically.

10.1 Creating a role Binding (master node)

With bootstrap tokens, the client has to present its username and token when it calls the api-server, and that user must hold a specific role: system:node-bootstrapper. So we first have to bind the kubelet-bootstrap user from the token file to this particular role; only then does the kubelet have permission to initiate the certificate request.
Execute the following commands on the master node:

# you can list the clusterroles with the following command
$ kubectl -n kube-system get clusterrole
# recall the content of the token file
$ cat /etc/kubernetes/ca/kubernetes/token.csv
8afdf3c4eb7c74018452423c29433609,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# create the role binding (bind the user kubelet-bootstrap to the role system:node-bootstrapper)
$ kubectl create clusterrolebinding kubelet-bootstrap \
         --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
10.2 Creating bootstrap.kubeconfig (worker node)

This configuration is used to complete bootstrap token authentication; it stores important authentication information such as the user and the token. The file can be generated with the kubectl command (you can also write the configuration by hand):

# set the cluster parameters (replace the ip with your own)
$ kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/ca/ca.pem \
        --embed-certs=true \
        --server=https://192.168.1.102:6443 \
        --kubeconfig=bootstrap.kubeconfig
# set the client authentication parameters (replace the token with your own)
$ kubectl config set-credentials kubelet-bootstrap \
        --token=8afdf3c4eb7c74018452423c29433609 \
        --kubeconfig=bootstrap.kubeconfig
# set the context
$ kubectl config set-context default \
        --cluster=kubernetes \
        --user=kubelet-bootstrap \
        --kubeconfig=bootstrap.kubeconfig
# select the context
$ kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# move the freshly generated file to the proper location
$ mv bootstrap.kubeconfig /etc/kubernetes/
10.3 Preparing the CNI configuration

View diff

$ cd ~/kubernetes-starter
$ vimdiff kubernetes-simple/worker-node/10-calico.conf kubernetes-with-ca/worker-node/10-calico.conf

Copy Configuration

$ cp ~/kubernetes-starter/target/worker-node/10-calico.conf /etc/cni/net.d/
10.4 Retrofit Kubelet Services

View diff

$ cd ~/kubernetes-starter
$ vimdiff kubernetes-simple/worker-node/kubelet.service kubernetes-with-ca/worker-node/kubelet.service

Update Service

$ cp ~/kubernetes-starter/target/worker-node/kubelet.service /lib/systemd/system/
$ systemctl daemon-reload
$ service kubelet start
# after starting the kubelet, go to the master node and allow the worker to join (approve the worker's tls certificate request)
# --------* execute on the master node *---------
$ kubectl get csr | grep 'Pending' | awk '{print $1}' | xargs kubectl certificate approve
# -----------------------------
# check the logs
$ journalctl -f -u kubelet
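Once the certificate request has been approved, the worker should show up as a node (run on the master):

$ kubectl get csr
$ kubectl get nodes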
11. Retrofit kube-proxy

11.1 Prepare the certificate

# proxy certificates go here
$ mkdir -p /etc/kubernetes/ca/kube-proxy
# prepare the proxy certificate configuration - the proxy only needs a client certificate, so the hosts field in the certificate request can be empty.
# CN specifies that the User of this certificate is system:kube-proxy; the predefined ClusterRoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, granting permission to call the proxy-related APIs of kube-apiserver
$ cp ~/kubernetes-starter/target/ca/kube-proxy/kube-proxy-csr.json /etc/kubernetes/ca/kube-proxy/
$ cd /etc/kubernetes/ca/kube-proxy/
# issue the kube-proxy certificate with the root certificate (ca.pem)
$ cfssl gencert \
        -ca=/etc/kubernetes/ca/ca.pem \
        -ca-key=/etc/kubernetes/ca/ca-key.pem \
        -config=/etc/kubernetes/ca/ca-config.json \
        -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# what we ultimately want are kube-proxy-key.pem and kube-proxy.pem
$ ls
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
11.2 Generating Kube-proxy.kubeconfig Configuration
# set the cluster parameters (replace the ip with your own)
$ kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/ca/ca.pem \
        --embed-certs=true \
        --server=https://192.168.1.102:6443 \
        --kubeconfig=kube-proxy.kubeconfig
# set the client authentication parameters
$ kubectl config set-credentials kube-proxy \
        --client-certificate=/etc/kubernetes/ca/kube-proxy/kube-proxy.pem \
        --client-key=/etc/kubernetes/ca/kube-proxy/kube-proxy-key.pem \
        --embed-certs=true \
        --kubeconfig=kube-proxy.kubeconfig
# set the context parameters
$ kubectl config set-context default \
        --cluster=kubernetes \
        --user=kube-proxy \
        --kubeconfig=kube-proxy.kubeconfig
# select the context
$ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# move it to the proper location
$ mv kube-proxy.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig
11.3 Retrofit Kube-proxy Services

View diff

$ cd ~/kubernetes-starter
$ vimdiff kubernetes-simple/worker-node/kube-proxy.service kubernetes-with-ca/worker-node/kube-proxy.service

The diff shows that kube-proxy.service has not changed.

Start the service

# if the previous configuration is gone, copy it over again
$ cp ~/kubernetes-starter/target/worker-node/kube-proxy.service /lib/systemd/system/
$ systemctl daemon-reload
# install the dependency
$ apt install conntrack
# start the service
$ service kube-proxy start
# check the logs
$ journalctl -f -u kube-proxy
12. Retrofit Kube-dns

kube-dns is a bit special because it runs inside the Kubernetes cluster itself, in the form of a Kubernetes application. Its authentication and authorization therefore differ from the previous components: it needs service account authentication and RBAC authorization.
Service account authentication:
Each service account automatically gets its own secret, which contains a CA certificate, a token, and other data used to authenticate against the api-server.
RBAC authorization:
The permissions, roles, and role bindings are all created by Kubernetes automatically. We only need to create a service account named kube-dns, and that is already included in the official configuration. A short inspection sketch follows.
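Once kube-dns has been created in the next step, you can look at its service account and the automatically generated secret attached to it, for example:

$ kubectl -n kube-system get serviceaccount kube-dns -o yaml
$ kubectl -n kube-system get secret | grep kube-dns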

12.1 Preparing the configuration file

We add variables on top of the official manifest to generate a configuration that fits our cluster; you only need to copy it over.

$ cd ~/kubernetes-starter
$ vimdiff kubernetes-simple/services/kube-dns.yaml kubernetes-with-ca/services/kube-dns.yaml

You will see only one difference in the diff: the new configuration no longer sets the api-server address. If kube-dns does not have the api-server address configured, how does it know each service's cluster IP and the pods' endpoints? The reason is that when Kubernetes starts each pod, it injects the IPs, ports, and other details of all existing services into the pod as environment variables. A quick way to see these variables is sketched below.
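You can see the injected variables in any running pod, for example (the pod name is a placeholder; the values depend on your service CIDR):

$ kubectl exec <pod-name> -- env | grep KUBERNETES_SERVICE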

12.2 Creating Kube-dns
$ kubectl create -f ~/kubernetes-starter/target/services/kube-dns.yaml
# check whether it started successfully
$ kubectl -n kube-system get pods
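Once the kube-dns pod is Running, a hedged smoke test is to resolve a service name from a temporary pod (nslookup in some busybox image versions can be flaky):

$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default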
13. Give it another try

With this, the deployment of the secure version of our Kubernetes cluster is complete.
Next, let's use the new cluster to review the commands we learned earlier, and then get to know some new commands, new parameters, and new features. For details, please see the video tutorial.

Appendix: full series table of contents
0. Preface
1. Preparing the environment
2. Deploying the core modules
3. Authentication, authorization, and service discovery

