Deploying an Nginx Portal Service for Kubernetes Services in a Cluster

Tags: nginx, reverse proxy, k8s

Lately I have been busy with Kubernetes-related work: setting up a Kubernetes cluster, installing and configuring the DNS add-on, integrating Ceph RBD persistent volumes, accessing a private registry image repository, and so on. All of this serves a small PaaS-like platform we are building: "small, but perfectly formed." The entire platform is hosted on the Kubernetes cluster, and what it currently lacks is an entry point for the services running inside the k8s cluster. The earlier article "Hot-updating Nginx configuration in a Kubernetes cluster" was actually a prelude to this entry design; this article describes the deployment design of the Nginx portal service and some of the pitfalls encountered along the way.

1. A Brief Introduction to the Nginx Entry Scheme

Functionally, Nginx as a cluster portal service generally plays the roles of reverse proxy and load balancer. Here it is used mainly as a reverse proxy, because the load-balancing work is "handed over" to k8s. Through the ClusterIP (a VIP) mechanism, k8s by default implements load balancing of service requests with iptables (via NAT-table rules such as `-m statistic --mode random --probability 0.33332999982`). Inspecting the rules of the iptables NAT chains, you can see something like the following:

# iptables -t nat -nL
... ...
Chain KUBE-SVC-UQG6736T32JE3S7H (2 references)
target     prot opt source               destination
KUBE-SEP-Z7UQLD332S673VAF  all  --  0.0.0.0/0            0.0.0.0/0            /* default/nginx-kit: */ statistic mode random probability 0.50000000000
KUBE-SEP-TWOIACCAJCPK3HWO  all  --  0.0.0.0/0            0.0.0.0/0            /* default/nginx-kit: */
... ...
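The probability values in these rules are not arbitrary. kube-proxy emits one `-m statistic --mode random` rule per endpoint, giving rule i (0-based, with N endpoints) a match probability of 1/(N - i); packets that do not match a rule fall through to the next one. The sketch below (illustrative Python, not part of the article's setup) shows why this scheme gives every endpoint the same overall 1/N chance of being selected:

```python
# Sketch (not from the article): how kube-proxy's chained
# "-m statistic --mode random" rules achieve uniform endpoint selection.
# Rule i (0-based) out of n is given match probability 1/(n - i);
# combined with the fall-through from earlier rules, each endpoint
# ends up with an overall selection probability of 1/n.

def endpoint_probabilities(n):
    """Overall selection probability of each of n endpoints."""
    probs = []
    remaining = 1.0  # probability that a packet reaches rule i at all
    for i in range(n):
        p_rule = 1.0 / (n - i)       # per-rule probability, e.g. 0.5 for 1st of 2
        probs.append(remaining * p_rule)
        remaining *= (1.0 - p_rule)  # packet falls through to the next rule
    return probs

print(endpoint_probabilities(2))  # -> [0.5, 0.5], matching the chain above
print(endpoint_probabilities(3))  # all three come out to roughly 1/3
```

For two endpoints the first rule carries probability 0.5 (as in the KUBE-SVC chain above); for three, the first rule carries the 0.33333... seen in kube-proxy's generated rules.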

Next, let me briefly describe our Nginx entry scheme. Up front: this is definitely not an ideal solution, and it has plenty of flaws. But given the current platform requirements and resource constraints, it works as a transitional scheme. It looks like this:

    • Nginx runs as a Kubernetes service inside the k8s cluster, restricted so that it can only be scheduled onto nodes with the label role=entry;
    • At the outermost layer, DNS round-robin balances user requests across the entry nodes;
    • A request to nodeip:nodeport is forwarded to the Nginx ClusterIP:Port and distributed by the iptables NAT load-balancing mechanism to one of the real endpoints of the Nginx service;
    • The Nginx instance on that real endpoint processes the user request and, depending on its configuration, proxy_passes the request to the ClusterIP:Port of a backend service; finally, k8s distributes the request evenly among the endpoints of that backend service.

2. Nginx Portal Service Deployment

Before deploying, let's label the nodes that will run the Nginx pods:

# kubectl label node/10.47.136.60 role=entry
node "10.47.136.60" labeled
# kubectl get nodes --show-labels
NAME            STATUS    AGE       LABELS
10.46.181.146   Ready     39d       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=10.46.181.146,role=entry,zone=ceph
10.47.136.60    Ready     39d       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=10.47.136.60,role=entry,zone=ceph

In the article on the Nginx configuration hot-reload scheme, we gave a yaml example of an nginx pod containing three containers: nginx, nginx-conf-generator, and an init container. The yaml for the nginx service is as follows:

// nginx-kit.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-kit
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx-kit
      annotations:
        pod.beta.kubernetes.io/init-containers: '[
            {
                "name": "nginx-kit-init-container",
                "image": "registry.cn-beijing.aliyuncs.com/xxxx/nginx-conf-generator",
                "imagePullPolicy": "IfNotPresent",
                "command": ["/root/conf-generator/nginx-conf-gen", "-mode", "gen-once"],
                "volumeMounts": [
                    {
                        "name": "conf-volume",
                        "mountPath": "/etc/nginx/conf.d"
                    }
                ]
            }
        ]'
    spec:
      containers:
      - name: nginx-conf-generator
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: conf-volume
        image: registry.cn-beijing.aliyuncs.com/xxxx/nginx-conf-generator:latest
        imagePullPolicy: IfNotPresent
      - name: xxxx-nginx
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: conf-volume
        image: registry.cn-hangzhou.aliyuncs.com/xxxx/nginx:latest
        imagePullPolicy: IfNotPresent
        command: ["/home/auto-reload-nginx.sh"]
        ports:
        - containerPort: 80
      volumes:
      - name: conf-volume
        emptyDir: {}
      nodeSelector:
        role: entry
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-kit
  labels:
    run: nginx-kit
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 28888
    protocol: TCP
  selector:
    run: nginx-kit

There are a few things worth noting about this yaml:

1. About the init container

From the yaml file above, you can see that the init container and the nginx-conf-generator container are created from the same image; only their working modes differ. In the deployment description file, the init container description must be placed under deployment.spec.template.metadata, not under the deployment's own metadata. If it is written in the latter place, the init container will never be created or started, and after startup the nginx container will complain that "default.conf" is not found.

Also, although both containers originate from the same image, the init container failed on startup with a message that an executable named "-mode" could not be found in $PATH; evidently the ENTRYPOINT did not take effect in the init container. An excerpt of nginx-conf-generator's Dockerfile is as follows:

// Dockerfile
FROM ubuntu:14.04
... ...
ENTRYPOINT ["/root/conf-generator/nginx-conf-gen"]

For this reason, we put the full path of the executable into the init container's "command" parameter:

 "command" : ["/root/conf-generator/nginx-conf-gen", "-mode", "gen-once"],

Finally, when creating the nginx-kit service from the yaml file above, you must use kubectl apply, not kubectl create; otherwise the init container will be ignored.

2. About the Nginx conf template

For various reasons, we currently map the different backend services in the cluster through location paths under a single server host. The Nginx default.conf template is as follows:

server {
    listen 80;
    #server_name opp.neusoft.com;

    {{range .}}
    location {{.Path}} {
        proxy_pass http://{{.ClusterIP}}:{{.Port}}/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    {{end}}

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
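The {{range .}} / {{.Path}} placeholders are Go text/template syntax consumed by nginx-conf-gen. As a rough illustration of what the generator produces, the sketch below is a simplified Python stand-in (not the actual Go tool; the service records are invented for the example) that renders one location block per backend service from (path, ClusterIP, port) records:

```python
# Simplified stand-in for nginx-conf-gen's template rendering: the real
# tool is written in Go and uses text/template; the service records
# below are made-up example values.

LOCATION_TMPL = """    location {path} {{
        proxy_pass http://{cluster_ip}:{port}/;
        proxy_redirect off;
        proxy_set_header Host $host;
    }}
"""

def render_conf(services):
    """Render a server block with one location per backend service."""
    locations = "".join(
        LOCATION_TMPL.format(path=s["path"],
                             cluster_ip=s["cluster_ip"],
                             port=s["port"])
        for s in services
    )
    return "server {\n    listen 80;\n" + locations + "}\n"

conf = render_conf([
    {"path": "/volume", "cluster_ip": "10.254.1.10", "port": 8080},
    {"path": "/user",   "cluster_ip": "10.254.1.11", "port": 8080},
])
print(conf)
```

Each time the set of services changes, regenerating the file this way and triggering the hot-reload script keeps the reverse-proxy mapping in sync with the cluster.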

Note the value written after the proxy_pass directive here. If you write it as:

proxy_pass http://{{.ClusterIP}}:{{.Port}};

then when a path such as localhost/volume/api/v1/pools is accessed, the URL path received by the backend service behind Nginx will be /volume/api/v1/pools. The location path /volume is not stripped, and the backend service will almost certainly fail to match the route. The fix is to give the proxy_pass directive the following value:

proxy_pass http://{{.ClusterIP}}:{{.Port}}/;

Yes: append a "/" at the end, so that the service behind the Nginx reverse proxy receives the access URL path /api/v1/pools.
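One way to see the difference: when proxy_pass carries a URI part (even a bare "/"), nginx replaces the portion of the request path that matched the location prefix with that URI; without a URI part, the path is forwarded unchanged. The small model below (illustrative Python; the location and paths are example values, and real nginx normalization has more edge cases than this) captures that rule:

```python
def upstream_path(location, proxy_uri, request_path):
    """Model nginx proxy_pass path rewriting for a prefix location.

    proxy_uri=None models `proxy_pass http://host:port;` (no URI part):
    the request path is forwarded as-is. A non-None proxy_uri models
    `proxy_pass http://host:port<uri>;`: the matched location prefix
    is replaced by that URI. Real nginx has more edge cases.
    """
    if proxy_uri is None:
        return request_path
    return proxy_uri + request_path[len(location):]

# Without a URI part: the backend sees the location prefix too.
print(upstream_path("/volume/", None, "/volume/api/v1/pools"))
# -> /volume/api/v1/pools

# With proxy_pass http://.../ : the matched /volume/ prefix is stripped.
print(upstream_path("/volume/", "/", "/volume/api/v1/pools"))
# -> /api/v1/pools
```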

© Bigwhite. All rights reserved.
