Kubernetes cluster DNS plug-in installation



In the previous article on Kubernetes cluster installation, we set up a minimal working k8s cluster. Unlike Docker 1.12, which builds cluster management into the engine itself, k8s is a loosely coupled set of components. Beyond the core components, additional functionality is provided in add-on form, such as kube-dns and the k8s dashboard. kube-dns is an important add-on: it handles registration and discovery of Services within the cluster, and as the k8s installation and management experience matures, the DNS add-on is bound to become part of the default installation. Building on that previous article, this one walks through the "routine" of installing the DNS add-on ^_^ and troubleshooting the problems that come up along the way.



First, installation prerequisites and principle



As mentioned above, the k8s installation procedure differs by provider; here we assume Provider=ubuntu and use the installation scripts maintained by the Zhejiang University team. If your provider is something else, parts of this article may not apply. Even so, understanding how the DNS add-on is installed under Provider=ubuntu should also help with the other installation methods.



Under cluster/ubuntu, the working directory for the k8s installation, alongside the scripts used to install the core components (download-release.sh, util.sh and so on), there is another script: deployAddons.sh. It is short and clearly structured, and its main steps are:


init
deploy_dns
deploy_dashboard


As you can see, this script deploys the two most common k8s add-ons: DNS and the dashboard. Digging further, deployAddons.sh is likewise driven by the configuration in ./cluster/ubuntu/config-default.sh; the relevant settings include:


# Optional: Install cluster DNS.
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
# DNS_SERVER_IP must be a IP in SERVICE_CLUSTER_IP_RANGE
DNS_SERVER_IP=${DNS_SERVER_IP:-"192.168.3.10"}
DNS_DOMAIN=${DNS_DOMAIN:-"cluster.local"}
DNS_REPLICAS=${DNS_REPLICAS:-1}
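These settings all use the standard `${VAR:-default}` shell idiom, so each one can be overridden simply by exporting a value before running the scripts. A minimal sketch of how the defaulting behaves (the variable names are from config-default.sh; the `k8s.example` override is just an illustration):

```shell
# ${VAR:-default}: an exported value wins; otherwise the default applies.
unset DNS_SERVER_IP                       # simulate "no override exported"
DNS_SERVER_IP=${DNS_SERVER_IP:-"192.168.3.10"}

DNS_DOMAIN="k8s.example"                  # simulate an exported override
DNS_DOMAIN=${DNS_DOMAIN:-"cluster.local"}

echo "$DNS_SERVER_IP $DNS_DOMAIN"
```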


Based on the configuration above, deployAddons.sh first generates two k8s manifests, skydns-rc.yaml and skydns-svc.yaml, and then creates the DNS service with kubectl create.
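Concretely, the generation step is just a sed substitution of the config values into a .sed template. A simplified sketch of that step, using the same placeholder convention as the script (not the script itself):

```shell
# Sketch of deployAddons.sh's manifest generation: replace the $DNS_REPLICAS
# and $DNS_DOMAIN placeholders in a template with values from config-default.sh.
render_skydns_rc() {
  template=$1
  sed -e "s/\\\$DNS_REPLICAS/${DNS_REPLICAS:-1}/g;s/\\\$DNS_DOMAIN/${DNS_DOMAIN:-cluster.local}/g" \
    "$template"
}
```

Called as `render_skydns_rc skydns-rc.yaml.sed > skydns-rc.yaml`.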



Second, install k8s DNS



1. Installation



To make deployAddons.sh install only the DNS component, first set an environment variable to skip the dashboard:


export KUBE_ENABLE_CLUSTER_UI=false


Then execute the installation script:


# KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
Creating kube-system namespace...
The namespace 'kube-system' is successfully created.

Deploying DNS on Kubernetes
replicationcontroller "kube-dns-v17.1" created
service "kube-dns" created
Kube-dns rc and service is successfully deployed.


Everything seems to go well. Let's verify through kubectl (note: the DNS service is created in a namespace named kube-system, so kubectl will not find it unless you specify that namespace):


# kubectl --namespace=kube-system get services
NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kube-dns   192.168.3.10   <none>        53/UDP,53/TCP   1m

root@iZ25cn4xxnvZ:~/k8stest/1.3.7/kubernetes/cluster/ubuntu# kubectl --namespace=kube-system get pods
NAME                                    READY     STATUS              RESTARTS   AGE
kube-dns-v17.1-n4tnj                    0/3       ErrImagePull        0          4m


Looking at the pod behind the DNS component, however, READY is 0/3 and STATUS is "ErrImagePull": the DNS service has not really come up.



2. Modify skydns-rc.yaml



Let's fix the problem above. Under cluster/ubuntu we find two more files, skydns-rc.yaml and skydns-svc.yaml; these are the two k8s manifests that deployAddons.sh generated from the configuration in config-default.sh. The problem lies in skydns-rc.yaml. In this file we see the image names of the three containers in the pod that the DNS service starts:


gcr.io/google_containers/kubedns-amd64:1.5
gcr.io/google_containers/kube-dnsmasq-amd64:1.3
gcr.io/google_containers/exechealthz-amd64:1.1


During this installation I had no accelerator (or VPN) configured, so pulling the images from gcr.io failed. Without an accelerator, alternatives are easy to find on Docker Hub (since connections to Docker Hub from within China are slow and often drop, it is recommended to pull these three alternative images manually first):


gcr.io/google_containers/kubedns-amd64:1.5
=> chasontang/kubedns-amd64:1.5

gcr.io/google_containers/kube-dnsmasq-amd64:1.3
=> chasontang/kube-dnsmasq-amd64:1.3

gcr.io/google_containers/exechealthz-amd64:1.1
=> chasontang/exechealthz-amd64:1.1
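The manual pulls can be scripted. An alternative to editing the image names in skydns-rc.yaml is to pull the mirrors and retag them under the original gcr.io names; a hedged sketch (dry-run by default, so it only prints the docker commands for review):

```shell
# Pull the Docker Hub mirrors and retag them to the gcr.io names that
# skydns-rc.yaml expects. DRY_RUN=1 (the default) only echoes the commands.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

for img in kubedns-amd64:1.5 kube-dnsmasq-amd64:1.3 exechealthz-amd64:1.1; do
  run docker pull "chasontang/$img"
  run docker tag "chasontang/$img" "gcr.io/google_containers/$img"
done
```

Set DRY_RUN=0 to actually execute; note the images must be present on every node that may schedule the kube-dns pod.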


We need to manually replace the three image names in skydns-rc.yaml. To prevent deployAddons.sh from regenerating skydns-rc.yaml and overwriting our edits, we also comment out the following two lines in deployAddons.sh:


#sed -e "s/\\\$DNS_REPLICAS/${DNS_REPLICAS}/g;s/\\\$DNS_DOMAIN/${DNS_DOMAIN}/g;" "${KUBE_ROOT}/cluster/saltbase/salt/kube-dns/skydns-rc.yaml.sed" > skydns-rc.yaml
#sed -e "s/\\\$DNS_SERVER_IP/${DNS_SERVER_IP}/g" "${KUBE_ROOT}/cluster/saltbase/salt/kube-dns/skydns-svc.yaml.sed" > skydns-svc.yaml
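The three image-name edits in skydns-rc.yaml can themselves be done with sed rather than by hand. A small sketch, using the chasontang mirror names above:

```shell
# Replace the three gcr.io image names in a manifest with their Docker Hub
# mirrors, in place. Pass the manifest path: swap_dns_images skydns-rc.yaml
swap_dns_images() {
  sed -i \
    -e 's#gcr.io/google_containers/kubedns-amd64:1.5#chasontang/kubedns-amd64:1.5#g' \
    -e 's#gcr.io/google_containers/kube-dnsmasq-amd64:1.3#chasontang/kube-dnsmasq-amd64:1.3#g' \
    -e 's#gcr.io/google_containers/exechealthz-amd64:1.1#chasontang/exechealthz-amd64:1.1#g' \
    "$1"
}
```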


Then remove the previously created DNS service:


# kubectl --namespace=kube-system delete rc/kube-dns-v17.1 svc/kube-dns
replicationcontroller "kube-dns-v17.1" deleted
service "kube-dns" deleted


Run deployAddons.sh again to redeploy the DNS component (steps not repeated here). After the installation we check whether everything is OK; this time we go straight to docker ps to see whether the three containers in the pod are all up:


# docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
e8dc52cba2c7        chasontang/exechealthz-amd64:1.1           "/exechealthz '-cmd=n"   7 minutes ago       Up 7 minutes                            k8s_healthz.1a0d495a_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_b42e68fc
f1b83b442b15        chasontang/kube-dnsmasq-amd64:1.3          "/usr/sbin/dnsmasq --"   7 minutes ago       Up 7 minutes                            k8s_dnsmasq.f16970b7_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_da111cd4
d9f09b440c6e        gcr.io/google_containers/pause-amd64:3.0   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD.a6b39ba7_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_b198b4a8


It seems the container from the kubedns image did not start successfully; docker ps -a confirms this:


# docker ps -a
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS                       PORTS               NAMES
24387772a2a9        chasontang/kubedns-amd64:1.5               "/kube-dns --domain=c"   3 minutes ago       Exited (255) 2 minutes ago                       k8s_kubedns.cdbc8a07_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_473144a6
3b8bb401ac6f        chasontang/kubedns-amd64:1.5               "/kube-dns --domain=c"   5 minutes ago       Exited (255) 4 minutes ago                       k8s_kubedns.cdbc8a07_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_cdd57b87


Check the logs of the exited kube-dns container:


# docker logs 24387772a2a9
I1021 05:18:00.982731       1 server.go:91] Using https://192.168.3.1:443 for kubernetes master
I1021 05:18:00.982898       1 server.go:92] Using kubernetes API
I1021 05:18:00.983810       1 server.go:132] Starting SkyDNS server. Listening on port:10053
I1021 05:18:00.984030       1 server.go:139] skydns: metrics enabled on :/metrics
I1021 05:18:00.984152       1 dns.go:166] Waiting for service: default/kubernetes
I1021 05:18:00.984672       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I1021 05:18:00.984697       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I1021 05:18:01.292557       1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server has asked for the client to provide credentials (get services kubernetes). Sleeping 1s before retrying.
E1021 05:18:01.293232       1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: the server has asked for the client to provide credentials (get services)
E1021 05:18:01.293361       1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints)
I1021 05:18:01.483325       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I1021 05:18:01.483390       1 dns.go:539] records:[], retval:[], path:[local cluster svc default kubernetes]
I1021 05:18:01.582598       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
... ...

I1021 05:19:07.458786       1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server has asked for the client to provide credentials (get services kubernetes). Sleeping 1s before retrying.
E1021 05:19:07.460465       1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints)
E1021 05:19:07.462793       1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: the server has asked for the client to provide credentials (get services)
F1021 05:19:07.867746       1 server.go:127] Received signal: terminated


Judging from the logs, kube-dns failed to connect to the apiserver, retried a number of times, and then exited. The logs also show the Kubernetes API server address as kube-dns sees it:


I1021 05:18:00.982731       1 server.go:91] Using https://192.168.3.1:443 for kubernetes master


In fact, our k8s apiserver listens on insecure port 8080 and secure port 6443 (6443 is the default in the source code, since we did not configure it explicitly). Accessing the apiserver via https on port 443 is therefore bound to fail. The problem is found; next is how to solve it.



3. Specify --kube-master-url



Let's look at the command-line arguments the kube-dns binary accepts:


# docker run -it chasontang/kubedns-amd64:1.5 kube-dns --help
Usage of /kube-dns:
      --alsologtostderr[=false]: log to standard error as well as files
      --dns-port=53: port on which to serve DNS requests.
      --domain="cluster.local.": domain under which to create names
      --federations=: a comma separated list of the federation names and their corresponding domain names to which this cluster belongs. Example: "myfederation1=example.com,myfederation2=example2.com,myfederation3=example.com"
      --healthz-port=8081: port on which to serve a kube-dns HTTP readiness probe.
      --kube-master-url="": URL to reach kubernetes master. Env variables in this flag will be expanded.
      --kubecfg-file="": Location of kubecfg file for access to kubernetes master service; --kube-master-url overrides the URL part of this; if neither this nor --kube-master-url are provided, defaults to service account tokens
      --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace
      --log-dir="": If non-empty, write log files in this directory
      --log-flush-frequency=5s: Maximum number of seconds between log flushes
      --logtostderr[=true]: log to standard error instead of files
      --stderrthreshold=2: logs at or above this threshold go to stderr
      --v=0: log level for V logs
      --version[=false]: Print version information and quit
      --vmodule=: comma-separated list of pattern=N settings for file-filtered logging


As you can see, the --kube-master-url option meets our need. We revise skydns-rc.yaml once more:


args:
  # command = "/kube-dns"
  - --domain=cluster.local.
  - --dns-port=10053
  - --kube-master-url=http://10.47.136.60:8080   # newly added line


Redeploy the DNS add-on (steps omitted). After deployment, view the kube-dns service information:


# kubectl --namespace=kube-system  describe service/kube-dns
Name:            kube-dns
Namespace:        kube-system
Labels:            k8s-app=kube-dns
            kubernetes.io/cluster-service=true
            kubernetes.io/name=KubeDNS
Selector:        k8s-app=kube-dns
Type:            ClusterIP
IP:            192.168.3.10
Port:            dns    53/UDP
Endpoints:        172.16.99.3:53
Port:            dns-tcp    53/TCP
Endpoints:        172.16.99.3:53
Session Affinity:    None
No events


We can also view the logs of the kube-dns container directly via docker logs:


docker logs 2f4905510cd2
I1023 11:44:12.997606       1 server.go:91] Using http://10.47.136.60:8080 for kubernetes master
I1023 11:44:13.090820       1 server.go:92] Using kubernetes API v1
I1023 11:44:13.091707       1 server.go:132] Starting SkyDNS server. Listening on port:10053
I1023 11:44:13.091828       1 server.go:139] skydns: metrics enabled on :/metrics
I1023 11:44:13.091952       1 dns.go:166] Waiting for service: default/kubernetes
I1023 11:44:13.094592       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I1023 11:44:13.094606       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I1023 11:44:13.104789       1 server.go:101] Setting up Healthz Handler(/readiness, /cache) on port :8081
I1023 11:44:13.105912       1 dns.go:660] DNS Record:&{192.168.3.182 0 10 10  false 30 0  }, hash:6a8187e0
I1023 11:44:13.106033       1 dns.go:660] DNS Record:&{kubernetes-dashboard.kube-system.svc.cluster.local. 0 10 10  false 30 0  }, hash:529066a8
I1023 11:44:13.106120       1 dns.go:660] DNS Record:&{192.168.3.10 0 10 10  false 30 0  }, hash:bdfe50f8
I1023 11:44:13.106193       1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 53 10 10  false 30 0  }, hash:fdbb4e78
I1023 11:44:13.106268       1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 53 10 10  false 30 0  }, hash:fdbb4e78
I1023 11:44:13.106306       1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 0 10 10  false 30 0  }, hash:d1247c4e
I1023 11:44:13.106329       1 dns.go:660] DNS Record:&{192.168.3.1 0 10 10  false 30 0  }, hash:2b11f462
I1023 11:44:13.106350       1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 443 10 10  false 30 0  }, hash:c3f6ae26
I1023 11:44:13.106377       1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 0 10 10  false 30 0  }, hash:b9b7d845
I1023 11:44:13.106398       1 dns.go:660] DNS Record:&{192.168.3.179 0 10 10  false 30 0  }, hash:d7e0b1e
I1023 11:44:13.106422       1 dns.go:660] DNS Record:&{my-nginx.default.svc.cluster.local. 0 10 10  false 30 0  }, hash:b0f41a92
I1023 11:44:16.083653       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I1023 11:44:16.083950       1 dns.go:539] records:[0xc8202c39d0], retval:[{192.168.3.1 0 10 10  false 30 0  /skydns/local/cluster/svc/default/kubernetes/3262313166343632}], path:[local cluster svc default kubernetes]
I1023 11:44:16.084474       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I1023 11:44:16.084517       1 dns.go:539] records:[0xc8202c39d0], retval:[{192.168.3.1 0 10 10  false 30 0  /skydns/local/cluster/svc/default/kubernetes/3262313166343632}], path:[local cluster svc default kubernetes]
I1023 11:44:16.085024       1 dns.go:583] Received ReverseRecord Request:1.3.168.192.in-addr.arpa.


The logs show that the apiserver URL is now correct and the kube-dns component no longer reports errors. The installation appears to have succeeded, but it still needs to be tested and verified.



Third, test and verify k8s DNS



As expected, the k8s DNS component can resolve names for Services within the k8s cluster. The cluster's default namespace currently has the following services deployed:


# kubectl get services
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   192.168.3.1     <none>        443/TCP   10d
my-nginx     192.168.3.179   <none>        80/TCP    6d


We try ping and curl against the my-nginx service from a client container inside the k8s cluster.



ping my-nginx resolves successfully (it finds my-nginx's cluster IP, 192.168.3.179):


root@my-nginx-2395715568-gpljv:/# ping my-nginx
PING my-nginx.default.svc.cluster.local (192.168.3.179): 56 data bytes


curl my-nginx also succeeds:


# curl -v my-nginx
* Rebuilt URL to: my-nginx/
* Hostname was NOT found in DNS cache
*   Trying 192.168.3.179...
* Connected to my-nginx (192.168.3.179) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: my-nginx
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx/1.10.1 is not blacklisted
< Server: nginx/1.10.1
< Date: Sun, 23 Oct 2016 12:14:01 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Tue, 31 May 2016 14:17:02 GMT
< Connection: keep-alive
< ETag: "574d9cde-264"
< Accept-Ranges: bytes
<


Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

* Connection #0 to host my-nginx left intact


The client container's DNS configuration, which should be the default for this k8s installation (derived from config-default.sh):


# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 192.168.3.10
options timeout:1 attempts:1 rotate
options ndots:5
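This resolv.conf is why the short name my-nginx resolves at all: with ndots:5, a name containing fewer than five dots is first tried against each search domain in order, which is how my-nginx becomes my-nginx.default.svc.cluster.local. A sketch of that expansion logic (an illustration of resolver behavior, not code from k8s):

```shell
# Illustrate how the search list expands a short name into candidate FQDNs.
# Candidates are tried top to bottom; the first one that resolves wins.
expand_query() {
  name=$1; shift
  for domain in "$@"; do
    echo "$name.$domain"
  done
  echo "$name"    # finally, the name as-is
}

expand_query my-nginx default.svc.cluster.local svc.cluster.local cluster.local
```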


With this, the k8s DNS component is installed and working.



© Bigwhite. All rights reserved.

