Kubernetes Cluster dashboard plug-in installation



The first time I installed a Kubernetes 1.3.7 cluster with the kube-up.sh script, the Kubernetes dashboard addon installed successfully as well, and that environment has been running very stably since. However, it is ultimately a test environment, and some of its configuration cannot meet production requirements; security is one example. Today I had time to make some adjustments to the dashboard configuration, and I am taking the opportunity to record the earlier installation and configuration of the dashboard plug-in for your reference.



First, the default installation steps of Dashboard



1. Installation based on the default configuration items



On Ubuntu, installing the dashboard with kube-up.sh works much like installing the DNS plug-in. The script files and configuration items involved include:


// kubernetes/cluster/config-default.sh
...
# Optional: Install Kubernetes UI
ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
...

// kubernetes/cluster/ubuntu/deployAddons.sh
...
function deploy_dashboard {
    if ${KUBECTL} get rc -l k8s-app=kubernetes-dashboard --namespace=kube-system | grep kubernetes-dashboard-v &> /dev/null; then
        echo "Kubernetes Dashboard replicationController already exists"
    else
        echo "Creating Kubernetes Dashboard replicationController"
        ${KUBECTL} create -f ${KUBE_ROOT}/cluster/addons/dashboard/dashboard-controller.yaml
    fi

    if ${KUBECTL} get service/kubernetes-dashboard --namespace=kube-system &> /dev/null; then
        echo "Kubernetes Dashboard service already exists"
    else
        echo "Creating Kubernetes Dashboard service"
        ${KUBECTL} create -f ${KUBE_ROOT}/cluster/addons/dashboard/dashboard-service.yaml
    fi

  echo
}

init

...

if [ "${ENABLE_CLUSTER_UI}" == true ]; then
  deploy_dashboard
fi


kube-up.sh will attempt to create the "kube-system" namespace and execute the following commands:


kubectl create -f kubernetes/cluster/addons/dashboard/dashboard-controller.yaml
kubectl create -f kubernetes/cluster/addons/dashboard/dashboard-service.yaml


This is not much different from manually creating an RC and a Service in the cluster.



Of course, the installation above happens as part of installing the k8s cluster itself. If you want to install the dashboard separately, the installation method on the Dashboard home page is clearly simpler:


kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
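
Either way, a quick sanity check that the dashboard actually came up (a sketch; the label selector is the one used by the addon script above):

# kubectl get pods --namespace=kube-system -l k8s-app=kubernetes-dashboard
# kubectl get service kubernetes-dashboard --namespace=kube-system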


2. Adjusting the dashboard container startup parameters



The contents of the dashboard-controller.yaml and dashboard-service.yaml files are as follows:


// dashboard-controller.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: kubernetes-dashboard-v1.1.1
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    version: v1.1.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: v1.1.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

// dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090


The contents of these two files are slightly dated; in particular, they still use a ReplicationController, which is now deprecated.



However, after the default installation, you may also encounter the following issues:



(1) Dashboard pod creation fails: this is caused by failing to pull the kubernetes-dashboard-amd64:v1.1.1 image, which is hosted outside the Great Firewall.



This can be solved by using an accelerator (registry mirror) or an alternative image, for example mritd/kubernetes-dashboard-amd64:v1.4.0; just modify the image line in dashboard-controller.yaml.
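
For example, a minimal sketch of swapping the image with sed before recreating the controller (paths assume you are in the directory containing the manifest):

# sed -i 's#gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1#mritd/kubernetes-dashboard-amd64:v1.4.0#' dashboard-controller.yaml
# kubectl create -f dashboard-controller.yaml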



(2) Dashboard cannot connect to the API server on the master node



If the only dashboard pod (since replicas=1) is scheduled onto a minion node, it is likely that it will not be able to connect to the API server on the master node (dashboard tries to auto-detect the API server inside the cluster, but the detection sometimes fails), causing the page not to display properly. You therefore need to specify the URL of the API server explicitly; for example, we add the startup argument --apiserver-host to the container section in dashboard-controller.yaml:


// dashboard-controller.yaml
... ...
spec:
      containers:
      - name: kubernetes-dashboard
        image: mritd/kubernetes-dashboard-amd64:v1.4.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://{api server host}:{api server insecure-port}
... ...
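
After editing the manifest, redeploy the RC so the new argument takes effect (a sketch; deleting and recreating is the simplest route for a ReplicationController):

# kubectl delete -f dashboard-controller.yaml
# kubectl create -f dashboard-controller.yaml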


(3) Adding a NodePort to provide an external access path



Dashboard runs inside the cluster as an ordinary cluster Service. Although we can access the Service from a node, or access the pod directly, reaching the dashboard from an external network requires additional settings, for example a NodePort.



In Dashboard-service.yaml, modify the configuration as follows:


spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 12345


This allows you to access the dashboard through a node's public IP plus the NodePort.
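
A quick way to confirm external reachability (a sketch; the address below is a placeholder for your node's public IP plus the nodePort configured above):

# curl -I http://{node public ip}:12345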



But at this point your dashboard is "naked", with no security to speak of:
- the Dashboard UI has no access-control mechanism; anyone who can reach it can take over the dashboard completely;
- behind the scenes, the dashboard accesses the apiserver through the insecure port, without any encryption.



Second, having dashboard access the apiserver via the kubeconfig file



Let's start by establishing a secure communication mechanism between dashboard and apiserver.



The startup parameters for kube-apiserver on the current master are as follows:


// /etc/default/kube-apiserver

KUBE_APISERVER_OPTS=" --insecure-bind-address=0.0.0.0 --insecure-port=8080 --etcd-servers=http://127.0.0.1:4001 --logtostderr=true --service-cluster-ip-range=192.168.3.0/24 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota --service-node-port-range=80-32767 --advertise-address={master node local ip} --basic-auth-file=/srv/kubernetes/basic_auth_file --client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key"


For dashboard to establish a secure communication channel with the apiserver, it must not use the insecure port. The apiserver's secure port is on by default and listens on 6443; the apiserver also has basic auth enabled (--basic-auth-file=/srv/kubernetes/basic_auth_file). As a result, dashboard cannot properly access the apiserver's secure port and pass basic auth just by setting the --apiserver-host parameter. We need to find another option.
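
To see concretely why the two ports behave differently, try curl (a sketch; -k skips verification of the self-signed certificate, and the credentials are the ones from the basic_auth_file):

# curl http://{master node local ip}:8080/version
# curl -k -u {apiserver_username}:{apiserver_password} https://{master node local ip}:6443/version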



Let's take a look at what other command-line options dashboard supports:


# docker run mritd/kubernetes-dashboard-amd64:v1.4.0 /dashboard -help
Usage of /dashboard:
      --alsologtostderr value          log to standard error as well as files
      --apiserver-host string          The address of the Kubernetes Apiserver to connect to in the format of protocol://address:port, e.g., http://localhost:8080. If not specified, the assumption is that the binary runs inside a Kubernetes cluster and local discovery is attempted.
      --heapster-host string           The address of the Heapster Apiserver to connect to in the format of protocol://address:port, e.g., http://localhost:8082. If not specified, the assumption is that the binary runs inside a Kubernetes cluster and service proxy will be used.
      --kubeconfig string              Path to kubeconfig file with authorization and master location information.
      --log-flush-frequency duration   Maximum number of seconds between log flushes (default 5s)
      --log_backtrace_at value         when logging hits line file:N, emit a stack trace (default :0)
      --log_dir value                  If non-empty, write log files in this directory
      --logtostderr value              log to standard error instead of files (default true)
      --port int                       The port to listen to for incoming HTTP requests (default 9090)
      --stderrthreshold value          logs at or above this threshold go to stderr (default 2)
  -v, --v value                        log level for V logs
      --vmodule value                  comma-separated list of pattern=N settings for file-filtered logging


Judging from the options above, only --kubeconfig can meet our need.



1. An introduction to the kubeconfig file



A default Kubernetes installation with the kube-up.sh script creates a ~/.kube/config file on each cluster node. This kubeconfig file provides in-cluster components (such as kubectl) and addons (such as dashboard) with a uniform security-verification mechanism for the whole cluster.



The following is the kubeconfig file on my minion node:


# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    server: https://{master node local ip}:6443
  name: ubuntu
contexts:
- context:
    cluster: ubuntu
    namespace: default
    user: admin
  name: ubuntu
current-context: ubuntu
kind: Config
preferences: {}
users:
- name: admin
  user:
    password: {apiserver_password}
    username: {apiserver_username}
    client-certificate: /srv/kubernetes/kubecfg.crt
    client-key: /srv/kubernetes/kubecfg.key


A kubeconfig file stores information about clusters, users, and contexts (plus a few other odds and ends), and selects the active context through current-context. With this configuration file, cluster tools such as kubectl can easily switch contexts between clusters. A context is a triple {cluster, user, namespace}, and current-context names the currently selected context. With the kubeconfig file above, when we run kubectl it reads the configuration file and looks up the user and cluster using the information in the context named by current-context; here, current-context is ubuntu.
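
You can inspect this selection at any time (just a sanity check):

# kubectl config current-context
# kubectl config view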



The information in the ubuntu context triple is:


{
    cluster = ubuntu
    namespace = default
    user = admin
}


kubectl then goes to clusters, finds the cluster named ubuntu, reads its server address https://{master node local ip}:6443 and its CA information, then finds the user named admin under users and uses that user's credentials:


    password: {apiserver_password}
    username: {apiserver_username}
    client-certificate: /srv/kubernetes/kubecfg.crt
    client-key: /srv/kubernetes/kubecfg.key


The kubeconfig file can be configured with the kubectl config command; see the kubectl documentation for details.
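
For example, a minimal sketch that rebuilds the file above with kubectl config (names and paths are taken from that file; the credentials remain placeholders):

# kubectl config set-cluster ubuntu --server=https://{master node local ip}:6443 --certificate-authority=/srv/kubernetes/ca.crt
# kubectl config set-credentials admin --client-certificate=/srv/kubernetes/kubecfg.crt --client-key=/srv/kubernetes/kubecfg.key --username={apiserver_username} --password={apiserver_password}
# kubectl config set-context ubuntu --cluster=ubuntu --user=admin --namespace=default
# kubectl config use-context ubuntu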



In addition, /srv/kubernetes/ca.crt, /srv/kubernetes/kubecfg.crt, and /srv/kubernetes/kubecfg.key are created on each node by the kube-up.sh installation of k8s 1.3.7, and can be used directly as parameters by kubectl, other components, or addons to access the apiserver.
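
If you want to double-check what those certificates contain, openssl can print their subjects and validity (optional; assumes openssl is installed on the node):

# openssl x509 -in /srv/kubernetes/ca.crt -noout -subject -dates
# openssl x509 -in /srv/kubernetes/kubecfg.crt -noout -subject -issuer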



2. Modifying the dashboard startup parameters to use the kubeconfig file



Now that we want dashboard to use the kubeconfig file, we need to modify the container-related configuration in dashboard-controller.yaml:


spec:
      containers:
      - name: kubernetes-dashboard
        image: mritd/kubernetes-dashboard-amd64:v1.4.0
        volumeMounts:
          - mountPath: /srv/kubernetes
            name: auth
          - mountPath: /root/.kube
            name: config
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --kubeconfig=/root/.kube/config
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: auth
        hostPath:
          path: /srv/kubernetes
      - name: config
        hostPath:
          path: /root/.kube


Because the pod needs the various certificates and the kubeconfig file, we mount the host paths /root/.kube and /srv/kubernetes into the pod.



Once the dashboard is redeployed, the channel between dashboard and kube-apiserver is secured (HTTPS + basic auth).
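
To confirm the new channel works, check the dashboard pod for apiserver connection errors (a sketch; the pod name is a placeholder, obtainable from the first command):

# kubectl get pods --namespace=kube-system -l k8s-app=kubernetes-dashboard
# kubectl logs {kubernetes-dashboard pod name} --namespace=kube-system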



Third, implementing Dashboard UI login



Although we have now implemented a secure channel and basic auth between dashboard and the apiserver, anyone accessing the dashboard through the NodePort can still take full control of it: dashboard itself still has no access-control mechanism. The reality is that dashboard does not currently support identity and access management, although support should be added in the near future.



So, with the current version, how do we implement a simple login flow? Besides the NodePort access method mentioned earlier, the official troubleshooting guide provides two other ways to access the dashboard; let's see whether either of them can meet our minimal requirement ^0^.



1. The kubectl proxy way



By default, kubectl proxy only allows access from the local network, but it provides several flag options we can set; let's give it a try.



We execute the following on the minion node:


# kubectl proxy --address='0.0.0.0' --port=30099
Starting to serve on [::]:30099


We now have an externally reachable service on port 30099 of the minion node. Open a browser and visit http://{minion node public ip}:30099/ui; you get the following result:


Unauthorized


Where does the refusal come from? Looking through kubectl proxy's flag options, we find one suspect:


--accept-hosts='^localhost$,^127\.0\.0\.1$,^\[::1\]$': Regular expression for hosts that the proxy should accept.


It is clear that the default value of --accept-hosts restricts the host addresses from which the proxy may be accessed. Adjust the configuration and run again:


# kubectl proxy --address='0.0.0.0' --port=30099 --accept-hosts='^*$'
Starting to serve on [::]:30099


Open the browser again and visit: http://{minion node public ip}:30099/ui



The browser will jump to the following address:


http://{minion node public ip}:30099/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default


Dashboard UI access succeeds! However, this approach still does not ask you for a user/password, so it does not meet our requirement.



2. The direct apiserver access way



The last way in the troubleshooting documentation is to access the apiserver directly:


Open a browser and visit: https://{master node public ip}:6443


At this point the browser will warn you about a certificate problem. Ignore it (the apiserver uses a self-signed private certificate, so the browser cannot verify the apiserver's server.crt) and continue. The browser then pops up a login dialog asking for a user name and password; enter the user name and password from the apiserver's basic-auth-file, and you can successfully log in to the apiserver and see the following on the browser page:


{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1alpha1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v2alpha1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/policy",
    "/apis/policy/v1alpha1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1alpha1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}


Next, we visit the following address:


https://{master node public ip}:6443/ui


You will see the page jump to:


https://101.201.78.51:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/


We have successfully entered the Dashboard UI! Clearly, this approach satisfies our minimal requirement of login-protected access to the Dashboard UI!



Fourth, summary



At this point the dashboard is ready for use. It still lacks metrics and dashboard-style graphing, which require an additional installation of Heapster, but the general functionality is sufficient for managing your k8s cluster.



© Bigwhite. All rights reserved.

