Kubernetes (k8s) Basic Concepts


I. Basic Concepts

1. Node

A node is a worker machine in the cluster that runs the real applications; the smallest unit of work that Kubernetes manages on a node is the pod. Each node runs Kubernetes's kubelet and kube-proxy service processes, which are responsible for pod creation, startup, monitoring, restart, and destruction, and for software-mode load balancing. The information a node carries includes:
    • Node address: the IP address of the host, or the node ID.
    • Node phase: one of three states: Pending, Running, or Terminated.
    • Node condition: ...
    • Node system capacity: describes the system resources the node can provide, including CPU, memory, the maximum number of schedulable pods, and so on.
    • Other information: kernel version, Kubernetes version, and so on.
To view node information:

    kubectl describe node

2. Pod

The pod is the most basic unit of operation in Kubernetes. A pod contains one or more tightly related containers and can be regarded as the "logical host" of an application in a containerized environment; the container applications in a pod are typically tightly coupled, and they are created, started, and destroyed together on a node. Each pod runs a special container called pause; the other containers are business containers that share the pause container's network stack and mounted volumes, which makes communication and data exchange between them more efficient. At design time we can take advantage of this by putting a set of closely related service processes into the same pod. Containers in the same pod can communicate with each other simply via localhost. The application containers in a pod share the same set of resources (a minimal two-container pod illustrating this is sketched after the list below):
    • PID namespace: different applications in the pod can see each other's process IDs;
    • Network namespace: multiple containers in the pod share the same IP address and port range;
    • IPC namespace: multiple containers in the pod can communicate using System V IPC or POSIX message queues;
    • UTS namespace: multiple containers in the pod share the same hostname;
    • Volumes (shared storage volumes): each container in the pod can access the volumes defined at the pod level;
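As a concrete illustration of the shared network namespace, here is a minimal sketch of a two-container pod. The pod name, the helper image, and the sidecar command are hypothetical; web_server is the httpd image built in the example section at the end of this article:

    cat shared-net-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-net-demo          # hypothetical pod name
    spec:
      containers:
      - name: web
        image: web_server            # httpd image from the example section
        ports:
        - containerPort: 80
      - name: checker
        image: centos                # hypothetical helper image
        # Both containers share the pause container's network stack, so the
        # web server is reachable on localhost from this sidecar.
        command: ["/bin/sh", "-c", "while true; do curl -s http://localhost/ > /dev/null; sleep 10; done"]

    kubectl create -f shared-net-demo.yaml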
The life cycle of a pod is managed by a replication controller: the pod is defined by a template, then assigned to a node to run, and the pod ends after the containers it contains finish running. Kubernetes designed a unique network configuration for pods, including assigning an IP address to each pod and using the pod name as the hostname for inter-container communication.

3. Service

In the world of Kubernetes, each pod is assigned its own IP address, but that IP address disappears when the pod is destroyed, which raises the question: how do we access the group of pods that together provide a service? The answer is the Service. A service can be seen as the external access interface for a group of pods that provide the same service, and which pods the service routes to is defined by a label selector. A service has the following properties:
    • It has a specified name (such as my-mysql-server);
    • It has a virtual IP (cluster IP, service IP, or VIP) and port number, which do not change until the service is destroyed and are only reachable from inside the cluster;
    • It provides some kind of remote service capability;
    • It is mapped to the set of container applications that provide this service capability (a minimal service definition is sketched after this list).
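As a sketch of how a label selector ties a service to its pods, here is a minimal ClusterIP service; the service name is the example name from the list above, and the label and ports are hypothetical:

    cat service-demo.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: my-mysql-server        # the example name used above
    spec:
      selector:
        app: mysql                 # hypothetical label; pods labeled app=mysql back this service
      ports:
      - port: 3306                 # the service's virtual (cluster IP) port
        targetPort: 3306           # the container port traffic is forwarded to

    kubectl create -f service-demo.yaml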
If a service is to be reachable from outside the cluster, specify a public IP and a NodePort, or use an external load balancer.

II. Overall Architecture

Master and Node

Kubernetes divides the machines in the cluster into a master node and a group of worker nodes (nodes). The master node runs a group of cluster-management processes: etcd, API Server, Controller Manager, and Scheduler. The last three components constitute the Kubernetes master control center, which implements resource management, pod scheduling, elastic scaling, security control, system monitoring, and error correction for the whole cluster, all of it automatically. Each node runs three components, kubelet, kube-proxy, and the Docker daemon, which are responsible for managing the lifecycle of the pods on that node and for implementing the service proxy function.

Process

A request to create an RC (Replication Controller) is submitted through kubectl, and the API server writes it to etcd. The controller manager, which watches for resource changes through the API server, hears the RC event; after analysis it finds that the cluster has no corresponding pod instance yet, so it generates a Pod object from the pod template in the RC and writes it to etcd through the API server. Next, the scheduler discovers this event, immediately runs its scheduling process, selects a suitable node for the new pod, and writes the result to etcd through the API server. The kubelet process running on the target node then sees the new pod through the API server and, according to its definition, starts the pod and takes responsibility for it until the end of the pod's life. Later, we submit a request to create a service mapped to the pod via kubectl; the controller manager queries the associated pod instances through the label selector, generates the service's endpoints information, and writes it to etcd through the API server. Finally, the kube-proxy processes running on all nodes query and watch the service object and its corresponding endpoints through the API server, and create a software-mode load balancer that forwards traffic for the service to the backend pods.
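This flow can be exercised end to end with kubectl. The following is a sketch only: the RC name and labels are hypothetical, and web_server is the image built in the example section at the end of this article:

    cat web-rc.yaml
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-rc                 # hypothetical RC name
    spec:
      replicas: 2                  # desired number of pod instances
      selector:
        app: web
      template:                    # pod template the controller manager instantiates
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: httpd
            image: web_server      # image from the example section
            ports:
            - containerPort: 80

    kubectl create -f web-rc.yaml
    # Watch the scheduler assign each new pod to a node.
    kubectl get pods -o wide

The components involved in this flow are described next.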
    • Etcd
Used to persistently store all resource objects in the cluster, such as Node, Service, Pod, RC, and Namespace. The API server provides encapsulated APIs for operating on etcd; these APIs are essentially the interfaces through which the cluster's resource objects are changed, deleted, and watched. etcd itself is a key/value store for configuration sharing and service discovery, inspired by ZooKeeper and Doozer.

etcd storage is divided into two parts: in-memory storage and persistent (on-disk) storage. In memory, in addition to sequentially recording all user changes to the node data, etcd indexes the user data and builds heaps to make queries convenient. Persistence uses a WAL (write-ahead log) for record storage: in a WAL system, all data is logged before it is committed. In k8s, all data and operation records are stored in etcd, so etcd is critical to a k8s cluster; if it fails, the whole cluster can be paralyzed or data can be lost. The persistent storage directory is divided into two parts, snap and wal. A snapshot is equivalent to data compression: by default, every 10,000 WAL operation records are merged into a snapshot, which saves storage while ensuring the data is not lost.

WAL: stores the change records of all transactions.
Snapshot: stores the data of all directories at one point in time.
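To see what Kubernetes actually keeps in etcd, the store can be inspected directly with etcdctl. This is a sketch assuming the etcd v2 API used by the installation in this article, the endpoint configured there, and the conventional /registry prefix under which Kubernetes stores its objects:

    # List the top-level keys Kubernetes keeps in etcd (v2 API).
    etcdctl --endpoints http://192.168.1.50:2379 ls /registry

    # Drill into one resource type and dump a single object.
    etcdctl --endpoints http://192.168.1.50:2379 ls /registry/pods
    etcdctl --endpoints http://192.168.1.50:2379 get /registry/pods/default/httpd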
    • API Server
Provides the single entry point for operating on resource objects; all other components must operate on resource data through the APIs it provides. By combining a "full query" of the relevant resource data with a "change watch", those components can carry out their business functions in near real time.
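Because every component goes through the API server, its REST API can also be queried directly. This sketch assumes the insecure port 8080 configured in the installation section below:

    # Full query: list all pods in the cluster as JSON.
    curl http://192.168.1.50:8080/api/v1/pods

    # Change watch: keep the connection open and stream pod change events.
    curl "http://192.168.1.50:8080/api/v1/pods?watch=true"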
    • Controller Manager
The management control center of the cluster. Its main purpose is to automate fault detection and recovery for the Kubernetes cluster: for example, it replicates or removes pods according to an RC definition so that the number of pod instances always conforms to the replica count declared in the RC, and it creates and updates a service's endpoints objects according to the management relationship between the service and its pods. Other tasks, such as node discovery, management, and status monitoring, reclaiming disk space used by dead containers, and cleaning up locally cached image files, are also done by the controller manager.
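This reconciliation behavior is easy to observe by changing a replica count and letting the controller manager converge on it; the sketch below reuses the hypothetical web-rc RC from the earlier example:

    # Ask for 5 replicas; the controller manager creates the missing pods.
    kubectl scale rc web-rc --replicas=5

    # Delete one pod; the controller manager notices and restores the count.
    kubectl delete $(kubectl get pods -l app=web -o name | head -n 1)
    kubectl get pods -l app=web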
    • Scheduler
The scheduler of the cluster, responsible for scheduling pods and assigning them to cluster nodes.
    • Kubelet
Responsible for the creation, modification, monitoring, deletion, and other lifecycle management of the pods on its node; the kubelet also reports the node's status information to the API server.
    • Proxy
A load balancer that implements the service proxy and software-mode load balancing. Clients access the Kubernetes system through the kubectl command-line tool or kubectl proxy. Clients inside the Kubernetes cluster can manage the cluster directly with kubectl commands; kubectl proxy is a reverse proxy for the API server, through which clients outside the cluster can reach the API server. The API server has a complete set of security mechanisms, including authentication, authorization, and admission-control modules.

III. Installing Kubernetes

master: 192.168.1.50
node:   192.168.1.117

On the master:

1. Install kubernetes, etcd, and flannel:

    yum install kubernetes etcd flannel -y

2. Modify the configuration file /etc/kubernetes/controller-manager to specify the key file, which can be generated with:

    openssl genrsa -out /etc/kubernetes/service.key 2048

    KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/service.key"

3. Modify the configuration file /etc/kubernetes/apiserver:

    # The address on the local server to listen to. Set to listen on all interfaces.
    KUBE_API_ADDRESS="--address=0.0.0.0"
    # Comma-separated list of nodes in the etcd cluster. Specify the address of the etcd node.
    KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.50:2379"
    # Address range to use for services. This sets the IP range the services will run in.
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.100.0/24"
    # Add your own! Specify the key file.
    KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/service.key"

4. Modify the configuration file /etc/kubernetes/config:

    # How the controller-manager, scheduler, and proxy find the apiserver.
    KUBE_MASTER="--master=http://192.168.1.50:8080"

5. Modify the configuration file /etc/etcd/etcd.conf:

    ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
    ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.50:2379"

6. Start and enable the etcd and Kubernetes services:

    systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
    systemctl enable etcd kube-apiserver kube-controller-manager kube-scheduler

7. Modify the configuration file /etc/sysconfig/flanneld:

    # etcd URL location. Point at the server where etcd runs.
    FLANNEL_ETCD="http://192.168.1.50:2379"

8. Set the flannel network config in etcd. Create a file with the following content; the network it specifies is the segment the containers will run in:

    cat /etc/sysconfig/flannel-config.json
    {
      "Network": "172.16.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "VNI": 1
      }
    }

    # Write the config into etcd from the file contents:
    etcdctl set /atomic.io/network/config < /etc/sysconfig/flannel-config.json

9. Start the flannel service:

    systemctl start flanneld
    systemctl enable flanneld

The master is now installed and configured; view the master information:

    kubectl cluster-info
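Before moving on to the node, it is worth confirming that the control-plane components and etcd are healthy; a quick check, run on the master, is sketched below:

    # All listed components (scheduler, controller-manager, etcd) should report Healthy.
    # If the apiserver itself is down, this command cannot connect at all.
    kubectl get componentstatuses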
On node1:

1. Install kubernetes and flannel:

    yum -y install kubernetes flannel

2. Modify /etc/kubernetes/config:

    KUBE_MASTER="--master=http://192.168.1.50:8080"

3. Modify /etc/kubernetes/kubelet:

    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces).
    KUBELET_ADDRESS="--address=0.0.0.0"
    # Leave this blank to use the actual hostname.
    KUBELET_HOSTNAME="--hostname-override=vm7.cluster.com"
    # Location of the api-server.
    KUBELET_API_SERVER="--api-servers=http://192.168.1.50:8080"

4. Modify /etc/sysconfig/flanneld:

    # etcd URL location. Point at the server where etcd runs.
    FLANNEL_ETCD="http://192.168.1.50:2379"

5. Start the flannel and kube-related services. Note that the kube services started on a node are not the same as those on the master:

    systemctl start flanneld kube-proxy kubelet
    systemctl enable flanneld kube-proxy kubelet
    # Restart the Docker service:
    systemctl restart docker

The node is now configured, so you can run the following command on the master to view node information:

    kubectl get nodes

IV. Example: Deploying a Container with Kubernetes

1) Create a web_server image, on any node:

    # vim Dockerfile
    # Create new
    FROM centos
    MAINTAINER user1 <[email protected]>
    # Update yum repository
    RUN curl -s -L http://mirrors.aliyun.com/repo/Centos-7.repo -o /etc/yum.repos.d/centos7-base.repo && \
        curl -s -L http://mirrors.aliyun.com/repo/epel-7.repo -o /etc/yum.repos.d/epel7.repo
    RUN yum clean all && \
        yum makecache fast && \
        yum -y update && \
        yum -y install httpd
    RUN yum clean all
    EXPOSE 80
    CMD ["-D", "FOREGROUND"]
    ENTRYPOINT ["/usr/sbin/httpd"]

    # docker build -t web_server .
    # docker images
    REPOSITORY          TAG      IMAGE ID       CREATED          SIZE
    web_server          latest   875ba006f185   9 seconds ago    337 MB
    docker.io/centos    latest   e934aafc2206   … hours ago      199 MB

Export this image and import it on the other node; of course, you can also run the same docker build there directly.

2) Create a pod, on the master:

    cat pod-webserver.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: httpd
    spec:
      containers:
      - name: httpd
        image: web_server
        ports:
        - containerPort: 80
        volumeMounts:                  # define a mount for /var/www/html
        - name: httpd-storage
          mountPath: /var/www/html     # mount point inside the container
      volumes:
      - name: httpd-storage
        hostPath:                      # mount the host's /var/docker/disk01 onto /var/www/html
          path: /var/docker/disk01

Create it:

    # kubectl create -f pod-webserver.yaml

Use this command to view the state of what was created: Pending means it is being prepared, Running means it has been created successfully:

    # kubectl get pods

See which node the container is on:

    # kubectl get pods -o wide

View the full state of the container:

    # kubectl get pods httpd -o yaml

Test, on the node that runs the pod:

    # echo ${HOSTNAME} > /var/docker/disk01/index.html
    # curl http://10.1.15.2

Delete the pod:

    # kubectl delete pod httpd
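To tie the example back to the Service concept from Part I, the httpd pod could also be exposed inside the cluster. This is a sketch only: the pod above carries no labels, so it is first given a hypothetical label for a selector to match, and the service name is hypothetical too:

    # Give the pod a label so a service selector can find it.
    kubectl label pod httpd app=httpd

    cat httpd-svc.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: httpd-svc              # hypothetical service name
    spec:
      selector:
        app: httpd                 # matches the label added above
      ports:
      - port: 80

    kubectl create -f httpd-svc.yaml
    # The cluster IP is allocated from --service-cluster-ip-range (192.168.100.0/24)
    # and stays fixed even if the pod is destroyed and recreated.
    kubectl get svc httpd-svc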
