1.1 What is Kubernetes?
First, Kubernetes is a leading solution for a new, container-based distributed architecture. It is an open-source descendant of Google's Borg (Google's internal large-scale cluster management system).
Second, if a system's design follows Kubernetes's design philosophy, then the parts of a traditional architecture that have little to do with the business itself (such as load balancing, service self-healing frameworks, service monitoring, and fault handling) can disappear from our own code. Using Kubernetes can save at least 30% of development cost and let the team focus more on the business; and because Kubernetes provides powerful automation mechanisms, the later cost of operating and maintaining the system is also greatly reduced.
However, Kubernetes is an open platform. Unlike Java EE, it is not tied to any programming language and does not define any programming interface, so whether a service is written in Java, Go, C++, or Python, it can be mapped to a Kubernetes Service without difficulty and interacted with through the standard TCP protocol.
In addition, since the Kubernetes platform is not intrusive toward existing programming languages, programming frameworks, or middleware, existing systems can easily be upgraded and migrated onto it.
Finally, Kubernetes is a complete support platform for distributed systems. It provides complete cluster management capabilities, including multi-layered security and access mechanisms, multi-tenant application support, transparent service registration and discovery, built-in intelligent load balancers, robust fault detection and self-healing, rolling service upgrades and online capacity expansion, scalable resource scheduling, and multi-granularity resource quota management. At the same time, Kubernetes provides a comprehensive range of management tools covering development, deployment, testing, and operational monitoring.
Therefore, Kubernetes is a new distributed architecture solution based on container technology, and a one-stop, complete platform for developing and supporting distributed systems.
The Service is at the core of a distributed cluster in Kubernetes. A Service object has the following key features:
- A uniquely assigned name
- A virtual IP address (Cluster IP) and port number
- The ability to provide some kind of remote service
- A mapping to the set of container applications that provide this service capability
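As a minimal sketch of these features, a Service manifest might look like the following (the name `mysql-server`, the label `app: mysql`, and the port are illustrative, not from the book):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-server       # the Service's unique name
spec:
  selector:
    app: mysql             # maps the Service to the Pods carrying this label
  ports:
    - port: 3306           # the Service port; a virtual Cluster IP is assigned automatically
```

Clients address the Service by name or by its Cluster IP and port, never by the endpoints of individual backend processes.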
Service processes provide their capabilities externally over sockets (for example Redis, Memcached, MySQL, or a web server), and a service is typically backed by multiple related service processes, each with its own endpoint (IP + port). Kubernetes, however, lets us connect to a service through a single virtual Cluster IP + Service Port. Thanks to Kubernetes's built-in transparent load balancing and failover mechanisms, no matter how many service processes run in the backend, and even if a service process fails and is redeployed to another machine, our invocation of the service is unaffected.
Containers provide powerful isolation, so it makes sense to isolate the group of processes backing a service inside containers. For this, Kubernetes introduces the Pod object: each service process is wrapped into a corresponding Pod, becoming a container (Container) that runs inside that Pod. To establish the relationship between Service and Pod, Kubernetes first attaches a label to each Pod and then assigns a label selector (Label Selector) to the corresponding Service. This neatly solves the problem of associating Services with Pods.
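The label half of this association can be sketched as follows (the Pod name, label, and image are illustrative assumptions, not taken from the book):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod          # illustrative Pod name
  labels:
    app: mysql             # the label a Service's label selector matches on
spec:
  containers:
    - name: mysql
      image: mysql:5.7     # the wrapped service process runs as this container
```

Any Service whose `spec.selector` is `app: mysql` will then automatically pick up this Pod as a backend.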
Pods run in a Node environment: a Node can be a physical machine or a virtual machine, in a private or public cloud, and typically runs several hundred Pods. Each Pod runs a special container called Pause; the other containers in the Pod are business containers, which share the Pause container's network stack and mounted volumes, so communication and data exchange among them are more efficient. At design time we can take full advantage of this by placing a group of closely related service processes into the same Pod. Finally, note that not every Pod and the containers it runs are mapped to a Service; only a group of Pods that provides a service is mapped to one.
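A multi-container Pod that exploits this sharing might be sketched like this (names, images, and mount paths are illustrative assumptions): two business containers mount the same volume, and because they share the Pause container's network stack, they can also reach each other via localhost.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper    # illustrative name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}         # a volume both containers mount
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html   # serves files the helper writes
    - name: helper
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data                   # writes files the web container serves
```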
In terms of cluster management, Kubernetes divides the machines in a cluster into one Master node and a group of worker nodes (Nodes). On the Master run the cluster-management processes kube-apiserver, kube-controller-manager, and kube-scheduler, which implement resource management, Pod scheduling, elastic scaling, security control, system monitoring, and error correction for the whole cluster, all fully automatically. Nodes are the worker machines that run the real applications, and the smallest unit Kubernetes manages on a Node is the Pod. Each Node runs Kubernetes's kubelet and kube-proxy service processes, which are responsible for creating, starting, monitoring, restarting, and destroying Pods, and for implementing a software-mode load balancer.
Finally, consider the two classic problems of service scaling and service upgrading. In a Kubernetes cluster, you only need to create a Replication Controller (RC) for the Pods associated with the service that needs to scale, and both scaling and upgrading that service are taken care of. An RC definition file includes the following three key pieces of information:
- The definition of the target Pod
- The number of replicas the target Pod needs to run (replicas)
- The label of the target Pod to monitor (Label)
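Putting those three pieces together, a minimal RC definition might look like this (names, label, and image are illustrative assumptions, not from the book):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-rc
spec:
  replicas: 2              # number of replicas required for the target Pod
  selector:
    app: mysql             # label of the target Pod to monitor
  template:                # definition (template) of the target Pod
    metadata:
      labels:
        app: mysql         # must match the selector above
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
```

Scaling the service later amounts to editing `replicas` and letting Kubernetes reconcile the actual Pod count.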
After an RC is created, Kubernetes uses the label defined in the RC to filter out the corresponding Pod instances and monitors their status and count in real time. If the number of instances falls below the desired replica count (replicas), a new Pod is created from the Pod template defined in the RC and scheduled onto a suitable Node to start running, until the number of Pod instances reaches the target. This process is completely automated and requires no manual intervention. With RCs, scaling a Kubernetes service becomes a purely simple numbers game: just change the replica count in the RC. Subsequent service upgrades can likewise be done by modifying the RC.
1.2 Why use Kubernetes?
The fundamental reason is that IT has always been an industry driven by new technology.
Docker is now widely used, and the move from single machines to clusters has become inevitable; cloud computing is accelerating this process. Kubernetes, as the only widely recognized and well-regarded distributed solution for Docker, will foreseeably be chosen by a large number of new systems over the next few years, whether they run locally or are hosted on public clouds.
What are the benefits of using Kubernetes?
First, the team can be streamlined: a single systems engineer can be responsible for deployment and operations on Kubernetes.

Second, using Kubernetes means fully embracing the microservices architecture.

Third, our system can be relocated to the public cloud at any time, because in the Kubernetes architecture the details of the underlying network are completely abstracted away.

Finally, the Kubernetes system architecture has extremely strong horizontal scaling capability.
1.3 Start with a simple example
Study notes on "The Kubernetes Authoritative Guide, 2nd Edition" (Part 1): What is Kubernetes