Network error problems in Kubernetes
System environment

# System version
cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

# kubelet version
kubelet --version
Kubernetes v1.10.0

# SELinux status
getenforce
Disabled

# System firewall status
systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
Pod anomaly problem
# kubectl get rc
# kubectl get svc
# kubectl get pod
# kubectl describe pod redis-slave-gsk1p
The main reason the pod fails to be created is that the image cannot be pulled from the local registry; this error is reported even though the image already exists locally, because Kubernetes's imagePullPolicy, which controls how images are obtained, defaults to Always.
The nginx image in our local registry is protected with basic authentication, so the following error is reported: Error syncing pod, skipping: failed to ...
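Both problems can typically be addressed in the pod spec itself: relax the pull policy so a locally present image is reused, and reference a registry credential for the authenticated registry. A minimal sketch, assuming a pod named redis-slave, a pre-created secret named regcred, and a hypothetical local-registry image path (all three are illustrative, not values from the original article):

apiVersion: v1
kind: Pod
metadata:
  name: redis-slave                            # hypothetical name
spec:
  # The secret must exist beforehand; it can be created with
  # kubectl create secret docker-registry regcred --docker-server=... --docker-username=... --docker-password=...
  imagePullSecrets:
  - name: regcred                              # hypothetical secret holding the registry's basic-auth credentials
  containers:
  - name: redis-slave
    image: registry.local/redis-slave:latest   # hypothetical local-registry image
    imagePullPolicy: IfNotPresent              # reuse the image if it is already present on the node

With IfNotPresent, the kubelet only contacts the registry when the image is missing from the node, which avoids the Always-pull failure described above.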
# kubectl create -f ...-controller.yaml
# kubectl create -f redis-slave-controller.yaml
# kubectl create -f frontend-controller.yaml
# kubectl create -f redis-master-service.yaml
# kubectl create -f redis-slave-service.yaml
# kubectl create -f frontend-service.yaml
# kubectl get rc
# kubectl get svc
# kubectl get pod
# kubectl describe pod redis-slave-gsk1p
... created
replicationcontroller/nginx-test created
service/nginx-test created
# kubectl get pod -n test
NAME               READY   STATUS    RESTARTS   AGE
nginx-test-ssbnr   1/1     Running   0          4m
nginx-test-zl7vk   1/1     Running   0          4m
# kubectl get service -n test
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx-test   NodePort   10.68.145.112   ...
As you can see, the nginx-test service is exposed through a NodePort.
Kubernetes mainly exposes services through the NodePort mechanism: a port is bound on the minion (node) hosts, and requests arriving on that port are forwarded and load-balanced to the service's pods. The drawback of this approach is that there can be many services; if each one is bound to its own node host port, the hosts have to open a large number of ports just to make service calls, which makes management confusing.
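For reference, a minimal sketch of a NodePort service (the name, selector label, and nodePort value are hypothetical) illustrating the port binding described above:

apiVersion: v1
kind: Service
metadata:
  name: nginx-test              # hypothetical service name
spec:
  type: NodePort
  selector:
    app: nginx-test             # hypothetical pod label
  ports:
  - port: 80                    # cluster-internal service port
    targetPort: 80              # container port
    nodePort: 30080             # port opened on every node; must fall in the node-port range (default 30000-32767)

Every node then listens on port 30080 and proxies traffic to the service's pods, which is exactly why exposing many services this way consumes one host port per service.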
1. Preface
Kubernetes designed a special network model that deviates from the native Docker network model. In this design, Kubernetes defines an abstract concept, the pod: each pod is a collection of containers that share one IP address and the same network namespace. Pods can communicate not only with physical machines but also with other pods across the network. Kubernetes's IP-per-pod design has many benefits, such as ...
In addition, ops teams will want this OpenStack lifecycle management system to span bare metal, IaaS, and even PaaS.
What Atomic, Docker, and Kubernetes bring
If you have an OpenStack service lifecycle management scenario, they bring the following benefits:
Isolated, lightweight, portable, detachable
The service relationship of the
Kubernetes is Google's Docker-based distributed cluster system, with the following main components. etcd: highly available storage for shared configuration and service discovery, used as a companion to flannel on the minion machines so that the Docker daemon running on each minion gets a different IP segment; the ultimate goal is that Docker containers running on different minions have IP addresses that are not the same as one another's.
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the Kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setting up auth for Grafana, and exposing the Grafana ...
1.1. What is Kubernetes?
A new approach to distributed architecture based on container technology, and a complete distributed system support platform. Kubernetes is an open source project launched by the Google team that aims to manage containers across multiple hosts, providing basic deployment, maintenance, and scaling; it is implemented mainly in the Go language.
1.2. Basic Concepts
Node: In Kubernetes
This article is reproduced from: http://blog.csdn.net/xingwangc2014/article/details/51204224
Kubernetes uses kube-apiserver as the management entry point of the entire cluster. The apiserver is the primary management component: users configure and organize the cluster through it, and interactions between the cluster nodes and the etcd store also go through it. The apiserver exposes a set of RESTful interfaces that allow users to interact with it directly.
In the [previous post](https://studygolang.com/articles/12799) we looked at creating a Container Engine cluster with [Terraform](https://terraform.io/). In this blog post, we look at deploying containers into the cluster using Container Engine and [Kubernetes](https://kubernetes.io/).
## Kubernetes
First, what is [Kubernetes](https://kubernetes.io/)?
The machine configuration is 16C 32G, and the application itself does not have a performance problem. Since Kubernetes requires network interconnection between nodes, we use two machines when testing Kubernetes. In conclusion, the environment we are going to test is as follows:
Machine to be tested     Machine configuration   Number of machines
K8s Flannel VXLAN        16C 32G                 2
K8s Flannel host-gw      16C 32G                 2
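The only difference between the two test setups is the flannel backend. As a point of reference, a minimal sketch of the flannel network configuration kept in etcd (the key path, subnet, and values below are illustrative assumptions, not the actual test configuration):

{
  "Network": "10.1.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan"
  }
}

This object is typically written to flannel's etcd config key (commonly /coreos.com/network/config). Setting Backend.Type to vxlan encapsulates pod traffic in a VXLAN overlay, while host-gw installs direct routes on each host; host-gw avoids the encapsulation overhead and therefore usually performs better, but it requires the nodes to share a layer-2 network.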
Install and configure the gcloud command-line tool, and include the kubectl component (gcloud components install kubectl). If you do not want to install the gcloud client on your own machine, you can perform the same tasks through Google Cloud Shell.
Warning: you must set your default compute service account to include:
To set this, navigate to the IAM section of the Cloud Console and locate projectNumber-compute@developer.gserviceaccount.com.
"Editor's words" The Kubernetes Scheduler dispatches the pod to the work node according to a specific algorithm and strategy. By default, the Kubernetes scheduler can meet most of the requirements, such as scheduling pods to run on resource-rich nodes, or scheduling pod dispersal to different nodes to make cluster nodes resource balanced. However, in some special scenarios, the default scheduling algorithm
          mountPath: /logs
      volumes:
      - name: app-logs
        emptyDir: {}
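For context, a minimal self-contained sketch of the pattern this fragment belongs to (pod, container, and image names are hypothetical): one container writes log files into an emptyDir volume and a sidecar container reads them from the same volume:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logs           # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "while true; do date >> /logs/app.log; sleep 5; done"]
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  - name: log-reader
    image: busybox
    command: ["sh", "-c", "tail -F /logs/app.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}                # pod-lifetime scratch volume shared by both containers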
View the log with kubectl logs.
5. Configuration management of Pods
Kubernetes v1.2 provides a unified cluster configuration management solution: ConfigMap.
5.1. ConfigMap: configuration management for container applications
Usage scenarios: injected as environment variables inside a container; used to set startup parameters for the container's start command (passed in as environment variables); mounted as files or directories inside the container (see the sketch below).
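A minimal sketch (names, keys, and values are hypothetical) of a ConfigMap and a pod that consumes one of its keys as an environment variable:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # hypothetical name
data:
  loglevel: info
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo loglevel=$APP_LOGLEVEL && sleep 3600"]
    env:
    - name: APP_LOGLEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config      # the ConfigMap defined above
          key: loglevel

The same ConfigMap can also be mounted as a volume, in which case each key appears as a file inside the container.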
Use Kubernetes step by step
Why Docker and Kubernetes?
Containers allow us to build, publish, and run distributed applications. They free applications from machine restrictions and let us build complex applications in a predictable, repeatable way.
Writing applications with containers brings development and QA closer to the production environment (if you make the effort to do so). By doing so, you can publish changes faster.
-rw-rw-r-- 1 root root  453 Mar heapster-service.yaml
-rw-rw-r-- 1 root root  521 Mar heapster-deployment.yaml
-rw-rw-r-- 1 root root  695 Mar grafana-service.yaml
-rw-rw-r-- 1 root root 1417 Mar grafana-deployment.yaml
4. Modify the image in the corresponding configuration file, grafana-deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: doc...