The reference URLs:
https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/ubuntu-calico.md
https://github.com/projectcalico/calico-docker/blob/master/docs/kubernetes/KubernetesIntegration.md

I have 3 hosts: 10.11.151.97, 10.11.151.100 and 10.11.151.101. Unfortunately, none of the 3 hosts has Internet access. Following the guides, I built the Kubernetes cluster in "bash command" mode rather than the "service" mode described in the references. 10.11.151.97 is the Kubernetes master and the other two are its nodes.

1, Run etcd cluster
etcd_token=kb3-etcd-cluster
local_name=kbetcd0
local_ip=10.11.151.97
local_peer_port=4010
local_client_port1=4011
local_client_port2=4012
node1_name=kbetcd1
node1_ip=10.11.151.100
node1_port=4010
node2_name=kbetcd2
node2_ip=10.11.151.101
node2_port=4010

./etcd -name $local_name \
    -initial-advertise-peer-urls http://$local_ip:$local_peer_port \
    -listen-peer-urls http://0.0.0.0:$local_peer_port \
    -listen-client-urls http://0.0.0.0:$local_client_port1,http://0.0.0.0:$local_client_port2 \
    -advertise-client-urls http://$local_ip:$local_client_port1,http://$local_ip:$local_client_port2 \
    -initial-cluster-token $etcd_token \
    -initial-cluster $local_name=http://$local_ip:$local_peer_port,$node1_name=http://$node1_ip:$node1_port,$node2_name=http://$node2_ip:$node2_port \
    -initial-cluster-state new &
Run etcd with this command on each of the three hosts (adjusting the local_* and node*_ variables for that host), since etcd should run in cluster mode. If it succeeds, you should see "published {Name: *} to cluster *" in the output. A quick way to confirm the three members really formed one cluster is sketched below.
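This is just a minimal sanity check, assuming the etcdctl binary from the same etcd release sits next to ./etcd and using client port 4011 from the variables above:

# all three members should be listed and reported healthy
./etcdctl --peers http://127.0.0.1:4011 cluster-health

# or ask the HTTP API of any member for the member list
curl http://127.0.0.1:4011/v2/members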
2, Setup Master

2.1 Start Kubernetes

Run kube-apiserver:

./kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4012 --kubelet_port=10250 --allow_privileged=false --service-cluster-ip-range=172.16.0.0/12 --insecure-bind-address=0.0.0.0 --insecure-port=8080 2>&1 > apiserver.out &
Run kube-controller-manager:
./kube-controller-manager --logtostderr=true --v=0 --master=http://tc-151-97:8080 --cloud-provider="" 2>&1 > controller.out &
Run kube-scheduler:
./kube-scheduler --logtostderr=true --v=0 --master=http://tc-151-97:8080 2>&1 > scheduler.out &
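Before installing Calico I checked that the three master components could reach each other; a minimal sketch, assuming kubectl from the same release is in the current directory and the insecure port 8080 configured above:

# etcd, the scheduler and the controller-manager should all report Healthy
./kubectl -s http://127.0.0.1:8080 get componentstatuses

# the apiserver itself should answer on the insecure port
curl http://127.0.0.1:8080/version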
2.2 Install Calico on the Master
sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl node
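calicoctl node just starts the calico/node container, so a quick way to confirm it actually came up (a sketch; calicoctl status is the same sub-command used on the nodes further down):

# the calico/node container should be listed and stay up
docker ps | grep calico-node

# Felix and BGP state as reported by calicoctl
sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status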
3, Setup Nodes

3.1 Install Calico

Since the nodes have no Internet access, I downloaded the Calico plugin manually from:
https://github.com/projectcalico/calico-kubernetes/releases/tag/v0.6.0
Move the plugin to the Kubernetes plugin directory:
sudo mv calico_kubernetes /usr/libexec/kubernetes/kubelet-plugins/net/exec/calico/calico
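As far as I understand, the kubelet exec network plugin is only picked up if the binary is executable and keeps the same name as its directory, so I also checked that after the move (a sketch):

# the plugin must be executable and named like its directory (.../calico/calico)
sudo chmod +x /usr/libexec/kubernetes/kubelet-plugins/net/exec/calico/calico
ls -l /usr/libexec/kubernetes/kubelet-plugins/net/exec/calico/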
Start Calico:
sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl node
3.2 Start kubelet with the Calico network

Start the kubelet with the --network-plugin parameter:
./kube-proxy --logtostderr=true --v=0 --master=http://tc-151-97:8080 --proxy-mode=iptables &

./kubelet --logtostderr=true --v=0 --api_servers=http://tc-151-97:8080 --address=0.0.0.0 --network-plugin=calico --allow_privileged=false --pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest &
Here is the kubelet command output:
I1124 15:11:52.226324 28368 server.go:808] Watching apiserver
I1124 15:11:52.393448 28368 plugins.go:56] Registering credential provider: .dockercfg
I1124 15:11:52.398087 28368 server.go:770] Started kubelet
E1124 15:11:52.398190 28368 kubelet.go:756] Image garbage collection failed: unable to find data for container /
I1124 15:11:52.398165 28368 server.go: Starting to listen in 0.0.0.0:10250
W1124 15:11:52.401695 28368 kubelet.go:775] Failed to move Kubelet to container "/kubelet": write /sys/fs/cgroup/memory/kubelet/memory.swappiness: invalid argument
I1124 15:11:52.401748 28368 kubelet.go:777] Running in container "/kubelet"
I1124 15:11:52.497377 28368 factory.go:194] System is using systemd
I1124 15:11:52.610946 28368 kubelet.go:885] Node tc-151-100 was previously registered
I1124 15:11:52.734788 28368 factory.go:236] Registering Docker factory
I1124 15:11:52.735851 28368 factory.go:93] Registering Raw factory
I1124 15:11:52.969060 28368 manager.go:1006] Started watching for new ooms in manager
I1124 15:11:52.969114 28368 oomparser.go:199] OOM parser using kernel log file: "/var/log/messages"
I1124 15:11:52.970296 28368 manager.go:250] Starting recovery of all containers
I1124 15:11:53.148967 28368 manager.go:255] Recovery completed
I1124 15:11:53.240408 28368 manager.go:104] Starting to sync pod status with apiserver
I1124 15:11:53.240439 28368 kubelet.go:1953] Starting kubelet main sync loop.
I do not know whether the kubelet is running correctly. Can someone tell me how to verify it? The only check I could think of is asking the apiserver whether the node registered (sketched below), but I am not sure that proves the Calico plugin is actually wired in. I did the same process on the other node.
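A minimal sketch of that check, run from the master and assuming kubectl from the same release and the apiserver address used above:

# both nodes should be listed and eventually report STATUS Ready
./kubectl -s http://tc-151-97:8080 get nodes

# conditions, capacity and recent events for one node
./kubectl -s http://tc-151-97:8080 describe node tc-151-100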
4, Create some pods and test

test.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: test-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-1
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-2
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-3
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-4
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-4
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101
./kubectl create -f test.yaml
This command creates 4 pods: 2 on 10.11.151.100 and 2 on 10.11.151.101.
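To confirm that the nodeSelector really placed two pods on each node, the node each pod was scheduled to can be read back from the apiserver (a sketch):

# print each pod together with the node it was scheduled to
./kubectl describe pods | grep -E '^(Name|Node):'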
[@tc_151_97 /home/domeos/openxxs/bin]# ./kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
test-1-1ztr2   1/1       Running   0          5m
test-2-8p2sr   1/1       Running   0          5m
test-3-1hkwa   1/1       Running   0          5m
test-4-jbdbq   1/1       Running   0          5m
[@tc-151-100 /home/domeos/openxxs/bin]# docker ps
CONTAINER ID    IMAGE                                       COMMAND               CREATED         STATUS          PORTS   NAMES
6dfc83ec1d12    10.11.150.76:5000/openxxs/iperf:1.2         "/block"              6 minutes ago   Up 6 minutes            k8s_iperf.a4ede594_test-1-1ztr2_default_f1b54d0b-927c-11e5-a77a-782bcb435e46_ca4496d0
78087a93da00    10.11.150.76:5000/openxxs/iperf:1.2         "/block"              6 minutes ago   Up 6 minutes            k8s_iperf.a4ede594_test-2-8p2sr_default_f1c2da7d-927c-11e5-a77a-782bcb435e46_330d815c
f80a1474f4c4    10.11.150.76:5000/kubernetes/pause:latest   "/pause"              6 minutes ago   Up 6 minutes            k8s_pod.34f4dfd2_test-2-8p2sr_default_f1c2da7d-927c-11e5-a77a-782bcb435e46_af7199c0
eb14879757e6    10.11.150.76:5000/kubernetes/pause:latest   "/pause"              6 minutes ago   Up 6 minutes            k8s_pod.34f4dfd2_test-1-1ztr2_default_f1b54d0b-927c-11e5-a77a-782bcb435e46_af2cc1c3
8accff535ff9    calico/node:latest                          "/sbin/start_runit"   minutes ago     Up minutes              calico-node
On node 10.11.151.100, the Calico status:
[@tc-151-100 ~/baoquanwang/calico-docker-utils]$ sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status
calico-node container is running. Status: Up minutes
Running felix version 1.2.0

IPv4 BGP status
+---------------+-------------------+-------+----------+------------------------------------------+
| Peer Address  |     Peer Type     | State |  Since   |                   Info                   |
+---------------+-------------------+-------+----------+------------------------------------------+
| 10.11.151.101 | node-to-node mesh | start | 07:18:44 |    Connect Socket: Connection refused    |
| 10.11.151.97  | node-to-node mesh | start | 07:07:40 |    Active Socket: Connection refused     |
+---------------+-------------------+-------+----------+------------------------------------------+

IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer Address | Peer Type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+
However, on the other node 10.11.151.101:
[@tc-151-101 ~/baoquanwang/calico-docker-utils]$ sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status
calico-node container is running. Status: Up 2 minutes
Running felix version 1.2.0

IPv4 BGP status
Unable to connect to server control socket (/etc/service/bird/bird.ctl): Connection refused

IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer Address | Peer Type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+
What has happened?
Also, there are no Calico IP routes on either node:
[@tc-151-100 ~/baoquanwang/calico-docker-utils]$ ip route
default via 10.11.151.254 dev em1 proto static metric 1024
10.11.151.0/24 dev em1 proto kernel scope link src 10.11.151.100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
[@tc-151-101 ~/baoquanwang/calico-docker-utils]$ ip route
default via 10.11.151.254 dev em1 proto static metric 1024
10.11.151.0/24 dev em1 proto kernel scope link src 10.11.151.101
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
There is no log output in /var/log/calico/kubernetes/calico.log.
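The "Connection refused" BGP peer states and the missing BIRD control socket make me suspect BIRD either never started or cannot reach its peers on TCP port 179. These are the generic checks I plan to run next on 10.11.151.101 (a sketch, not Calico-specific tooling):

# is anything listening on the BGP port on this host?
sudo netstat -tlnp | grep ':179'

# can this node reach the BGP port on the other hosts at all?
nc -zv 10.11.151.97 179
nc -zv 10.11.151.100 179

# is iptables dropping or rejecting the traffic?
sudo iptables -L -n | grep -i -E 'drop|reject'

# logs of the calico-node container itself (BIRD and Felix run under runit inside it, as seen in docker ps)
docker logs calico-node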