Docker makes single-machine container virtualization much easier to manage; it sits between the operating system layer and the application layer.
Compared with traditional virtualization (KVM, Xen):
Docker implements application-layer functionality more flexibly and makes better use of resources.
Compared with plain applications:
Docker packages the application together with its operating environment (the image), reducing deployment and maintenance costs.
In this position, using Docker on a single machine for service deployment already feels like a clear improvement in quality. But for cross-machine, large-scale deployments with service-quality guarantees, Docker on its own is not enough, and traditional operations automation tools are an awkward fit whether they deploy directly on the host or manage Docker.
Kubernetes manages Docker clusters at scale, in a distributed fashion, and with high-availability guarantees.
1: Understanding the Kubernetes Concept
Kubernetes can be understood as an automated operations tool at the container level. Earlier automation tools aimed at the operating system (Linux, Windows), such as Puppet, SaltStack, and Chef, ensure that code, configuration files, and processes are in the correct state; their essence is maintaining state. Kubernetes is also state maintenance, just at the container level. Beyond maintaining container state, however, Kubernetes also has to solve the problem of communication between Docker containers across machines.
Related concepts
- Label: Kubernetes attaches many labels (key/value pairs) to pods, services, and RCs; labels are stored in etcd (a distributed, high-performance, persistent key-value store). Kubernetes uses etcd to solve the problems that message services (inter-service communication) and databases (data storage) solve in traditional architectures. See the sketch below for how labels are used.
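For instance, labels declared in a resource's metadata can later be used to select objects. A minimal sketch, where the label key/value name=frontend is just an illustration:

metadata:
  labels:
    name: frontend        # arbitrary key/value pair used for selection

# list only the pods carrying that label
kubectl get pods -l name=frontend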
Architecture implementation
The architecture is roughly divided into control nodes and compute nodes: the control node issues commands, and the compute nodes do the work.
Architecture diagram
First, try to understand the architecture from the diagram itself.
- 1: The real services run on the nodes (compute nodes); a node's services are exposed through the proxy and then out through the firewall.
- 2: Control nodes and compute nodes communicate via a REST API.
- 3: A user's command must be authorized before it is sent into the system through the API server.
- 4: The main processes on a compute node are kubelet and the proxy.
- 5: The control node is responsible for scheduling and state maintenance.
2: Kubernetes Deployment
Host Environment
- 192.168.56.110 (Kubernetes master)
- 192.168.56.111
- 192.168.56.112
Operating system: CentOS 7
etcd is deployed on 110 and 111; 110 is the Kubernetes control node, and 111 and 112 are the compute nodes.
Environment Preparation:
1: etcd
etcd is a distributed, high-performance, highly available key-value store developed and maintained by CoreOS, inspired by ZooKeeper and Doozer. It is written in Go and uses the Raft consensus algorithm for log replication to guarantee strong consistency.
- Simple: a curl-accessible API (HTTP + JSON); see the example after this list
- Secure: optional SSL client certificate authentication
- Fast: a single instance handles 1000 writes per second
- Reliable: uses Raft to ensure consistency
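To illustrate the "simple" point, the HTTP+JSON interface can be exercised with curl, assuming etcd's v2 keys API on the client port configured later in this setup (the key and value are just examples):

# write a key
curl -L http://192.168.56.110:4001/v2/keys/message -XPUT -d value="hello"
# read it back
curl -L http://192.168.56.110:4001/v2/keys/message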
1: Install the package:
yum install etcd -y
2: Edit the configuration: /etc/etcd/etcd.conf
# [member]
ETCD_NAME=192.168.56.110                        # member node name; must match the entries in ETCD_INITIAL_CLUSTER below
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"      # data directory
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.56.110:2380"     # cluster peer (sync) address and port
ETCD_LISTEN_CLIENT_URLS="http://192.168.56.110:4001"   # client communication address and port
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.56.110:2380"   # advertised peer URL for initialization
ETCD_INITIAL_CLUSTER="192.168.56.110=http://192.168.56.110:2380,192.168.56.111=http://192.168.56.111:2380"   # cluster members, format: <node name>=<peer URL>, separated by ","
ETCD_INITIAL_CLUSTER_STATE="new"                # initial state; becomes "existing" after initialization
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"       # cluster name
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.110:4001"   # advertised client URL
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#
#[security]
#ETCD_CA_FILE=""
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_PEER_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
Apart from ETCD_INITIAL_CLUSTER, which is identical on all nodes, the IPs in the other settings are each node's own IP.
The etcd configuration file does not support trailing comments, so when actually editing the file you need to delete the # comments at the end of each line.
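One way to strip those trailing comments in bulk, sketched here assuming GNU sed (a .bak backup is kept):

# remove everything from the first whitespace-preceded '#' to the end of each line,
# leaving whole-line comments (lines that start with '#') untouched
sed -r -i.bak 's/[[:space:]]+#.*$//' /etc/etcd/etcd.conf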
- 3: Start the service
systemctl enable etcd
systemctl start etcd
- 4: Verify
# etcdctl member list
dial tcp 127.0.0.1:2379: connection refused
# etcdctl connects to 127.0.0.1:2379 by default, but we are listening on 192.168.56.110:4001
# etcdctl -C 192.168.56.110:4001 member list
no endpoints available
# If the problem above persists, check whether the service has started
# netstat -lnp | grep etcd
tcp  0  0 192.168.56.110:4001  0.0.0.0:*  LISTEN  18869/etcd
tcp  0  0 192.168.56.110:2380  0.0.0.0:*  LISTEN  18869/etcd
# Then check that the port is reachable
telnet 192.168.56.111 4001
Trying 192.168.56.111 ...
Connected to 192.168.56.111.
Escape character is '^]'.
^C
# etcdctl -C 192.168.56.110:4001 member list
10f1c239a15ba875: name=192.168.56.110 peerURLs=http://192.168.56.110:2380 clientURLs=http://192.168.56.110:4001
f7132cc88f7a39fa: name=192.168.56.111 peerURLs=http://192.168.56.111:2380 clientURLs=http://192.168.56.111:4001
- 5: Prepare the network configuration
# etcdctl -C 192.168.56.110:4001 mk /coreos.com/network/config '{"Network": "10.0.0.0/16"}'
{"Network": "10.0.0.0/16"}
# etcdctl -C 192.168.56.110:4001 get /coreos.com/network/config
{"Network": "10.0.0.0/16"}
Kubernetes will use this network configuration later.
2: Kubernetes
3: Using Kubernetes
3.1 Basic Applications
Managing Kubernetes really means managing pods, RCs, and services. It is recommended to drive kubectl from configuration files rather than ad-hoc command lines; this is easier to manage and more standardized.
# kubectl create -h
Create a resource by filename or stdin.
JSON and YAML formats are accepted.
Usage:
  kubectl create -f FILENAME [flags]
Examples:
// Create a pod using the data in pod.json.
$ kubectl create -f pod.json
// Create a pod based on the JSON passed into stdin.
$ cat pod.json | kubectl create -f -
Format specification:
apiVersion: v1beta3              # supported versions can be listed with kubectl api-versions
kind: ReplicationController      # Pod, ReplicationController, or Service
metadata:                        # metadata, mainly name and labels
  name: test
spec:                            # configuration; the specific items differ depending on kind
  ...
Kubernetes accepts either YAML or JSON input files. JSON is more convenient when working with the API programmatically, while YAML is friendlier to people; the examples below use YAML.
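For comparison, the same skeleton expressed in JSON (the field values here are placeholders):

{
  "apiVersion": "v1beta3",
  "kind": "ReplicationController",
  "metadata": { "name": "test" },
  "spec": {}
}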
A typical business structure is similar to this:
+-----------+
| |
| Logic | # logic / processing service
| |
+---+--+----+
| |
+----+ +----+
| |
| |
+----V-----+ +----V----+
| | | |
| DB | | Redis | # other services it calls
| | | |
+----------+ +---------+
Idea: provide a complete set of services within each pod.
1: Prepare the images
- postgres: database image
- redis: cache image
- wechat: service image
2: RC configuration, wechat-rc.yaml:
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: wechatv4
  labels:
    name: wechatv4
spec:
  replicas: 1
  selector:
    name: wechatv4
  template:
    metadata:
      labels:
        name: wechatv4
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
      - name: postgres
        image: opslib/wechat_db
        ports:
        - containerPort: 5432
      - name: wechat
        image: opslib/wechat1
        ports:
        - containerPort: 80
Import the RC:
# kubectl create -f wechat-rc.yaml
replicationcontrollers/wechat
Confirm:
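One way to confirm, assuming kubectl is configured on the control node (exact output varies by version), is to list the RC and the pods carrying its label:

# kubectl get rc wechatv4
# kubectl get pods -l name=wechatv4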
Note:
In Docker the link feature can be used to connect containers, but Kubernetes has no such mechanism. Because containers in the same pod share the network and storage namespaces, however, the IPs in the wechat image's configuration entries for the database and Redis connections can simply be written as 127.0.0.1, similar to this:
sql_connection = 'postgresql://wechat:[email protected]/wechat'
cached_backend = 'redis://127.0.0.1:6379/0'
- 3: Service configuration, wechat-service.yaml
apiVersion: v1beta3
kind: Service
metadata:
  name: wechat
  labels:
    name: wechat
spec:
  ports:
  - port: 80
  selector:
    name: wechatv4
Import:
# kubectl create -f wechat-service.yaml
services/wechat
View:
# kubectl get service wechat
NAME      LABELS        SELECTOR        IP(S)            PORT(S)
wechat    name=wechat   name=wechatv4   192.168.56.156   80/TCP
Confirm:
# curl -i http://192.168.56.156
HTTP/1.1 200 OK
Content-Length: 0
Access-Control-Allow-Headers: X-Auth-Token, Content-Type
Server: TornadoServer/4.2
Etag: "da39a3ee5e6b4b0d3255bfef95601890afd80709"
Date: Mon, Jul 09:04:49 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Content-Type: application/json
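If the curl check fails, it can also be worth verifying that the service actually has endpoints behind it, i.e. that its selector matches running pods:

# kubectl get endpoints wechat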
3.2 Service Updates
Once the basic deployment is complete, Kubernetes can use a rolling update when the service needs to be upgraded, which essentially gives a hot update of the running service.
# kubectl rolling-update wechatv3 -f wechatv3.yaml
Creating wechatv4
At beginning of loop: wechatv3 replicas: 0, wechatv4 replicas: 1
Updating wechatv3 replicas: 0, wechatv4 replicas: 1
At end of loop: wechatv3 replicas: 0, wechatv4 replicas: 1
Update succeeded. Deleting wechatv3
wechatv4
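A rolling update works by creating a new RC whose name and selector differ from the old one's and shifting replicas over one at a time; here wechatv4 plays that role relative to wechatv3, and typically the container image is what actually changes. A sketch of the relevant excerpt of the new RC (the image tag opslib/wechat2 is an assumption):

metadata:
  name: wechatv4             # new RC name, must differ from wechatv3
spec:
  selector:
    name: wechatv4           # new selector, must differ from wechatv3's
  template:
    metadata:
      labels:
        name: wechatv4
    spec:
      containers:
      - name: wechat
        image: opslib/wechat2   # hypothetical new image version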
3.3 Application Management
Sometimes the same service needs to run as multiple instances: the service itself is identical, but the configuration each instance starts with is not.
In general there are three common requirements:
- 1: Different containers need different resource limits
- 2: Different containers mount different directories
- 3: Different containers run different startup commands
All of these can be set per container in the configuration file, as in the example below.
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: new
  labels:
    name: new
spec:
  replicas: 1
  selector:
    name: new
  template:
    metadata:
      labels:
        name: new
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
      - name: postgres
        image: opslib/wechat_db
        ports:
        - containerPort: 5432
      - name: wechat
        image: opslib/wechat1
        command:                  # the container's start command is defined here rather than in the image
        - '/bin/bash'
        - '-c'
        - '/usr/bin/wechat_api'
        - '--config=/etc/wechat/wechat.conf'
        resources:                # constrain the container's resources
          request:                # requested resources
            cpu: "0.5"
            memory: "512Mi"
          limits:                 # maximum resources the container may use
            cpu: "1"
            memory: "1024Mi"
        ports:
        - containerPort: 80
        volumeMounts:             # mount directories
        - name: data
          mountPath: /data
      volumes:
      - name: data
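The volume above does not specify a source; assuming a simple scratch directory shared between the pod's containers is enough, an emptyDir could be added, for example:

      volumes:
      - name: data
        emptyDir: {}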
Reference articles:
- Kubernetes system architecture introduction: http://www.infoq.com/cn/articles/Kubernetes-system-architecture-introduction
- etcd: a key-value store for service discovery: http://www.infoq.com/cn/news/2014/07/etcd-cluster-discovery
- Kubernetes deployment: http://blog.opskumu.com/k8s-cluster-centos7.html
By harvey_l (Jianshu author)
Original link: http://www.jianshu.com/p/40d171c3b950
Copyright belongs to the author. Please contact the author for authorization to reproduce and credit the "Jianshu author".