Kubernetes Study Notes

Source: Internet
Author: User
Tags: etcd, kubernetes, docker, kubernetes deployment

Docker makes single-machine container virtualization much easier to manage; it sits between the operating system layer and the application layer.

    • Compared with traditional virtualization (KVM, Xen):

      Docker is more flexible at implementing application-layer functionality, and its resource utilization is higher.

    • Compared with deploying the application directly:

      Docker packages the application together with its operating-system dependencies (as an image), reducing the cost of deployment and maintenance.

      In this position, using Docker on a single machine for business deployment already brings a noticeable improvement in quality. But for cross-machine, large-scale deployments with guaranteed business quality, Docker by itself falls short, and traditional operations-automation tools feel awkward whether they deploy directly onto hosts or manage Docker.

Kubernetes manages Docker clusters at large scale, in a distributed fashion, while ensuring high availability.

1: Understanding Kubernetes Concepts

Kubernetes can be understood as an automated operations tool at the container level. Earlier automation tools aimed at the operating system (Linux, Windows), such as Puppet, SaltStack, and Chef, make sure that code, configuration files, and processes are in the correct state; in essence they maintain state. Kubernetes is also state maintenance, only at the container level. Beyond state maintenance, however, Kubernetes also has to solve the problem of communication between Docker containers across machines.

Related concepts
    • 1: Pod

      • A pod is a collection of containers; each pod can hold one or more containers. For ease of management, the containers that run the same business are placed in one pod.
      • Containers in the same pod share the same system stack (network, storage).
      • All containers of a pod run on the same machine.
    • 2: Replication Controller

      • Because the name is long, it is abbreviated as RC below (Kubernetes also finds the name long and uses RC as well).
      • An RC manages pods: it is responsible for keeping the defined number of pods running in the cluster at all times, automatically killing extras and automatically adding pods when there are too few.
      • An RC creates pods from a predefined pod template; once created successfully, running pod instances do not change when the template changes.
      • An RC is associated with its pods through a selector (a kind of label).
      • When the number of pods defined in an RC is changed, the RC automatically adjusts the number of running pods to match.
      • RC also has a very useful mechanism:
        • rolling updates; for example, if a service currently has 5 running pods and the business in the pod needs to be updated, the pods can be replaced one by one to update the whole RC.
    • 3: Service

      • A service is the interface that actually exposes the business: it publishes the services provided by pods to the outside network, and each service is backed by one or more pods.
    • 4: Label
      • A label is a tag. Kubernetes attaches many labels (key/value pairs) to pods, services, and RCs; labels are stored in etcd (a distributed, high-performance, persistent key-value store). Kubernetes uses etcd to solve what a traditional service would handle with inter-service communication (message queues) and data storage (databases). A command-line sketch of how labels and selectors are queried follows this list.
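
As a rough sketch (assuming a running cluster and the name=wechatv4 label used later in these notes), selectors can be queried the same way from the command line:

    # list only the pods carrying the label that the RC/service selectors use
    kubectl get pods -l name=wechatv4
    # labels also appear in the LABELS column of the default output
    kubectl get pods
    kubectl get rc
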
Architecture implementation

The architecture is broadly divided into control nodes and compute nodes: the control node issues commands, and the compute nodes do the work.


Architecture diagram


First, try to understand the architecture from the diagram itself.

    • 1: The components that really provide the service are the nodes (compute nodes); a node's services go out through the proxy and then through the firewall.
    • 2: Control nodes and compute nodes communicate via a REST API (a curl sketch follows this list).
    • 3: A user's command is authorized and then sent into the system through the API server.
    • 4: The main processes on a compute node are kubelet and kube-proxy.
    • 5: The control node is responsible for scheduling and state maintenance.
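
Because everything goes through the API server's REST interface, you can also talk to it directly with curl. A minimal sketch, assuming the API server listens on port 8080 of 192.168.56.110 as configured in the deployment below:

    # list the API versions the server supports (plain HTTP + JSON)
    curl http://192.168.56.110:8080/api
    # report the server version; this is the same interface kubectl uses
    curl http://192.168.56.110:8080/version
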
2: Kubernetes Deployment

Host Environment

    • 192.168.56.110
      • etcd
      • Kubernetes master
    • 192.168.56.111
      • etcd
      • Kubernetes node
    • 192.168.56.112
      • Kubernetes node

    Operating system: CentOS 7

110 and 111 run etcd; 110 is the Kubernetes control node, and 111 and 112 are the compute nodes.

Environment preparation:

    • Install the EPEL repository:
         yum install epel-release
    • Shut down the firewall:
        systemctl stop firewalld
      systemctl disable firewalld
1: etcd

etcd is a distributed, high-performance, highly available key-value storage system developed and maintained by CoreOS. Inspired by ZooKeeper and Doozer, it is written in Go and handles log replication with the Raft consensus algorithm to guarantee strong consistency.

  • Simple: the user-facing API is accessible with curl (HTTP + JSON); see the sketch after this list
  • Secure: optional SSL client-certificate authentication
  • Fast: a single instance handles 1000 writes per second
  • Reliable: uses Raft to ensure consistency
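
Because the API is plain HTTP + JSON, curl is all you need. A minimal sketch against the etcd v2 keys API, assuming an instance reachable on the client port used in the deployment below (4001):

    # write a key
    curl -L http://192.168.56.110:4001/v2/keys/message -XPUT -d value="hello"
    # read it back as JSON
    curl -L http://192.168.56.110:4001/v2/keys/message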

  • 1: Install the package:

      yum install etcd -y
  • 2: Edit the configuration: /etc/etcd/etcd.conf

    # [member]
    ETCD_NAME=192.168.56.110                         # member node name; must match the entries in ETCD_INITIAL_CLUSTER below
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"       # data storage directory
    #ETCD_SNAPSHOT_COUNTER="10000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    ETCD_LISTEN_PEER_URLS="http://192.168.56.110:2380"      # cluster peer (sync) address and port
    ETCD_LISTEN_CLIENT_URLS="http://192.168.56.110:4001"    # client communication port
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    #ETCD_CORS=""
    #
    #[cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.56.110:2380"   # peer URL advertised during initialization
    ETCD_INITIAL_CLUSTER="192.168.56.110=http://192.168.56.110:2380,192.168.56.111=http://192.168.56.111:2380"   # cluster members; format: <node name>=<peer URL>, entries separated by ","
    ETCD_INITIAL_CLUSTER_STATE="new"                 # initial state; becomes "existing" after initialization
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"        # cluster name
    ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.110:4001"         # client URL advertised to the cluster
    #ETCD_DISCOVERY=""
    #ETCD_DISCOVERY_SRV=""
    #ETCD_DISCOVERY_FALLBACK="proxy"
    #ETCD_DISCOVERY_PROXY=""
    #
    #[proxy]
    #ETCD_PROXY="off"
    #
    #[security]
    #ETCD_CA_FILE=""
    #ETCD_CERT_FILE=""
    #ETCD_KEY_FILE=""
    #ETCD_PEER_CA_FILE=""
    #ETCD_PEER_CERT_FILE=""
    #ETCD_PEER_KEY_FILE=""

    Apart from ETCD_INITIAL_CLUSTER, which is identical on all nodes, the IPs in the other settings are the local machine's IP.
    The etcd configuration file does not support trailing comments on a line, so in an actual configuration you must delete the "#" comments shown after each line above.
  • 3: Start the service
      systemctl enable etcd
    systemctl start etcd
  • 4: Verify
     # etcdctl member list
    dial tcp 127.0.0.1:2379: connection refused
    # etcdctl connects to 127.0.0.1:2379 by default, but we listen on 192.168.56.110:4001
    # etcdctl -C 192.168.56.110:4001 member list
    no endpoints available
    # if the problem above persists, check whether the service actually started
    # netstat -lnp | grep etcd
    tcp 0 0 192.168.56.110:4001 0.0.0.0:* LISTEN 18869/etcd
    tcp 0 0 192.168.56.110:2380 0.0.0.0:* LISTEN 18869/etcd
    # then check whether the port is reachable from the other node
    telnet 192.168.56.111 4001
    Trying 192.168.56.111 ...
    Connected to 192.168.56.111.
    Escape character is '^]'.
    ^C
    # etcdctl -C 192.168.56.110:4001 member list
    10f1c239a15ba875: name=192.168.56.110 peerURLs=http://192.168.56.110:2380 clientURLs=http://192.168.56.110:4001
    f7132cc88f7a39fa: name=192.168.56.111 peerURLs=http://192.168.56.111:2380 clientURLs=http://192.168.56.111:4001
  • 5: Prepare
      # etcdctl -C 192.168.56.110:4001 mk /coreos.com/network/config '{"Network": "10.0.0.0/16"}'
    {"Network": "10.0.0.0/16"}
    # etcdctl -C 192.168.56.110:4001 get /coreos.com/network/config
    {"Network": "10.0.0.0/16"}

    The Kubernetes (flannel) configuration later on will use this network setting. (An extra cluster health check is sketched below.)
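
Before moving on, a hedged sanity check (cluster-health is an etcdctl v2 subcommand) to confirm that both members are healthy:

    # ask etcd whether every member of the cluster is healthy
    etcdctl -C http://192.168.56.110:4001 cluster-health
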
2: Kubernetes
  • 1: Control node installation

    • 1: Package installation
      yum -y install kubernetes
    • 2: Configuration file: /etc/kubernetes/apiserver

        ###
      # kubernetes system config
      #
      # The following values are used to configure the kube-apiserver
      #

      # The address on the local server to listen to.
      KUBE_API_ADDRESS="--address=0.0.0.0"

      # The port on the local server to listen on.
      KUBE_API_PORT="--port=8080"

      # Port minions listen on
      KUBELET_PORT="--kubelet_port=10250"

      # Comma separated list of nodes in the etcd cluster
      #KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"
      KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.56.110:4001,http://192.168.56.111:4001"
      # changed to point at the etcd servers we configured above

      # Address range to use for services
      KUBE_SERVICE_ADDRESSES="--portal_net=192.168.56.150/28"
      # a network segment not otherwise in use; Kubernetes exposes services through addresses from this range

      # default admission control policies
      KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota"

      # Add your own!
      KUBE_API_ARGS=""


      The Kubernetes configuration files do not support trailing comments either; in an actual deployment, delete the explanation after each line.
    • 3: Start the service

      The apiserver's startup script as shipped has problems; adjust it as follows:
      /usr/lib/systemd/system/kube-apiserver.service

        [Unit]
      Description=Kubernetes API Server
      Documentation=https://github.com/GoogleCloudPlatform/kubernetes

      [Service]
      PermissionsStartOnly=true
      ExecStartPre=-/usr/bin/mkdir /var/run/kubernetes
      ExecStartPre=-/usr/bin/chown -R kube:kube /var/run/kubernetes/
      EnvironmentFile=-/etc/kubernetes/config
      EnvironmentFile=-/etc/kubernetes/apiserver
      User=kube
      ExecStart=/usr/bin/kube-apiserver \
                  $KUBE_LOGTOSTDERR \
                  $KUBE_LOG_LEVEL \
                  $KUBE_ETCD_SERVERS \
                  $KUBE_API_ADDRESS \
                  $KUBE_API_PORT \
                  $KUBELET_PORT \
                  $KUBE_ALLOW_PRIV \
                  $KUBE_SERVICE_ADDRESSES \
                  $KUBE_ADMISSION_CONTROL \
                  $KUBE_API_ARGS
      Restart=on-failure
      LimitNOFILE=65536

      [Install]
      WantedBy=multi-user.target

      Start the services
        systemctl enable kube-apiserver kube-controller-manager kube-scheduler
      systemctl restart kube-apiserver kube-controller-manager kube-scheduler
    • 4: Verify
        # ps aux | grep kube
      kube 20505 5.4 1.6 45812 30808 ? Ssl 22:05 0:07 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://192.168.56.110:2380,http://192.168.56.110:2380 --address=0.0.0.0 --allow_privileged=false --portal_net=192.168.56.0/24 --admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota
      kube 20522 1.8 0.6 24036 12064 ? Ssl 22:05 0:02 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --machines=127.0.0.1 --master=http://127.0.0.1:8080
      kube 20539 1.3 0.4 17420 8760 ? Ssl 22:05 0:01 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080
      # kubectl cluster-info
      Kubernetes master is running at http://localhost:8080
  • 2: Compute node installation

    • 1: Package installation
      yum -y install kubernetes docker flannel bridge-utils net-tools
    • 2: Configuration files
      • /etc/kubernetes/config
        ###
        # kubernetes system config
        #
        # The following values are used to configure various aspects of all
        # kubernetes services, including
        #
        #   kube-apiserver.service
        #   kube-controller-manager.service
        #   kube-scheduler.service
        #   kubelet.service
        #   kube-proxy.service

        # logging to stderr means we get it in the systemd journal
        KUBE_LOGTOSTDERR="--logtostderr=true"
        # journal message level, 0 is debug
        KUBE_LOG_LEVEL="--v=0"
        # Should this cluster be allowed to run privileged docker containers
        KUBE_ALLOW_PRIV="--allow_privileged=false"
        # How the controller-manager, scheduler, and proxy find the apiserver
        KUBE_MASTER="--master=http://192.168.56.110:8080"    # change this IP to the control node's IP
      • /etc/kubernetes/kubelet
        ###
        # kubernetes kubelet (minion) config

        # The address for the info server to serve on
        KUBELET_ADDRESS="--address=192.168.56.111"    # this machine's address

        # The port for the info server to serve on
        KUBELET_PORT="--port=10250"

        # You may leave this blank to use the actual hostname
        KUBELET_HOSTNAME="--hostname_override=192.168.56.111"    # this machine's address

        # location of the api-server
        KUBELET_API_SERVER="--api_servers=http://192.168.56.110:8080"    # control node's address

        # Add your own!
        KUBELET_ARGS="--pod-infra-container-image=docker.io/kubernetes/pause:latest"
        # The kubelet depends on the "pause" image to start pods. By default it is pulled from Google's
        # image registry, which is not reachable here, so the download fails; we therefore point it at
        # the Docker Hub image instead.
        # Image download: docker pull docker.io/kubernetes/pause
      • /etc/sysconfig/flanneld
        # Flanneld configuration options

        # etcd url location. Point this at the servers where etcd runs
        FLANNEL_ETCD="http://192.168.56.110:4001,http://192.168.56.111:4001"    # change to the etcd service addresses

        # etcd config key. This is the configuration key that flannel queries
        # for address range assignment
        FLANNEL_ETCD_KEY="/coreos.com/network"

        # Any additional options that you want to pass
        #FLANNEL_OPTIONS=""
    • 3: Service modification

      The default kubelet service unit shipped with Kubernetes has problems and needs to be adjusted:

      cat /usr/lib/systemd/system/kubelet.service

        [Unit]
      Description=Kubernetes Kubelet Server
      Documentation=https://github.com/GoogleCloudPlatform/kubernetes
      After=docker.service
      Requires=docker.service

      [Service]
      WorkingDirectory=/var/lib/kubelet
      EnvironmentFile=-/etc/kubernetes/config
      EnvironmentFile=-/etc/kubernetes/kubelet
      ExecStart=/usr/bin/kubelet \
                  $KUBE_LOGTOSTDERR \
                  $KUBE_LOG_LEVEL \
                  $KUBELET_API_SERVER \
                  $KUBELET_ADDRESS \
                  $KUBELET_PORT \
                  $KUBELET_HOSTNAME \
                  $KUBE_ALLOW_PRIV \
                  $KUBELET_ARGS
      LimitNOFILE=65535
      LimitNPROC=10240
      Restart=on-failure

      [Install]
      WantedBy=multi-user.target

      Adjust the Docker network (flannel has to take over the container network, so remove the default docker0 bridge before restarting Docker):
        systemctl start docker
      systemctl stop docker
      ifconfig docker0 down
      brctl delbr docker0

      Start the services

        systemctl enable kube-proxy kubelet flanneld docker
      systemctl restart kube-proxy kubelet flanneld docker
    • Verify (a flannel/Docker network check is sketched after this section)
        # kubectl get nodes
      NAME             LABELS                                  STATUS
      192.168.56.111   kubernetes.io/hostname=192.168.56.111   Ready
      192.168.56.112   kubernetes.io/hostname=192.168.56.112   Ready
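
It is also worth checking that flannel picked up the 10.0.0.0/16 network written into etcd earlier and that Docker now uses a subnet from it. A minimal sketch; the file and interface names assume flannel's default UDP backend:

    # flannel records the subnet it leased in this environment file
    cat /run/flannel/subnet.env
    # flannel0 and the recreated docker0 bridge should both sit inside 10.0.0.0/16
    ifconfig flannel0
    ifconfig docker0
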
3: Kubernetes Usage

3.1 Basic Applications

Managing Kubernetes really means managing pods, RCs, and services. Although this can be done directly on the command line, configuration-file-based management is recommended: it is easier to maintain and more standardized.

    kubectl create -h
Create a resource by filename or stdin.
JSON and YAML formats are accepted.
Usage:
  kubectl create -f FILENAME [flags]
Examples:
Create a pod using the data in pod.json.
  $ kubectl create -f pod.json
Create a pod based on the JSON passed to stdin.
  $ cat pod.json | kubectl create -f -
    • Format specification:

        kubectl api-versions                  # check which apiVersion values are supported
      apiVersion: v1beta3
      kind: ReplicationController             # Pod, ReplicationController, Service
      metadata:                               # metadata, mostly name and labels
        name: test
      spec:                                   # configuration; the specific items differ depending on kind
        ***

      Kubernetes accepts either YAML or JSON input files. JSON is more convenient when working with the API programmatically, while YAML is friendlier to people; the examples below use YAML.
      A typical business structure looks something like this:

            +-----------+
            |           |
            |   Logic   |        # logic / processing service
            |           |
            +---+---+---+
                |   |
           +----+   +----+
           |             |
      +----v-----+  +----v----+
      |          |  |         |
      |    DB    |  |  Redis  |  # other services being called
      |          |  |         |
      +----------+  +---------+

Idea: provide the complete set of services inside each pod.

  • 1: Prepare the images

    • postgres: database image
    • redis: caching service image
    • wechat: business service image
  • 2: RC configuration wechat-rc.yaml:

      apiVersion: v1beta3
      kind: ReplicationController
      metadata:
        name: wechatv4
        labels:
          name: wechatv4
      spec:
        replicas: 1
        selector:
          name: wechatv4
        template:
          metadata:
            labels:
              name: wechatv4
          spec:
            containers:
            - name: redis
              image: redis
              ports:
              - containerPort: 6379
            - name: postgres
              image: opslib/wechat_db
              ports:
              - containerPort: 5432
            - name: wechat
              image: opslib/wechat1
              ports:
              - containerPort: 80

    Import the RC
      # kubectl create -f wechat-rc.yaml
    replicationcontrollers/wechat

    Confirm (additional checks are sketched at the end of this section)
    Note:
    In Docker, the link feature can be used to connect containers. Kubernetes has no such mechanism, but because the containers of one pod share the network and storage namespaces, the configuration entries in the wechat image for connecting to the database and Redis can simply use '127.0.0.1', similar to this:
      sql_connection = 'postgresql://wechat:<password>@127.0.0.1/wechat'
    cached_backend = 'redis://127.0.0.1:6379/0'
  • 3: Service configuration wechat-service.yaml
      apiVersion: v1beta3
      kind: Service
      metadata:
        name: wechat
        labels:
          name: wechat
      spec:
        ports:
        - port: 80
        selector:
          name: wechatv4

    Import
      # kubectl create -f wechat-service.yaml
    services/wechat

    View
      kubectl get service wechat
    NAME     LABELS        SELECTOR        IP(s)            PORT(s)
    wechat   name=wechat   name=wechatv4   192.168.56.156   80/TCP

    Confirm
      # curl -i http://192.168.56.156
    HTTP/1.1 200 OK
    Content-Length: 0
    Access-Control-Allow-Headers: X-Auth-Token, Content-Type
    Server: TornadoServer/4.2
    Etag: "da39a3ee5e6b4b0d3255bfef95601890afd80709"
    Date: Mon, Jul 09:04:49 GMT
    Access-Control-Allow-Origin: *
    Access-Control-Allow-Methods: GET, POST, PUT, DELETE
    Content-Type: application/json
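
One more hedged check (assuming the endpoints resource is exposed in this version): the service should list the pod created by the RC as its backend, and the pod can be found through the same label the selector uses.

    # the endpoints behind the service are the backing pod IP(s)
    kubectl get endpoints wechat
    # the backing pod itself, selected by the service's selector label
    kubectl get pods -l name=wechatv4
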
3.2 Business Updates

After the basic business deployment is complete, Kubernetes can use a rolling update when the service needs to be updated, which essentially gives you a hot update of the business. The new RC (wechatv4 here) must have a different name, and typically a different selector label, than the old one (wechatv3); kubectl replaces the pods one at a time and deletes the old RC when it finishes.

# kubectl rolling-update wechatv3 -f wechatv3.yaml
Creating wechatv4
At beginning of loop: wechatv3 replicas: 0, wechatv4 replicas: 1
Updating wechatv3 replicas: 0, wechatv4 replicas: 1
At end of loop: wechatv3 replicas: 0, wechatv4 replicas: 1
Update succeeded. Deleting wechatv3
wechatv4
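
To confirm the result, a minimal sketch: after the update only the new RC should remain, and its pods carry the new label.

# the old RC (wechatv3) has been deleted by rolling-update; only wechatv4 should be listed
kubectl get rc
kubectl get pods -l name=wechatv4
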
3.3 Application Management

Sometimes multiple instances of the same service need to be started, where the service itself is the same but the startup configuration differs.
In general there are three requirements:

    • 1: Different containers get different resource limits
    • 2: Different containers mount different directories
    • 3: Different containers run different startup commands

These can all be set per container in the configuration file; a way to verify the result is sketched after the example below.

apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: new
  labels:
    name: new
spec:
  replicas: 1
  selector:
    name: new
  template:
    metadata:
      labels:
        name: new
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
      - name: postgres
        image: opslib/wechat_db
        ports:
        - containerPort: 5432
      - name: wechat
        image: opslib/wechat1
        command:                      # the container's start command, defined here rather than in the image
        - '/bin/bash'
        - '-c'
        - '/usr/bin/wechat_api --config=/etc/wechat/wechat.conf'
        resources:                    # limit the container's resources
          request:                    # requested resources
            cpu: "0.5"
            memory: "512Mi"
          limits:                     # maximum resources it may use
            cpu: "1"
            memory: "1024Mi"
        ports:
        - containerPort: 80
        volumeMounts:                 # mount a directory
        - name: data
          mountPath: /data
      volumes:
      - name: data
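
A hedged way to check that the command, resources, and mounts took effect (the file name new-rc.yaml and the pod name are placeholders for whatever you actually use):

kubectl create -f new-rc.yaml
kubectl get pods -l name=new
# POD_NAME is the pod listed above; describe shows its containers, command, limits and mounts
kubectl describe pod POD_NAME
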
Reference articles:
    • Kubernetes system architecture introduction: http://www.infoq.com/cn/articles/Kubernetes-system-architecture-introduction
    • etcd: a key-value storage system for service discovery: http://www.infoq.com/cn/news/2014/07/etcd-cluster-discovery
    • Kubernetes deployment: http://blog.opskumu.com/k8s-cluster-centos7.html



By harvey_l (Jianshu author)
Original link: http://www.jianshu.com/p/40d171c3b950
Copyright belongs to the author; please contact the author for authorization.
