Because Docker itself does not provide cluster management, running a Docker cluster in production is not realistic without one, so a container cluster management tool has to be introduced. The mainstream options are Mesosphere's Marathon, Google's Kubernetes, and the Docker community's Swarm, of which Kubernetes is the most mature. Kubernetes is Google's open-source container cluster management system; it provides mechanisms for application deployment, maintenance, and scaling, and with it you can conveniently manage containerized applications running across machines. Its main functions are as follows:

- Use Docker to package, instantiate, and run applications.
- Run and manage containers across machines as a cluster.
- Solve the communication problem between Docker containers on different machines.
- Kubernetes' self-healing mechanism keeps the container cluster in the state the user desires.

Mesos is an Apache open-source project that originated at UC Berkeley's AMPLab. With Mesos, the physical resources of an entire data center (CPU, memory, I/O, and so on) can be pooled into one virtual resource pool, and resource allocation can then be adjusted dynamically according to the demands of the applications running on top of it. Just as an operating system pools a PC's processor and RAM so that it can allocate and release resources for different processes, Mesos acts as the distributed kernel of the whole cluster; this is the "data center operating system" concept. Besides microservice applications, Mesos currently supports a variety of distributed big-data frameworks, including Hadoop, Kafka, and Spark, so we use Mesos as our unified resource scheduling layer. Kubernetes is integrated into Mesos as a framework: it obtains the underlying physical resources through Mesos, and we use Kubernetes to manage the microservice applications in the container cluster. Below I will show how to install and deploy this stack on a 3-node cluster and how Kubernetes is integrated into Mesos; in the rest of the article Kubernetes is abbreviated to ku8. Docker must be installed on the servers before deployment; please refer to my other article for the installation documentation.
I. Mesos Installation

1.1 Environment Preparation

- The operating system must be CentOS 7.2.
- The Docker version must be 1.9 or later.
- Disable SELinux and iptables/firewalld before installation.

1.2 Downloading the Installation Files

Install the Mesosphere yum repository. The repository is delivered as an RPM package and can be installed directly, with no further configuration needed.

```
rpm -Uvh http://repos.mesosphere.com/el/7/noarch/RPMS/mesosphere-el-repo-7-1.noarch.rpm
```

Install the yum download tool:

```
yum install -y yum-utils
```

Create the installation folders:

```
mkdir -p /opt/mesos-download
cd /opt/mesos-download
mkdir mesos marathon zookeeper
```

Execute the following commands in turn to download the installation packages (yumdownloader comes from the yum-utils package installed above):

```
cd /opt/mesos-download/mesos
yumdownloader --resolve mesos
cd /opt/mesos-download/marathon
yumdownloader --resolve marathon
cd /opt/mesos-download/zookeeper
yumdownloader --resolve mesosphere-zookeeper
```

Finally, copy the downloaded installation files to the other two nodes in the cluster:

```
scp -r /opt/mesos-download/* root@xxxx:/opt/mesos-download/
```

1.3 Configuring the Host Names

Watch out! This step is critical: all 3 nodes in the cluster need to be configured. In our case a slave node that did not have the hosts file configured could not find the master node, and it took a long time to track the problem down.

```
hostnamectl --static set-hostname master
```

Configured this way, the server does not need to be rebooted; log out of the shell and back in to see the change. Then edit /etc/hosts on every node so that the host names of all three nodes resolve to their IP addresses.

1.4 Installing ZooKeeper

Go to the folder holding the ZooKeeper RPMs and install them locally:

```
cd /opt/mesos-download/zookeeper
sudo yum localinstall -y *.rpm
```

Set /var/lib/zookeeper/myid to 1 on the master node; if additional master nodes are installed, set them to 2, 3, and so on.
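A minimal sketch of these two settings for this single-ZooKeeper setup (the hostname "master" below is an assumption; substitute your own node name):

```
# assign this node's ZooKeeper id (use 2, 3, ... on any additional ZooKeeper nodes)
echo 1 > /var/lib/zookeeper/myid

# list every ZooKeeper node in /etc/zookeeper/conf/zoo.cfg as a server.N entry
cat >> /etc/zookeeper/conf/zoo.cfg <<'EOF'
server.1=master:2888:3888
EOF
```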
Configure the ZooKeeper address information in the /etc/zookeeper/conf/zoo.cfg file (the server.N entries sketched above), then start ZooKeeper:

```
sudo systemctl enable zookeeper
sudo systemctl start zookeeper
```

Check that the ZooKeeper process has started normally; a frequent reason for ZooKeeper failing to start is that the firewall has not been turned off.

1.5 Installing the Master

Go into each package folder and perform a local RPM installation:

```
cd /opt/mesos-download/mesos
yum localinstall -y *.rpm
cd /opt/mesos-download/marathon
yum localinstall -y *.rpm
```

Configure the ZooKeeper address in the Mesos configuration file /etc/mesos/zk (a zk:// URL pointing at the ZooKeeper node, in the same format used later for mesos-cloud.conf). Watch out! If you forget this step, the slave nodes will not be able to find the master node later on.

Set the master quorum: write the number 1 into /etc/mesos-master/quorum.

Watch out! In this setup there is only one master because the number of nodes is limited. In a production environment, to avoid a single point of failure, you should run 3 master nodes and install ZooKeeper on each of them, so that ZooKeeper's election mechanism can provide HA. With 3 ZooKeeper nodes the quorum value is 2, meaning an arbitration passes with 2 votes.

Start the master node's services:

```
sudo systemctl restart mesos-master
sudo systemctl restart mesos-slave
sudo systemctl restart marathon
```

Watch out! Configured this way, the master node acts as both a master and a slave; if you want the node to act only as a master, stop the slave process.

1.6 Installing the Slaves

Copy the installation folder /opt/mesos-download from the master node to all slave nodes and install from the local package folder:

```
cd /opt/mesos-download/mesos
sudo yum localinstall -y *.rpm
```

Configure the ZooKeeper address in the Mesos configuration file /etc/mesos/zk, as above.

Start the related service on the slave nodes; the master service must be turned off on the slave nodes:

```
sudo systemctl stop mesos-master
sudo systemctl disable mesos-master
sudo systemctl restart mesos-slave
```

1.7 Mesos Installation Complete

Log in to the Mesos monitoring interface to verify that the installation is complete: http://xxxxxx:5050

On the left you can see the currently running tasks as well as the cluster's used and idle resources.
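A quick command-line sanity check before opening the web UI (the service names are the ones installed by the Mesosphere packages above):

```
# on the master node all of these should be active (running)
systemctl status zookeeper mesos-master mesos-slave marathon

# on each slave node only the slave service needs to be running
systemctl status mesos-slave
```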
II. Kubernetes Integrated Installation

2.1 Downloading the Installation Files

When running on Mesos, ku8 needs only its master components; there are no ku8 slave nodes, so the ku8 master can be placed on any machine in the cluster. The ku8 source can be found on GitHub; in this example it is downloaded directly with git:

```
yum -y install git
mkdir -p /opt/kubernetes-download
cd /opt/kubernetes-download
git clone https://github.com/kubernetes/kubernetes
```

2.2 Installing the Go Language Environment

Because ku8 is written in Go, which the system does not ship with by default, the Go language packages must be installed before compiling ku8:

```
mkdir -p /opt/kubernetes-download/golang
cd /opt/kubernetes-download/golang
# place the golang RPMs in this folder (for example via: yumdownloader --resolve golang),
# then install them locally
yum localinstall -y *.rpm
```

2.3 Installing the etcd Database

etcd is a key-value database that ku8 uses to store service registration information when it exposes application services. The etcd release can be found on GitHub; upload the etcd archive to the ku8 master node, extract it into the installation directory, and install it:

```
cd /opt/etcd-v2.3.2
cp etcd etcdctl /usr/bin/                   # target directory assumed; put the binaries on the PATH
cp etcd.service /usr/lib/systemd/system/
mkdir /var/lib/etcd
mkdir /etc/etcd
vi /etc/etcd/etcd.conf
```
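If the extracted archive does not already include a suitable etcd.service unit, a minimal sketch along the following lines works, assuming the binaries were copied to /usr/bin and the settings live in /etc/etcd/etcd.conf:

```
[Unit]
Description=etcd key-value store
After=network.target

[Service]
# Type=simple also works if this etcd build does not support systemd notification
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
Restart=on-failure

[Install]
WantedBy=multi-user.target
```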
Modify the etcd configuration file: change the IP addresses to the master node's IP and leave everything else as it is. When copying, strip any non-ASCII comments to avoid encoding problems.

```
# [member]
ETCD_NAME=etcd1
# etcd data storage location
ETCD_DATA_DIR="/var/lib/etcd_data"
# URL this member listens on for peers inside the cluster
ETCD_LISTEN_PEER_URLS="http://10.255.242.170:2380"
# peer URL advertised to the other members of the cluster
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.255.242.170:2380"
# URL this member listens on for external clients
ETCD_LISTEN_CLIENT_URLS="http://10.255.242.170:2379"
# client URL advertised to external clients
ETCD_ADVERTISE_CLIENT_URLS="http://10.255.242.170:2379"
# [cluster]
# etcd cluster member list
ETCD_INITIAL_CLUSTER="etcd1=http://10.255.242.171:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# initial cluster state
ETCD_INITIAL_CLUSTER_STATE="new"
```

Start the etcd background service:

```
systemctl daemon-reload      # pick up the newly copied etcd.service unit
systemctl start etcd         # or restart, if it is already running
systemctl enable etcd
```

Check the etcd running state; the member list should show the single etcd node we have right now:

```
etcdctl cluster-health
etcdctl member list
```

2.4 Compiling ku8

Go to the ku8 source folder and compile; it is fine to run a make test check after compiling:

```
cd /opt/kubernetes-download/kubernetes
export KUBERNETES_CONTRIB=mesos
make
make test
```

2.5 Configuring the Runtime Environment Variables

The ku8 integrated installation has no configuration files of its own; all configuration parameters are injected into the background services through environment variables, so we need to configure them system-wide:

```
vi /etc/profile

# append the following lines
export KUBERNETES_MASTER_IP=<ku8-master-ip>    # replace with the ku8 master node's own IP
export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
export PATH="/opt/kubernetes-download/kubernetes/_output/local/go/bin:$PATH"
export MESOS_MASTER=zk://10.1.24.170:2181/mesos
```

After the configuration is finished, source the file so the variables take effect:

```
source /etc/profile
```
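A quick way to confirm the variables are visible in the current shell (the same check is worth repeating right before starting the services in section 2.9):

```
echo ${KUBERNETES_MASTER_IP}
echo ${KUBERNETES_MASTER}
echo ${MESOS_MASTER}
```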
2.6 Creating the mesos-cloud.conf File

This file mainly tells ku8 where to find the Mesos master node; the address can be either the Mesos master's address or the ZooKeeper address. Create it as follows:

```
cd /opt/kubernetes-download/kubernetes
vi mesos-cloud.conf
```

with the following contents:

```
[mesos-cloud]
mesos-master = zk://10.1.24.24:2181/mesos
```

2.7 Configuring Docker

Every node on which Docker is installed needs this change, not just the ku8 master node:

```
vi /etc/sysconfig/docker
```

Only the options below matter; do not touch anything else. Attention! The OPTIONS line must be added, otherwise ku8 will fail to start containers through Docker.

```
# OPTIONS='--selinux-enabled'
DOCKER_CERT_PATH=/etc/docker
OPTIONS='--exec-opt native.cgroupdriver=cgroupfs'
```

Because ku8 needs Google's pause image when it schedules Docker containers, log on to every node with Docker installed and pull the pause image from the public registry:

```
docker pull docker.io/google/pause
docker images
```

Look up the image ID of the newly pulled pause image and re-tag the image using that ID:

```
docker tag <image-id> gcr.io/google_containers/pause:2.0
```

Restart Docker:

```
systemctl restart docker
```

2.8 Mesos Slave Node Configuration

This setting makes the Mesos slaves register with their IP address instead of their host name, which avoids the problem of Docker containers not being able to communicate across nodes:

```
cd /etc/mesos-slave
echo false > hostname_lookup
systemctl restart mesos-slave
```
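To confirm that the flag was picked up after the restart, one quick check (the Mesosphere init wrapper turns each file under /etc/mesos-slave into a command-line flag of the same name):

```
# the restarted slave process should now carry --hostname_lookup=false in its arguments
ps -ef | grep [m]esos-slave
```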
2.9 Starting the ku8 Services

This is the most critical step: start ku8's three daemons, the API server, the controller manager, and the scheduler. All three are subcommands of the km binary produced by the contrib/mesos build in section 2.4. Before starting them, echo ${KUBERNETES_MASTER_IP} and the other environment variables to confirm they are set, then start the services. If anything fails, check the corresponding log file, e.g. /var/log/kubenetes-mesos/apiserver.log for the API server.

```
cd /opt/kubernetes-download/kubernetes
mkdir -p /var/log/kubenetes-mesos            # make sure the log directory exists

nohup km apiserver \
  --address=${KUBERNETES_MASTER_IP} \
  --etcd-servers=http://${KUBERNETES_MASTER_IP}:2379 \
  --service-cluster-ip-range=10.10.10.0/24 \
  --port=8888 \
  --cloud-provider=mesos \
  --cloud-config=mesos-cloud.conf \
  --secure-port=0 \
  --v=1 > /var/log/kubenetes-mesos/apiserver.log 2>&1 &

nohup km controller-manager \
  --master=${KUBERNETES_MASTER_IP}:8888 \
  --cloud-provider=mesos \
  --cloud-config=./mesos-cloud.conf \
  --v=1 > /var/log/kubenetes-mesos/controller.log 2>&1 &

nohup km scheduler \
  --address=${KUBERNETES_MASTER_IP} \
  --mesos-master=${MESOS_MASTER} \
  --etcd-servers=http://${KUBERNETES_MASTER_IP}:2379 \
  --mesos-user=root \
  --api-servers=${KUBERNETES_MASTER_IP}:8888 \
  --cluster-dns=10.10.10.10 \
  --cluster-domain=cluster.local \
  --v=2 > /var/log/kubenetes-mesos/scheduler.log 2>&1 &
```
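A quick way to confirm that all three daemons stayed up, and where to look first if one of them exited:

```
ps -ef | grep km | grep -v grep
tail -n 20 /var/log/kubenetes-mesos/apiserver.log
tail -n 20 /var/log/kubenetes-mesos/controller.log
tail -n 20 /var/log/kubenetes-mesos/scheduler.log
```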
2.10 Verifying the Startup

Run the ku8 client command to query the currently deployed pods (a pod is ku8's grouping concept for Docker containers):

```
kubectl get pod
```

Open the Mesos monitoring interface and you can see the Kubernetes framework running under the Frameworks tab.
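As an optional end-to-end smoke test, a minimal pod can be submitted through the newly started API server; it should show up in kubectl get pod and as a task of the Kubernetes framework in the Mesos UI. The pod name and image below are illustrative:

```
cat > nginx-test.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF

kubectl create -f nginx-test.yaml
kubectl get pod
```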