Microservices in Golang - Part 8 - Kubernetes and Container Engine

[In the previous post](https://studygolang.com/articles/12799) we looked at creating a container engine cluster with [Terraform](https://terraform.io/). In this post, we look at deploying containers into that cluster using Container Engine and [Kubernetes](https://kubernetes.io/).

## Kubernetes

First of all, what is [Kubernetes](https://kubernetes.io/)? Kubernetes is an open-source container management framework. It is platform agnostic, meaning you can run it on your own machine, on AWS, on Google Cloud, or on any other platform. Kubernetes lets you control a set of containers, and the network rules between them, using declarative configuration. You simply write YAML/JSON files describing which containers should run and where, define your network rules, such as port-forwarding, and Kubernetes handles service discovery for you.

Kubernetes is an important addition to the cloud landscape and is rapidly becoming the de facto choice for cloud container management, so it is well worth understanding. Let's get started!

First, make sure you have the kubectl CLI installed locally:

```
$ gcloud components install kubectl
```

Now make sure you are connected and authenticated against your cluster. First, we log in to make sure we are authenticated. Second, we set the project configuration to make sure we are using the correct project ID and availability zone.

```
$ echo "This command will open a web browser and ask you to log in"
$ gcloud auth application-default login

$ gcloud config set project shippy-freight
$ gcloud config set compute/zone europe-west2-a

$ echo "Now generate a security token and access to your k8s cluster"
$ gcloud container clusters get-credentials shippy-freight-cluster
```

In the commands above, the compute/zone can be whichever zone you chose, and your project ID and cluster name may differ from mine. Here is the generic version...

```
$ echo "This command will open a web browser and ask you to log in"
$ gcloud auth application-default login
$ gcloud config set project <project-id>
$ gcloud config set compute/zone <availability-zone>
$ echo "Now generate a security token and access to your k8s cluster"
$ gcloud container clusters get-credentials <cluster-name>
```

You can find your project ID here...

![](https://raw.githubusercontent.com/studygolang/gctt-images/master/go-micro/Screen-shot-2018-03-17-at-17.55.41.png)

![](https://raw.githubusercontent.com/studygolang/gctt-images/master/go-micro/screen-shot-2018-03-17-at-17.56.35.png)

Your cluster's region/zone and name can be found by opening the menu in the top-left corner, selecting `Compute Engine`, and then `VM Instances`. There you can see your Kubernetes VMs; click into them for more detail and everything relevant to your cluster.

If you now run...

```
$ kubectl get pods
```

you will see...

```
No resources found.
```

That's fine; we haven't deployed anything yet. So let's think about what we actually need to deploy. We need a MongoDB instance. Generally, for complete separation, you would deploy a DB instance alongside each individual service. But in this case we will cheat a little and use a single, centralised instance. This is a single point of failure, and in a real-world scenario you should consider deploying a DB instance per service to keep them fully decoupled. But for this tutorial, a centralised instance will do.
Then I need to deploy our services: the vessel service, user service, consignment service and email service. Okay, that's the easy part!

Let's start with the MongoDB instance. Since it doesn't belong to any single service and is part of the platform as a whole, these deployments live in the shippy-infrastructure repository. I haven't made that repository public on GitHub, because it contains a lot of sensitive data, but here are all of its deployment files.

First, we need a configuration that creates an SSD for long-term storage, so that data is not lost when the containers restart.

```yaml
# shippy-infrastructure/deployments/mongodb-ssd.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```

Then our deployment file (we will go into more detail on these later in the article)...

```yaml
# shippy-infrastructure/deployments/mongodb-deployment.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
        role: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
            - "--bind_ip"
            - "0.0.0.0"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: "fast"
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

And then the service file...

```yaml
# shippy-infrastructure/deployments/mongodb-service.yml
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
```

There is a lot going on here, and it probably doesn't mean much to you yet. So let's try to clarify some of Kubernetes' key concepts.

## Nodes

**[Read this article](https://kubernetes.io/docs/concepts/architecture/nodes/)**

Nodes are your physical machines or VMs. Your containers are clustered across nodes, and services reach one another through the groups of containers (pods) running on the various nodes.

## Pods

**[Read this article](https://kubernetes.io/docs/concepts/workloads/pods/pod/)**

A pod is a group of related containers. For example, one pod might contain your authentication service container, your user database container, your login/registration user interface, and so on. These containers are clearly related. Pods allow you to group them so that they can reach one another and run in the same immediate network space, and you can treat them as a single unit. This is cool! Pods are one of the harder Kubernetes concepts to grasp at first, but they are central.

## Deployments

**[Read this article](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)**

A deployment is a form of state control: it describes a desired final output and the state that should be maintained. A deployment is an instruction to Kubernetes, for example: "I want three of these containers, running on these ports, with these environment variables". Kubernetes will ensure that this condition holds. If one container crashes, leaving two, it will start another to satisfy the requirement of three. A minimal sketch follows below.
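To make the idea of declaring desired state concrete, here is a minimal, self-contained sketch. The `hello` name and the nginx image are hypothetical examples, not part of the shippy project; the API version matches the one used by the manifests later in this post.

```yaml
# Minimal illustration of desired state (hypothetical, not a shippy manifest).
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3              # desired state: keep three pods running at all times
  selector:
    matchLabels:
      app: hello           # manage any pod carrying this label
  template:                # the pod template each replica is created from
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.13
          ports:
            - containerPort: 80
```

If one of the three pods dies, the deployment controller notices the mismatch between desired and actual state and starts a replacement; that reconciliation loop is the whole point of a deployment.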
## StatefulSets

**[Read this article](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)**

Stateful sets are similar to deployments, except that they use some form of storage to retain state relating to the containers; they bring in the concept of distributed storage. MongoDB, like many databases, writes data to a binary storage format. If you create a throwaway database instance, such as a Docker container, and that container restarts, the data is lost. Traditionally, you mount a volume holding the data/files when the container starts, and you can do the same with deployments in Kubernetes. But stateful sets add some extra automation around clustering and keeping the right storage attached to the right instance. So this is a natural fit for our MongoDB containers.

## Services

**[Read this article](https://kubernetes.io/docs/concepts/services-networking/service/)**

A service is a set of network-level rules, such as port-forwarding and DNS rules, that connect your pods at the network level and control who can talk to whom, and what can be reached from the outside. There are two kinds of service you are most likely to run into: a load balancer and a node port.

A load balancer is a round-robin load balancer which gives you the option of creating an IP address that proxies onto your nodes, exposing a service externally through that proxy. A node port exposes pods to the higher-level network space so that they can be reached by other services and internal pods/instances. This is useful for exposing pods to other pods, and it is how you allow one service to talk to another. This is the essence of service discovery, or at least part of it. A rough sketch of the two service types follows below.
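As an illustration of the difference, here is a sketch of the two service types. The names, labels and ports are hypothetical and not part of the shippy project; the real service definitions for this project appear later in the post.

```yaml
# Hypothetical sketch: expose pods inside the cluster on a fixed port of every node.
apiVersion: v1
kind: Service
metadata:
  name: hello-internal
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080      # reachable on this port on each cluster node
  selector:
    app: hello             # route traffic to any pod carrying this label
---
# Hypothetical sketch: ask the cloud provider for an external IP and
# round-robin traffic across the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: hello-public
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: hello
```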
That is only a small taste of Kubernetes; we will cover more, and dig deeper, as we go. It is worth noting that if you use Docker on your local machine, for example the edge version of Docker on Mac/Windows, you can run a Kubernetes cluster on that machine, which is very useful for testing.

So, we have created three files: one for storage, one for our stateful set, and one for our service. The end result is a replicated MongoDB container with stateful storage, and a service exposing the datastore through the pods. Let's create them, in the correct order, since some of them depend on what was created before.

```
$ echo "shippy-infrastructure"
$ kubectl create -f ./deployments/mongodb-ssd.yml
$ kubectl create -f ./deployments/mongodb-deployment.yml
$ kubectl create -f ./deployments/mongodb-service.yml
```

Wait a few minutes, then check the status of the MongoDB containers by running:

```
$ kubectl get pods
```

You may notice that your pods have a status of `Pending`. If you run `$ kubectl describe node`, you will see an error about insufficient CPU. Somewhat awkwardly, cluster management and the Kubernetes tooling are themselves fairly CPU-hungry, so a single node may not be enough, and the Mongo instances need their share as well.

So let's enable auto-scaling on the cluster's node pool (there is a default pool). To do this, go into the Google Cloud console, select Kubernetes Engine, edit your cluster, turn auto-scaling on, set the minimum and maximum values to 2, and click save.

![](https://raw.githubusercontent.com/studygolang/gctt-images/master/go-micro/Screen-shot-2018-03-17-at-20.36.17.png)

In a few minutes your nodes will have scaled up to two, and running `$ kubectl get pods` will show `ContainerCreating` until all of the containers are running as desired.

Now that we have a database cluster and an auto-scaling Kubernetes engine, let's deploy some services!

### Vessel service

The vessel service is lightweight, doesn't do very much, and has no dependencies, so it is a good one to start with.

First, let's change a couple of code snippets in the vessel service.

```go
// shippy-vessel-service/main.go
import (
    ...
    k8s "github.com/micro/kubernetes/go/micro"
)

func main() {
    ...
    // Replace the existing service instantiation with...
    srv := k8s.NewService(
        micro.Name("shippy.vessel"),
        micro.Version("latest"),
    )
    ...
}
```

All we have done here is import the new library and replace the existing `micro.NewService()` with `k8s.NewService()`. So what is this new library?

### Micro on Kubernetes

One of the things I love about micro is that it is built with a deep understanding of the cloud, and it keeps adapting to new technology. Micro takes Kubernetes seriously, and so a micro [Kubernetes library](https://github.com/micro/kubernetes/) was created. In reality, this library is just micro, configured with some sensible defaults for Kubernetes, plus a service selector that integrates directly with Kubernetes services. In other words, it hands service discovery over to Kubernetes, and it uses gRPC as the default transport. You can still override these defaults with environment variables and plugins; a sketch of the environment-variable approach follows below. There is a lot of fascinating work going on in the micro world, and that is what excites me about it. Be sure to join the [Slack channel](http://slack.micro.mu/).
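For example, the flags we pass on the command line later in this post could most likely be supplied as environment variables instead. Note that this fragment is an assumption based on micro's usual flag-to-environment-variable naming (`--selector` becoming `MICRO_SELECTOR`, `--server_address` becoming `MICRO_SERVER_ADDRESS`); check the micro documentation for your version before relying on it.

```yaml
# Hypothetical fragment of a container spec: configuring micro through
# environment variables rather than command-line flags. The MICRO_* names
# are assumed from micro's flag naming convention, not taken from the article.
containers:
  - name: vessel-service
    image: eu.gcr.io/<project-name>/vessel-service:latest
    env:
      - name: MICRO_SELECTOR         # assumed equivalent of --selector=static
        value: "static"
      - name: MICRO_SERVER_ADDRESS   # assumed equivalent of --server_address=:8080
        value: ":8080"
```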
Now let's create a Kubernetes deployment for the service. Here we will go into a little more detail about what each part does.

```yaml
# shippy-vessel-service/deployments/deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: vessel
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vessel
  template:
    metadata:
      labels:
        app: vessel
    spec:
      containers:
        - name: vessel-service
          image: eu.gcr.io/<project-name>/vessel-service:latest
          imagePullPolicy: Always
          command: [
            "./shippy-vessel-service",
            "--selector=static",
            "--server_address=:8080"
          ]
          env:
            - name: DB_HOST
              value: "mongo:27017"
            - name: UPDATED_AT
              value: "Mon 19 Mar 2018 12:05:58 GMT"
          ports:
            - containerPort: 8080
              name: vessel-port
```

There is a lot going on here, but I will try to break it down. First you can see `kind: Deployment`. There are many different 'kinds' in Kubernetes, most of which can be thought of as 'cloud primitives'. In a programming language you have strings, integers, structs, methods and so on as your primitives; think of Kubernetes as treating the cloud in the same way. So consider these kinds to be primitives, described by metadata.

A deployment is a form of stateless control: it is not persisted, and its data is destroyed after a restart or exit. Stateful sets are similar to deployments, except that they maintain some static data and previously declared state. Our services, however, should not contain any state; microservices are stateless. So what we need here is a deployment.

Next you have a standard metadata section to start the deployment off: a name, and how many of these pods should be kept running (`replicas`). If one of them dies (assuming we run more than one), the controller's job is to check that the number of running pods is what we asked for and, if not, to start another. The `selector` and `template` sections expose some of the pod's metadata, allowing other services to discover and connect to the pods.

Then you have another spec section (slightly confusing, but keep going!). This time we describe our own containers, volumes, shared metadata, and so on. In this service we only need to start a single container. The containers section is an array, because we may want to launch several containers as part of one pod; this is how related containers are assembled together.

Looking at the container metadata, we start a Docker container from an image, set some environment variables, pass in some commands at run time, and expose a port (for service lookup). You can see I passed in a new command: `--selector=static`. This tells the micro Kubernetes setup to use Kubernetes itself for service discovery and load balancing. Really cool, because your microservices code is now interacting directly with Kubernetes' powerful DNS, networking, load balancing and service discovery. You could omit this option and keep using micro as before, but here we may as well reap the benefits of Kubernetes.

You will also have noticed that we pull the image from a private repository. When using Google's container tooling, you get a container registry: build your container image and push it up like this...

```
$ docker build -t eu.gcr.io/<your-project-name>/vessel-service:latest .
$ gcloud docker -- push eu.gcr.io/<your-project-name>/vessel-service:latest
```

Now let's look at our service...

```yaml
# shippy-vessel-service/deployments/service.yml
apiVersion: v1
kind: Service
metadata:
  name: vessel
  labels:
    app: vessel
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: vessel
```

Here, as explained earlier, we have a `kind`; in this case a service primitive, essentially a set of network-level DNS and firewall rules. Then we give the service a name and labels. The spec lets us define a port for the service; you can also define a `targetPort` here to pin down a specific container, but fortunately the kubernetes/micro integration handles that for us automatically. Finally, and most importantly, the selector must match your target pods, otherwise the service has nothing to proxy to and will not work.

Now let's deploy these changes to the cluster.

```
$ kubectl create -f ./deployments/deployment.yml
$ kubectl create -f ./deployments/service.yml
```

Wait a few minutes, then run...

```
$ kubectl get pods
$ kubectl get services
```

You should see your new pod and new service come up; make sure they are running as expected. If you hit an error, you can run `$ kubectl proxy` and open `http://localhost:8001/ui` in your browser to explore the Kubernetes UI and dig deeper into the state of your containers.

It is worth mentioning here that deployments are atomic and immutable, meaning they have to change in some way in order to be updated. They carry a unique hash, and if that hash does not change, the deployment is not updated. If you run `$ kubectl replace -f ./deployments/deployment.yml`, nothing will happen, because Kubernetes has not detected a change.

There are many ways around this, but bear in mind that in most cases it is your container that changes, so you should not use the `latest` tag. Give each image a unique tag instead, such as a build number: `vessel-service:<build-no>`. That registers as a change, and the deployment can be replaced.

In this tutorial, though, we will do something fun. Be warned: this is a lazy approach and not best practice. I created a new file, `deployments/deployment.tmpl`, as a template for the deployment, containing an environment variable `UPDATED_AT` whose value is the placeholder `{{ UPDATED_AT }}`. I then updated the Makefile to open the template file, substitute the current date/time for the placeholder, and write the result out as the final deployment.yml file.
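The article does not show deployment.tmpl itself, so the fragment below is a reconstruction based on the description above; everything except the placeholder would be identical to deployment.yml.

```yaml
# deployments/deployment.tmpl (reconstructed fragment of the container's env).
# The placeholder changes on every build, which changes the manifest's hash,
# which in turn forces Kubernetes to treat the deployment as updated.
env:
  - name: DB_HOST
    value: "mongo:27017"
  - name: UPDATED_AT
    value: "{{ UPDATED_AT }}"
```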
It feels a bit hacky, but it is only temporary, and I have seen this done many ways; do whatever feels right to you.

```makefile
# shippy-vessel-service/Makefile
deploy:
	sed "s/{{ UPDATED_AT }}/$(shell date)/g" ./deployments/deployment.tmpl > ./deployments/deployment.yml
	kubectl replace -f ./deployments/deployment.yml
```

Well, there we have it: we have deployed a service, and it runs as we expected. Now I will do the same for the other services. I have made the same brief updates to each service in its repository, as follows...

[Consignment service](https://github.com/EwanValentine/shippy-consignment-service)
[Email service](https://github.com/EwanValentine/shippy-email-service)
[User service](https://github.com/EwanValentine/shippy-user-service)
[Vessel service](https://github.com/EwanValentine/shippy-vessel-service)
[UI](https://github.com/EwanValentine/shippy-ui)

For our user service, we deploy Postgres...

```yaml
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  selector:
    matchLabels:
      app: postgres
  replicas: 3
  template:
    metadata:
      labels:
        app: postgres
        role: postgres
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: postgres
          image: postgres
          ports:
            - name: postgres
              containerPort: 5432
          volumeMounts:
            - name: postgres-persistent-storage
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: postgres-persistent-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: "fast"
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The Postgres service...

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
  clusterIP: None
  selector:
    role: postgres
```

And the Postgres storage...

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```

## Deploying micro

```yaml
# shippy-infrastructure/deployments/micro-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro
spec:
  replicas: 3
  selector:
    matchLabels:
      app: micro
  template:
    metadata:
      labels:
        app: micro
    spec:
      containers:
        - name: micro
          image: microhq/micro:kubernetes
          args:
            - "api"
            - "--handler=rpc"
            - "--namespace=shippy"
          env:
            - name: MICRO_API_ADDRESS
              value: ":80"
          ports:
            - containerPort: 80
              name: port
```

And now its service...

```yaml
# shippy-infrastructure/deployments/micro-service.yml
apiVersion: v1
kind: Service
metadata:
  name: micro
spec:
  type: LoadBalancer
  ports:
    - name: api-http
      port: 80
      targetPort: "port"
      protocol: TCP
  selector:
    app: micro
```

For this service we use a `LoadBalancer` type, which exposes an external load balancer with an external IP address. If you run `$ kubectl get services` and wait a minute or two (it will show `pending` for a while), you will be given that IP address. It is public, and you can point a domain name at it.

Once that has deployed, let's make a call to a service through micro:

```
$ curl localhost/rpc -XPOST -d '{
  "request": {
    "name": "test",
    "capacity": 500,
    "max_weight": 100000,
    "available": true
  },
  "method": "VesselService.Create",
  "service": "vessel"
}' -H 'Content-Type: application/json'
```

You should see `created: true` in the response. Super simple! That is your gRPC service, being proxied and converted into a web-friendly format, and using a replicated MongoDB instance. It really didn't take much effort!
## Deploying the UI

Now let's deploy the user interface...

```yaml
# shippy-ui/deployments/deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui
  template:
    metadata:
      labels:
        app: ui
    spec:
      containers:
        - name: ui-service
          image: ewanvalentine/ui:latest
          imagePullPolicy: Always
          env:
            - name: UPDATED_AT
              value: "Tue 20 Mar 2018 08:26:39 GMT"
          ports:
            - containerPort: 80
              name: ui
```

And now its service...

```yaml
# shippy-ui/deployments/service.yml
apiVersion: v1
kind: Service
metadata:
  name: ui
  labels:
    app: ui
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: "ui"
  selector:
    app: ui
```

Notice that this service is load balanced on port 80, because it is the public user interface and is how users will interact with our services. You should be able to see it straight away!

## Finally

We did it: we containerised our services with Docker, managed them with Kubernetes, and deployed the entire project to the cloud. I hope you found some useful content in this article and didn't find it too much to digest. In the next part of this series, we will look at tying all of this into a CI process to manage our deployments.

If you are finding this series useful, and you use an ad-blocker (who can blame you), please consider chipping in for my hard work. Cheers!
[https://monzo.me/ewanvalentine](https://monzo.me/ewanvalentine)

Or, sponsor me on [Patreon](https://www.patreon.com/ewanvalentine).

via: https://ewanvalentine.io/microservices-in-golang-part-8/

Author: Ewan Valentine | Translator: arisaries | Proofreader: polaris1119

This article was translated by GCTT and published by the Go Language Chinese Network. Want to join the ranks of translators and contribute to open source? You are welcome to join GCTT!

Translations are published for learning and exchange only, and the translation work follows the terms of the CC-BY-NC-SA licence. If our work has infringed on your interests, please contact us promptly. When reposting, please keep the original/translation links and the author/translator information in the text.

The article represents the author's knowledge and views alone; if you see things differently, feel free to leave a comment below.
