0. Contents
Overall architecture directory: ASP.NET Core distributed project - Directory
K8s series catalog: Kubernetes (K8s) cluster deployment (enterprise Docker container cluster management) series catalog
This article covers:
1. Chatting
2. Introduction to the deployment process
3. Deploying the Harbor registry
4. Building the jenkins-slave image
5. Deploying Jenkins
6. Jenkins + GitLab hooks
7. Deploying the ASP.NET Core project with K8s
1. Chatting
These days I manage to write an article every few days, which is still not diligent enough; I am pushing myself to become more diligent and to share the technical points I use at work. It took me a week to work out the automated K8s deployment of an ASP.NET Core project. The process was painful, but the result was good.
If anything below is wrong or incomplete, please point it out and I will correct it as soon as possible. Thank you.
2. Introduction to the Deployment Process
1. First, a hand-drawn diagram; the general flow is as follows (please don't flame the drawing):
The deployment flow works like this: the developer pushes the ASP.NET Core project code to GitLab via git. Jenkins is notified through a GitLab webhook (provided one is configured) and automatically pulls the updated code from GitLab, then builds and compiles it and produces a Docker image. That image is pushed to the Harbor registry. At deployment time, K8s pulls the image from Harbor and creates the containers and services; once the release completes, the application is reachable from outside the cluster. (PS: it sounds simple when I put it that way, but there were plenty of bumps along the road; I will share them all below.)
Of course, the above is only a sketch; the diagram tells the story better.
2. First, my server IPs and what is installed on each.
PS: My computer's configuration is limited, and running too many virtual machines exhausts its memory, so I set up three machines; that is my limit.
If you want to know how the K8s cluster itself was deployed, see my earlier articles, which cover it. The prerequisite is that the Docker environment, GitLab, and so on are already installed.
| IP | Role |
| --- | --- |
| 192.168.161.151 | Master1, Harbor, Jenkins |
| 192.168.161.152 | Node1 |
| 192.168.161.153 | Node2, GitLab |
3. Deploying the Harbor Registry
Step 1: Download the Harbor binary package: github.com/goharbor/harbor/releases
Step 2: Install Docker Compose.
Command:
sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Then make the downloaded docker-compose executable.
Command: chmod +x /usr/local/bin/docker-compose
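Putting the two steps together as one script, with a quick version check at the end (the URL and version number are the ones given in the article; a newer Compose release would work the same way):

```shell
# Download docker-compose 1.22.0 for this platform and make it executable.
# $(uname -s)-$(uname -m) expands to e.g. Linux-x86_64.
COMPOSE_URL="https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)"
sudo curl -L "$COMPOSE_URL" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Sanity check: print the installed version.
docker-compose --version
```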
Step 3: Here you would normally set up a self-signed certificate so that access goes over HTTPS. I skip it here, since it does not affect the rest of the deployment. (A separate article on self-signed certificates will follow, for reference only.)
Step 4: Upload the downloaded Harbor binary package to the server and extract it.
The extraction command is: tar xzvf <package name>
Step 5: Enter the extracted harbor folder, which contains the following files.
Edit the configuration file: vi harbor.cfg
Change hostname to the Master1 IP address.
Then change Harbor's login password: for convenience I set it to 123456; change it to whatever you like.
Step 6: Start Harbor from the current folder.
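For reference, the two edited settings in harbor.cfg look roughly like this (the password is whatever you choose; 123456 is only for the demo):

```ini
# harbor.cfg - only the two edited settings are shown
hostname = 192.168.161.151
harbor_admin_password = 123456
```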
Execute:
./prepare
./install.sh (this takes a while to run; please wait)
Step 7: After a successful start, verify that every container is running:
docker-compose ps
Then check in a browser (if the login page below appears, the deployment succeeded).
After logging in, I first created a user, louie, under User Management; it will be used later to push the dependency images to the Harbor registry. Then create the projects under Projects, as follows:
These are the projects I created, and I added the new user to each project to make it easy to log in and push images later.
A quick look at my projects: coresdk mainly stores the ASP.NET Core SDK images; ops mainly stores the Jenkins and jenkins-slave images; projectdemo mainly stores the images of my ASP.NET Core project, for K8s to pull.
At this point, the Harbor deployment is complete.
4. Building the jenkins-slave Image
Operating server: Node1
Note: jenkins-slave mainly offloads work from jenkins-master, as shown (useful when multiple tasks run at once).
1. To build the jenkins-slave image, I prepared three files:
Dockerfile: builds the jenkins-slave image.
jenkins-slave: a shell script needed at image build time (it must be made executable: chmod +x jenkins-slave).
slave.jar: the agent startup jar.
The contents of the Dockerfile are shown below (join the QQ group if you want the source files). As you can see, jenkins-slave depends on a Java environment.
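The actual Dockerfile is distributed in the group, so the following is only a hypothetical sketch of its general shape: a base image, the Java/Maven toolchain copied in, and the jenkins-slave script as the entrypoint. Every path, base image, and tag here is an assumption, not the article's real file:

```dockerfile
# Hypothetical jenkins-slave Dockerfile sketch - the real file is distributed separately.
FROM centos:7

# Toolchain the slave needs to check out and build projects.
RUN yum install -y git curl && yum clean all

# JDK and Maven prepared on the host (see the next step).
COPY jdk /usr/local/jdk
COPY maven /usr/local/maven
ENV JAVA_HOME=/usr/local/jdk \
    PATH=$PATH:/usr/local/jdk/bin:/usr/local/maven/bin

# Agent jar and launch script.
COPY slave.jar /usr/local/jenkins-slave/slave.jar
COPY jenkins-slave /usr/bin/jenkins-slave

ENTRYPOINT ["jenkins-slave"]
```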
2. Configure the basic Java environment.
Configure the JDK and Maven by placing the downloaded binaries in the following directories (the files are too large to attach here; they can be downloaded from the group):
Extract apache-maven-3.5.3-bin.tar.gz into /usr/local/maven.
Extract jdk-8u45-linux-x64.tar.gz into /usr/local/jdk.
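Assuming the two tarballs sit in the current directory, the extraction can be done like this; --strip-components=1 drops the versioned top-level folder so the tools land directly under /usr/local/jdk and /usr/local/maven:

```shell
# Unpack the JDK and Maven into the paths the Dockerfile expects.
sudo mkdir -p /usr/local/jdk /usr/local/maven
sudo tar xzvf jdk-8u45-linux-x64.tar.gz -C /usr/local/jdk --strip-components=1
sudo tar xzvf apache-maven-3.5.3-bin.tar.gz -C /usr/local/maven --strip-components=1
```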
3. Once the environment is ready, build the image as follows.
Execute the build command (shown in the original as a screenshot; judging by the push command used below, it was presumably `docker build -t 192.168.161.151/ops/jenkins-slave .`).
After the build is complete, push the image to the Harbor registry.
You need to log in to the Harbor registry before pushing.
Execute: docker login 192.168.161.151
After executing this I found that entering the account and password did not log me in, because the Harbor registry's address had not been added as an allowed registry on Node1. So:
Execute: vi /etc/docker/daemon.json
Add the part circled in red (the insecure-registries entry) and restart Docker.
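The circled part of /etc/docker/daemon.json is the insecure-registries entry; assuming the file contains no other settings, it would look like this:

```json
{
  "insecure-registries": ["192.168.161.151"]
}
```

After saving, restart Docker (systemctl restart docker) so the change takes effect.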
Run docker info to check whether the registry address took effect; it is now listed. OK, try logging in again: this time the login succeeds, and the push can begin.
Of course, Harbor itself also shows the push command for each image.
Start the push: docker push 192.168.161.151/ops/jenkins-slave
At this point, the jenkins-slave image has been pushed.
5. Deploying Jenkins
When I deployed Jenkins, I mounted Jenkins's data volume via PV/PVC backed by NFS.
1. Files to prepare:
jenkins-service-account.yml: creates the Jenkins service account.
jenkins.yml: creates the containers and the Service, so Jenkins is reachable.
Dockerfile: mainly used to build the Jenkins image.
registry-pull-secret.yaml: mainly lets the deployment log in to the Harbor registry and pull images directly (needed when deploying Jenkins).
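The real registry-pull-secret.yaml ships with the article's files; as a hedged sketch, an image-pull secret typically looks like the following. The name, namespace, and the elided base64 value are placeholders, not the article's actual values:

```yaml
# Hypothetical sketch of registry-pull-secret.yaml - names and values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
  namespace: kube-ops          # the namespace must be created in K8s beforehand
type: kubernetes.io/dockerconfigjson
data:
  # base64 of ~/.docker/config.json, produced after `docker login 192.168.161.151`
  .dockerconfigjson: <base64 auth info goes here>
```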
2. Get started.
Switch to the Node1 server and build the previously prepared Dockerfile.
Command: docker build -t 192.168.161.151/ops/jenkins:lts-alpine .
Then push it to the Harbor registry. Command: docker push 192.168.161.151/ops/jenkins:lts-alpine
3. Switch to the master server.
Apply jenkins-service-account.yml, jenkins.yml, and registry-pull-secret.yaml.
Pay special attention to the namespace in registry-pull-secret.yaml: it must be created in K8s beforehand. Also check the base64 authentication info in data (paste the credentials generated when logging in to the Harbor registry here).
Apply each of the files above with: kubectl create -f <file name>
The output below shows that Jenkins is already running, on the 153 node. Now access it from a browser.
Looking at the Service, Jenkins's external access port is 30001.
4. Enter the access address: http://192.168.161.153:30001/
The first time you log in, Jenkins must be unlocked with a password; follow the prompt to retrieve it (it typically lives at /var/jenkins_home/secrets/initialAdminPassword inside the container).
Then choose which plugins to install: pick specific plugins directly if you need them, otherwise install the suggested ones.
5. To connect Jenkins to K8s, several plugins need to be installed.
Open System Management → Manage Plugins, and install:
Kubernetes Continuous Deploy, Kubernetes, GitLab Hook, GitLab, Build Authorization Token
6. With the plugins installed, start building the project.
I create a new item, select Pipeline, then click OK.
7. Don't configure the newly created task yet; first we have to set up the Jenkins-to-K8s hookup.
Click System Management → System Settings, scroll down, click "Add a new cloud", and select Kubernetes. If there is no Kubernetes option here, the plugin did not install successfully; please reinstall it.
Then fill in the configuration; only these two fields need to be set. For the URL, I use kube-dns for service discovery, so no actual IP address needs to be entered. That part is done. We still have to configure credentials, i.e. the SSH key, so code can be pulled from GitLab; anyone who has used GitLab knows that code can be pulled over git (SSH) or HTTP.
8. Add credentials.
I added two credentials here: one for SSH, and one for K8s. You can add these yourself. The key under Root is the private key; the public key is configured on the server side, in GitLab.
To configure SSH on GitLab: generate the SSH private/public key pair directly on the node server with ssh-keygen, then copy the public key's contents into GitLab.
6. Jenkins + GitLab Hooks
A task was created above. Now we configure its contents and hook it up with GitLab.
1. Open TestProject and go to its configuration. Follow the settings through to the end, then click Save. That completes the task configuration; the next step is to configure GitLab.
2. Configure GitLab.
In GitLab I created a project, TestProject.
Then go into the project and click Settings → Integrations.
Copy the URL and token from Jenkins over here, then click Save. The next step is to test whether this configuration works.
Testing the webhook created above makes GitLab simulate a code-push event; a 200 response means success.
Success.
7. Deploying the ASP.NET Core Project with K8s
GitHub address: the complete code is at github.com/louieguo/testproject; remember to fork me. Thanks.
Here I created an ASP.NET Core WebAPI project without making any changes, then added the deployment files to the project.
The deploy folder contains Jenkinsfile and deploy.yml (used to deploy the project image).
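The real deploy.yml is in the repository; as a sketch of what such a file contains — a Deployment pulling the project image from Harbor, plus a NodePort Service to expose it — note that every name, replica count, and port below is an assumption:

```yaml
# Hypothetical sketch of deploy.yml - consult the repository for the real file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testproject
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testproject
  template:
    metadata:
      labels:
        app: testproject
    spec:
      imagePullSecrets:
        - name: registry-pull-secret    # the pull secret set up earlier
      containers:
        - name: testproject
          image: 192.168.161.151/projectdemo/testproject   # pulled from Harbor
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: testproject
spec:
  type: NodePort          # exposes the app outside the cluster
  selector:
    app: testproject
  ports:
    - port: 80
      targetPort: 80
```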
Dockerfile: used to build the project image.
Here is the Dockerfile's content; the SDK image it uses is one I packaged myself and uploaded to my Harbor registry.
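The article shows the Dockerfile as a screenshot; a hedged sketch of a two-stage build of this kind, where the SDK and runtime image names stand in for the images the author pushed to Harbor's coresdk project, might read:

```dockerfile
# Hypothetical sketch - the real Dockerfile uses SDK images from the author's Harbor (coresdk project).
FROM 192.168.161.151/coresdk/dotnet-sdk AS build     # placeholder SDK image name
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

FROM 192.168.161.151/coresdk/aspnetcore-runtime      # placeholder runtime image name
WORKDIR /app
COPY --from=build /app .
EXPOSE 80
ENTRYPOINT ["dotnet", "TestProject.dll"]
```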
From here we can upload the code to GitLab, which automatically triggers a build.
After the upload, a build appears on the Jenkins side, and we can view the console output.
The output shows the build succeeded.
Check on the master:
My project is now running; access it with a browser as follows.
Take a look at the externally published port.
The run was successful. This article took a long time to write and some steps may be missing; feel free to leave a comment, and I will fill in the gaps later.
GitHub: fork me
QQ group: 787464275; everyone is welcome to join and chat.
If you think this article is good or helpful in some way, click the "Recommend" button in the bottom right corner to show your support, because that support is the biggest motivation for me to keep writing and sharing!
Louieguo
Disclaimer: If you repost this original blog post, please keep the original link or put my blog address at the beginning of the article. If you find errors, criticism is welcome. I generally cannot enable a reward function on my articles; if you have special needs, please contact me!