Linux can host .NET Core through Docker!


This article builds on my previous article, Getting Started with .NET Core. First I upgraded the RESTful API from .NET Core RC1 to .NET Core 1.0, then I added Docker support and describe how to host it in a Linux production environment.

This was my first contact with Docker, and I am a long way from being a Linux guru, so many of the ideas here are those of a novice.

Prerequisites

Install .NET Core on your machine as described at https://www.microsoft.com/net/core. This also installs the dotnet command-line tool, and on Windows the latest Visual Studio tooling.

Source Code

You can find the latest complete source code on GitHub.

Converting to .NET Core 1.0

Naturally, when I thought about how to upgrade the API from .NET Core RC1 to .NET Core 1.0, the first place I turned for help was a Google search. I followed these two very comprehensive guides to do the upgrade:

    • Migrating from DNX to the .NET Core CLI
    • Migrating from ASP.NET 5 RC1 to ASP.NET Core 1.0

If you are migrating your own code, I suggest reading both guides carefully, because I got confused and frustrated when I tried to work through the second article without having read the first.

I won't describe the changes in detail because you can look at the commit on GitHub. Here is a summary of the changes I made:

    • Updated the version numbers in global.json and project.json
    • Removed obsolete sections from project.json
    • Used the lightweight ControllerBase instead of Controller, because I don't need methods related to MVC views (an optional change)
    • Removed the Http prefix from helper methods, for example: HttpNotFound → NotFound
    • LogVerbose → LogTrace
    • Namespace changes: Microsoft.AspNetCore.*
    • Used SetBasePath in Startup (without it, appsettings.json will not be found)
    • Ran via WebHostBuilder instead of WebApplication.Run
    • Removed Serilog (at the time of writing, it did not support .NET Core 1.0)

The only thing that really hurt was having to remove Serilog. I could have implemented my own file logger, but I deleted the file logging feature because I didn't want to spend effort on it for this exercise.

Unfortunately, a large number of third-party developers are now playing catch-up with .NET Core 1.0, and I have a lot of sympathy for them, because they usually work on their libraries in their spare time and can't come close to matching Microsoft's resources. I recommend reading Travis Illig's article .NET Core 1.0 is Released, but Where is Autofac? for a third-party developer's point of view.

With these changes in place, I can dotnet restore, build, and run from the project.json directory, and the API works just as before.

Running via Docker

At the time of writing, Docker only works properly on Linux. There is beta support for Docker on Windows and OS X, but both rely on virtualization, so I chose to run Ubuntu 14.04 as a virtual machine. If you haven't installed Docker yet, follow the instructions to install it.

I've read a few things about Docker recently, but until now I hadn't really used it for anything. I'll assume the reader has no Docker knowledge, so I'll explain all the commands I use.

Hello Docker

After installing Docker on Ubuntu, my next step was to get .NET Core running inside Docker as described at https://www.microsoft.com/net/core#docker.

Start by launching a container with .NET Core installed:

docker run -it microsoft/dotnet:latest

The -it option means interactive, so after executing this command you are inside the container and can run any bash commands you like.

Then we can execute the following five commands to run Microsoft's sample .NET Core console application inside Docker:

mkdir hwapp
cd hwapp
dotnet new
dotnet restore
dotnet run

You can leave the container by running exit, and then run docker ps -a, which shows the exited container you created. You can clean up the container by running docker rm <container_name>.

Mounting the Source Code

My next step was to use the same microsoft/dotnet image as above, but to mount the source code of our application into the container as a data volume.

First, check out the repository at the relevant commit:

git clone https://github.com/niksoper/aspnet5-books.git
cd aspnet5-books/src/MvcLibrary
git checkout dotnet-core-1.0

Now start a container running .NET Core 1.0 with the source code mounted at /books. Note: change the /path/to/repo part to match your machine:

docker run -it \
  -v /path/to/repo/aspnet5-books/src/MvcLibrary:/books \
  microsoft/dotnet:latest

Now you can run the application in the container!

cd /books
dotnet restore
dotnet run

As a proof of concept this is great, but we don't want to have to worry about mounting the source code into a container every time we run the application.

Adding a Dockerfile

My next step was to introduce a Dockerfile, which makes it easy to launch the application in its own container.

My Dockerfile lives in the src/MvcLibrary directory alongside project.json and looks like this:

FROM microsoft/dotnet:latest

# Create a directory for the application source code
RUN mkdir -p /usr/src/books
WORKDIR /usr/src/books

# Copy the source code and restore dependencies
COPY . /usr/src/books
RUN dotnet restore

# Expose the port and run the application
EXPOSE 5000
CMD ["dotnet", "run"]

Strictly speaking, the RUN mkdir -p /usr/src/books command is not needed, because COPY automatically creates any missing directories.

Docker images are built in layers: we start from an image containing .NET Core and add a layer that builds the application from source, then runs it.

After adding the Dockerfile, I build an image with the following command and start a container from the generated image (make sure you are in the same directory as the Dockerfile, and use your own username):

docker build -t niksoper/netcore-books .
docker run -it niksoper/netcore-books

You should see the application work just as before, but this time we don't need to mount the source code, because it is already included in the Docker image.

Exposing and Publishing Ports

This API isn't very useful unless we can communicate with it from outside the container. Docker has the concepts of exposing and publishing ports, and they are two quite different things.

According to the official Docker documentation:
The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. EXPOSE does not make the ports of the container accessible to the host. To do that, you must either use the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports.
So EXPOSE only adds metadata to the image, telling its consumers which ports to publish. Technically, I could have omitted EXPOSE 5000 because I know which port the API listens on, but leaving it in is helpful and recommended.

At this stage, I want to access the API directly from the host, so I need to publish the port via the -p flag, which forwards requests from port 5000 on the host to port 5000 in the container, regardless of whether the port was exposed via EXPOSE in the Dockerfile:

docker run -d -p 5000:5000 niksoper/netcore-books

The -d flag tells Docker to run the container in detached mode, so we won't see its output, but it is still running and listening on port 5000. You can confirm this with docker ps.

So, to celebrate, I made a request from the host to the container:

curl http://localhost:5000/api/books

It didn't work.

Repeating the same curl request, I saw one of two errors: either curl: (56) Recv failure: Connection reset by peer or curl: (52) Empty reply from server.

I went back to the docker run documentation and double-checked that the -p option I was using and the EXPOSE instruction in the Dockerfile were correct. I couldn't find any problems, which left me getting a little frustrated.

Eventually I decided to consult a local Scott Logic DevOps guru, Dave Wybourn (also mentioned in this article on Docker Swarm), whose team had run into this exact problem. The issue was that I had not configured Kestrel, the new lightweight, cross-platform web server for .NET Core.

By default, Kestrel listens on http://localhost:5000. The problem is that localhost here is a loopback interface.

According to Wikipedia:
In computer networking, localhost is a hostname that means this computer. It is used to access network services running on the host via the loopback network interface. Using the loopback interface bypasses any local network interface hardware.
This is a problem when running inside a container, because localhost is only reachable from within the container. The solution was to update the Main method in Startup.cs to configure the URLs Kestrel listens on:

public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseUrls("http://*:5000") // listen on all network interfaces
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();

    host.Run();
}
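To see what the localhost-vs-all-interfaces distinction means at the socket level, here is a small sketch using plain Python sockets (no Docker or .NET required; the ports are chosen by the OS):

```shell
python3 - <<'EOF'
import socket

# Bind to the loopback interface only: this is what Kestrel's default
# http://localhost:5000 does, so the socket is unreachable from outside
# the host (or from outside the container, when running in Docker).
s = socket.socket()
s.bind(('127.0.0.1', 0))
print(s.getsockname()[0])   # 127.0.0.1
s.close()

# Bind to all interfaces: the effect of UseUrls("http://*:5000"), which
# makes the socket reachable via any of the machine's network interfaces.
s = socket.socket()
s.bind(('0.0.0.0', 0))
print(s.getsockname()[0])   # 0.0.0.0
s.close()
EOF
```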

With this additional configuration in place, I can rebuild the image and run the application in a container that accepts requests from the host:

docker build -t niksoper/netcore-books .
docker run -d -p 5000:5000 niksoper/netcore-books
curl -i http://localhost:5000/api/books

I now get the following response:

HTTP/1.1 200 OK
Date: Tue, 15:25:43 GMT
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
Server: Kestrel

[{"id":"1","title":"RESTful API with ASP.NET Core MVC 1.0","Author":"Nick Soper"}]

Running Kestrel in Production

Microsoft's introduction:
Kestrel is great for serving dynamic content from ASP.NET; however, the web serving parts aren't as feature-rich as full-featured servers like IIS, Apache, or Nginx. A reverse proxy server can let you offload work like serving static content, caching requests, compressing requests, and SSL termination from the HTTP server.
So I need to set up Nginx as a reverse proxy server on my Linux machine. Microsoft provides a walkthrough describing how to publish to a Linux production environment. I'll summarize the steps here:

    1. Produce a self-contained package for the application via dotnet publish
    2. Copy the published application to the server
    3. Install and configure Nginx (as a reverse proxy server)
    4. Install and configure supervisor (to keep the application running)
    5. Install and configure AppArmor (to limit the resources the app can use)
    6. Configure the server firewall
    7. Harden Nginx (build from source and configure SSL)

This is beyond the scope of this article, so I will concentrate on how to configure Nginx as a reverse proxy server. Naturally, I do this with Docker.

Running Nginx in Another Container

My goal is to run Nginx in a second Docker container and configure it as a reverse proxy to our application container.

I'm using the official Nginx image from Docker Hub. First I tried this:

docker run -d -p 8080:80 --name web nginx

This starts a container running Nginx and maps port 8080 on the host to port 80 in the container. Opening http://localhost:8080 in a browser now shows the default Nginx landing page.

Now that we have confirmed how easy it is to run Nginx, we can kill the container:

docker rm -f web

Configuring Nginx as a Reverse Proxy

You can configure Nginx as a reverse proxy server by editing its configuration file at /etc/nginx/conf.d/default.conf, like this:

server {
    listen 80;
    location / {
        proxy_pass http://localhost:6666;
    }
}

This configuration makes Nginx proxy all requests to the root path through to http://localhost:6666. Remember that localhost here refers to the container running Nginx. We can use our own configuration file inside the Nginx container by mounting it as a volume:

docker run -d -p 8080:80 \
  -v /path/to/my.conf:/etc/nginx/conf.d/default.conf \
  nginx

Note: this maps a single file from the host into the container, rather than a whole directory.

Communicating between Containers

Docker allows containers to communicate over shared virtual networks. By default, all containers started by the Docker daemon have access to a virtual network called bridge, which lets one container reach another on the same network by IP address and port.

You can find a container's IP address by inspecting it. I'll start a container from the niksoper/netcore-books image I created earlier and inspect it:

docker run -d -p 5000:5000 --name books niksoper/netcore-books
docker inspect books

In the output we can see that this container's IP address is "IPAddress": "172.17.0.3".
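Rather than eyeballing the JSON, you can also extract the address with docker inspect --format '{{ .NetworkSettings.IPAddress }}' books. The snippet below sketches the same extraction against a trimmed, assumed sample of docker inspect output, so it runs without Docker installed:

```shell
# A trimmed, assumed sample of what `docker inspect books` returns.
cat > /tmp/inspect-sample.json <<'EOF'
[{"NetworkSettings": {"IPAddress": "172.17.0.3"}}]
EOF

# `docker inspect` emits a JSON array with one entry per container,
# so take element 0 and walk down to NetworkSettings.IPAddress.
python3 - <<'EOF'
import json
with open('/tmp/inspect-sample.json') as f:
    data = json.load(f)
print(data[0]['NetworkSettings']['IPAddress'])  # 172.17.0.3
EOF
```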

So now, if I create the following Nginx configuration file and start an Nginx container with it, it will proxy requests through to my API:

server {
    listen 80;
    location / {
        proxy_pass http://172.17.0.3:5000;
    }
}

Now I can start an Nginx container using that configuration file (note that I map port 8080 on the host to port 80 in the Nginx container):

docker run -d -p 8080:80 \
  -v ~/dev/nginx/my.nginx.conf:/etc/nginx/conf.d/default.conf \
  nginx

A request to http://localhost:8080 is now proxied through to the application; the Server header in the curl response confirms it is Kestrel answering behind the proxy.

Docker Compose

At this point I was pleased with my progress, but I thought there must be a better way of configuring Nginx that doesn't require knowing the exact IP address of the application container. Another local Scott Logic DevOps guru, Jason Ebbin, improved on this and suggested Docker Compose.

At a high level, Docker Compose makes it easy to start a set of interconnected containers using a declarative syntax. I won't dwell on how Docker Compose works because you can read about it in this previous article.

This is the docker-compose.yml file I'm using:

version: '2'
services:
    books-service:
        container_name: books-api
        build: .
    reverse-proxy:
        container_name: reverse-proxy
        image: nginx
        ports:
         - "9090:8080"
        volumes:
         - ./proxy.conf:/etc/nginx/conf.d/default.conf

This uses version 2 syntax, so you need at least version 1.6 of Docker Compose for it to work.

This file tells Docker to create two services: one for the application and the other for the Nginx reverse proxy server.

books-service

This container, built from the Dockerfile in the same directory as docker-compose.yml, is named books-api. Note that this container doesn't need to publish any ports: it only needs to be reachable from the reverse proxy, not from the host operating system.

reverse-proxy

This starts a container called reverse-proxy based on the nginx image, mounting the proxy.conf file from the current directory as its configuration. It maps port 9090 on the host to port 8080 in the container, which lets us access the container from the host at http://localhost:9090.

The proxy.conf file looks like this:

server {
    listen 8080;
    location / {
        proxy_pass http://books-service:5000;
    }
}

The key point here is that we can now refer to books-service by name, so we don't need to know the IP address of the books-api container!

Now we can start the two containers with the reverse proxy in front (-d means detached, so we don't see the containers' output):

docker-compose up -d

Verify the containers we created:

docker ps

Finally, verify that we can reach the API through the reverse proxy:

curl -i http://localhost:9090/api/books

How Does This Work?

Docker Compose achieves this by creating a new virtual network called mvclibrary_default, which is used by both the books-api and reverse-proxy containers (the name is based on the parent directory of the docker-compose.yml file).

Verify that the network exists with docker network ls:

You can see the details of the new network with docker network inspect mvclibrary_default:

Note that Docker has assigned a subnet to the network: "Subnet": "172.18.0.0/16". The /16 part is Classless Inter-Domain Routing (CIDR) notation; a full explanation is beyond the scope of this article, but CIDR simply denotes a range of IP addresses. Running docker network inspect bridge shows its subnet as "Subnet": "172.17.0.0/16", so the two networks do not overlap.
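As a quick illustration of what /16 means, here is a sketch using Python's standard ipaddress module (nothing Docker-specific): the first 16 bits identify the network, leaving 2^16 = 65536 addresses, and the two subnets above occupy disjoint ranges:

```shell
python3 - <<'EOF'
import ipaddress

compose_net = ipaddress.ip_network('172.18.0.0/16')  # mvclibrary_default
bridge_net  = ipaddress.ip_network('172.17.0.0/16')  # default bridge

print(compose_net.num_addresses)         # 65536 addresses in a /16
print(compose_net.broadcast_address)     # 172.18.255.255 (top of the range)
print(compose_net.overlaps(bridge_net))  # False: the subnets are disjoint
EOF
```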

Now confirm that the application's container is using the network with docker inspect books-api:

Note the container's two aliases ("Aliases"): the container identifier (3c42db680459) and the service name from docker-compose.yml (books-service). We referenced the application container via the books-service alias in the custom Nginx configuration file. The network could have been created manually with docker network create, but I like Docker Compose because it bundles container creation and dependencies cleanly and neatly.

Conclusion

So now I can get the application running with Nginx on a Linux system in a few simple steps, without any long-term changes to the host operating system:

git clone https://github.com/niksoper/aspnet5-books.git
cd aspnet5-books/src/MvcLibrary
git checkout blog-docker
docker-compose up -d
curl -i http://localhost:9090/api/books

I know that what I've covered in this article is not truly production-ready, because I haven't written anything about the following topics, most of which deserve a complete article of their own:

    • Security considerations such as firewalls and SSL configuration
    • How to ensure the application keeps running
    • How to be selective about what to include in a Docker image (I put everything in via the Dockerfile)
    • Databases: how to manage them in containers

This has been a really interesting learning experience for me, because for a while I've been curious to explore the cross-platform support of ASP.NET Core, and exploring the world of DevOps with the "configuration as code" Docker Compose approach has also been very enjoyable and instructive.

If you're curious about Docker, I encourage you to try it. It may take you out of your comfort zone, but you might just like it!

This article was reproduced from: http://www.linuxprobe.com/docker-linux-core.html
