Machine learning and Docker containers

Source: Internet
Author: User
Tags: artificial intelligence, machine learning, container, docker, training data

Machine learning (ML) and artificial intelligence (AI) are hot topics in the IT industry, and so are containers. In this experiment, we bring machine learning and containers together and verify that they can cooperate to accomplish an image classification task, using TensorFlow and Kontena to illustrate.

Research objectives

Setting goals makes an experiment more focused. Here, I set the following goals:

1. Understand machine learning and TensorFlow;

2. Verify that there is synergy between machine learning and containers;

3. Deploy a running machine learning solution on Kontena.

My final vision is as follows, divided into three parts:

 1. A simple API through which users can classify JPG images;

 2. Run a machine learning model on multiple instances to scale as needed;

 3. Follow the microservice architecture model.

The complete code is available on GitHub.

Introduction to TensorFlow

TensorFlow is an open source software library that performs numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting your code.

Put simply, you use TensorFlow to train a computer model with a set of training data. Once the model is trained, we can use it to analyze unknown data, such as the image classification we are discussing here. In general, the model predicts how well the input data matches certain "known" patterns learned during training.
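To make "matching known patterns" concrete, here is a minimal Go sketch. The labels and scores below are invented: a real classifier outputs one score per known class, and we simply report the strongest match.

```go
package main

import "fmt"

// bestMatch returns the label with the highest score. This mimics, in
// miniature, what classification means in practice: the model outputs
// one score per known class, and we report the strongest match.
func bestMatch(scores map[string]float64) (label string, score float64) {
	for l, s := range scores {
		if s > score {
			label, score = l, s
		}
	}
	return
}

func main() {
	// Hypothetical per-class scores for one input image.
	label, score := bestMatch(map[string]float64{
		"giant panda": 0.89,
		"hot dog":     0.02,
		"tabby cat":   0.05,
	})
	fmt.Printf("%s (%.2f)\n", label, score) // prints: giant panda (0.89)
}
```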

Here, we won't delve into how to train a model, because that requires a deeper understanding of machine learning concepts and of the TensorFlow system itself. For more information, check out TensorFlow's model training tutorials, and see how HBO's Silicon Valley inspired the Not Hotdog app, which recognizes whether an object is a hot dog.

The biggest advantage of a TensorFlow model is that once it is built, it can be used easily without any cumbersome back-end servers. Just like Not Hotdog, the model itself can "run" directly on a mobile device.

TensorFlow model and container

One of the goals of the experiment was to find out if there was synergy between machine learning and the container. It turns out that there is synergy between the two.

TensorFlow allows you to export a pre-trained model for use elsewhere, which is how the ML model can run on a mobile device to check whether an image contains a hot dog. That same portability is what makes containers a great tool for transferring and running machine learning models.

A nice-looking way to package the model is Docker's new multi-stage builds.

Step 1: A model-builder stage downloads a pre-trained checkpoint file and exports the model in the format the TensorFlow Serving system expects.

Step 2: The model data prepared in step 1 is copied into the TensorFlow Serving image. The final output is a single Docker image with everything pre-packaged, so we can serve our machine learning model with a single docker run ... command. If that is not good synergy, nothing is. From a machine learning novice's perspective, being able to run a model with one command is a great experience.
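The two steps above can be sketched as one multi-stage Dockerfile. This is only an illustration: the base images are the Bitnami ones linked below, but the export command, model URL, and paths are assumptions, not the exact contents of those repositories.

```dockerfile
# Stage 1: the "model-builder" downloads a pre-trained Inception v3
# checkpoint and exports it in the layout TensorFlow Serving expects.
# (The export command and paths are illustrative.)
FROM bitnami/tensorflow-inception:latest AS model-builder
RUN curl -sL http://download.tensorflow.org/models/image/imagenet/inception-v3-2016-03-01.tar.gz \
      | tar xz -C /tmp \
 && inception_saved_model --checkpoint_dir=/tmp/inception-v3 --output_dir=/models/inception

# Stage 2: copy only the exported model into the serving image, so the
# final artifact is one self-contained image.
FROM bitnami/tensorflow-serving:latest
COPY --from=model-builder /models/inception /bitnami/model-data
```

With such an image built, serving really is a single command, for example `docker run -p 8500:8500 my-inception-serving` (the image name here is a placeholder).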

Here, I use off-the-shelf base images to avoid the complicated work of installing the TensorFlow packages myself. The images come from:

        https://github.com/bitnami/bitnami-docker-tensorflow-serving

        https://github.com/bitnami/bitnami-docker-tensorflow-inception

API

The TensorFlow Serving system exposes a gRPC API. Given the general complexity of machine learning, that API is also relatively complex; at the very least, it is not something an arbitrary client program can use to casually classify a JPG image. Using the gRPC API means compiling the protobuf IDLs and issuing fairly complex requests. So this solution really needs a more approachable API, so that people can, say, send an image from a web page and get the classification results back.
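For a sense of what that gRPC surface looks like, here is a trimmed sketch in the spirit of TensorFlow Serving's prediction and classification protos. Treat the message shapes as approximations of the upstream definitions, not exact copies.

```protobuf
// Illustrative fragment only; see tensorflow_serving/apis/ for the
// real definitions.
service PredictionService {
  rpc Classify(ClassificationRequest) returns (ClassificationResponse);
}

message ClassificationRequest {
  ModelSpec model_spec = 1;  // which model (and version) to query
  Input input = 2;           // the encoded JPG is wrapped in here
}

message ClassificationResponse {
  ClassificationResult result = 1;  // per-class labels and scores
}
```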

Along the way I set myself a new goal: learn a little Go. Go was on my list of things to try, so it seemed fairly straightforward to write an API in Go that accepts JPG images and calls the TensorFlow Serving gRPC API for classification. Then again, theory and practice are two different things. The API itself was actually very easy to get up and running; the only difficulty was using the code generated from the gRPC protocol buffers. There seem to be some problems handling multiple packages when converting the protos to Go. Since I am completely new to Go, I ended up using search-and-replace to "fix" some of the package imports in the generated code.

The API, then, only needs to convert a JPG file into a TensorFlow Serving gRPC request and return the given classification results as JSON.
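The JSON side of that conversion can be sketched in a few lines of Go. The struct below is hypothetical: the real values come out of TensorFlow Serving's ClassificationResponse, and the gRPC call itself is omitted here.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ClassificationResult is the shape of one entry in the API's JSON
// reply. The field names are illustrative, not TensorFlow Serving's.
type ClassificationResult struct {
	Label string  `json:"label"`
	Score float32 `json:"score"`
}

// toJSON turns the (already extracted) per-class results of a gRPC
// classification call into the JSON body returned to the HTTP client.
func toJSON(results []ClassificationResult) (string, error) {
	b, err := json.Marshal(results)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	// Example: a response for a hypothetical panda test image.
	out, _ := toJSON([]ClassificationResult{
		{Label: "giant panda", Score: 0.89},
		{Label: "lesser panda", Score: 0.04},
	})
	fmt.Println(out)
}
```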

Run model and API

Once everything is in a container image, deploying it on any container orchestration system is a simple matter. I chose Kontena as the deployment target.

The most complicated part of the solution is the machine learning model, but even that becomes very simple when run as a standalone container:
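A Kontena stack description for the two services might look roughly like the following. The stack, service, and image names here are placeholders; the real configuration lives in the GitHub repository.

```yaml
# kontena.yml (illustrative sketch; names and images are placeholders)
stack: user/image-classifier
version: 0.1.0
services:
  inception:
    image: bitnami/tensorflow-inception:latest
    instances: 2          # scale the model by adding instances
  api:
    image: user/classifier-api:latest
    links:
      - inception         # the Go API calls the serving containers over gRPC
    ports:
      - 8080:8080
```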

Here, I omitted the load balancer configuration; check the GitHub repository for the full deployment details.

Test

With the simplified API in front of the TensorFlow model, image classification can be easily tested using ordinary curl:
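A request might look like the following transcript. The endpoint path, form field, and response values are invented for illustration; the actual API is in the GitHub repository.

```shell
$ curl -s -F 'image=@panda.jpg' http://localhost:8080/classify
[
  {"label": "giant panda", "score": 0.89},
  {"label": "lesser panda", "score": 0.04}
]
```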

The higher the score, the more confident the classification. The results show that our machine learning model can clearly recognize that the photo is a panda.

So what is the result of classifying this hot dog image?

The results look good here as well: the model also confidently recognizes the hot dog.

Summary

A container-based TensorFlow model does provide a good deployment method. The experiment shows that with the architectural pattern above, we can easily set up a scalable serving solution for a TensorFlow model. However, using the model from arbitrary client software clearly requires some kind of API wrapper, which hides the complexity of TensorFlow Serving's gRPC interface from the clients.

In many cases, using a pre-built model as-is is certainly not practical. As with any learning, this is a process that needs feedback: feedback amplifies learning and produces more accurate results. Currently, I am considering extending this approach with a continuous model trainer driven by such feedback. In some web UI, users could mark which classification of an instance is correct, or define a newly discovered class, continuously feeding information back into the model build. The model could also be exported periodically to trigger a new build of the model container.
