TensorFlow Serving, GPU
TensorFlow Serving is an open-source tool designed to deploy trained models for inference.
TensorFlow Serving on GitHub: https://github.com/tensorflow/serving
This post mainly describes how to install TensorFlow Serving with support for GPU models.
Install the Bazel dependency
TensorFlow Serving requires Bazel 0.4.5 or above. See the Bazel installation instructions and download the installer script. Taking bazel-0.4.5-installer-linux-x86_64.sh as an example:
chmod +x bazel-0.4.5-installer-linux-x86_64.sh
./bazel-0.4.5-installer-linux-x86_64.sh --user
Add an environment variable to ~/.bashrc:
export PATH="$PATH:$HOME/bin"
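After reloading your shell configuration you can check that Bazel is on the PATH; this quick sanity check is not part of the original instructions:

source ~/.bashrc
bazel version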
gRPC
It is easiest to install it with pip:
sudo pip install grpcio
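To confirm the Python package is importable (a simple sanity check, not from the original post):

python -c "import grpc"
pip show grpcio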
Packages
Other required dependency packages:
sudo apt-get update && sudo apt-get install -y \
  build-essential \
  curl \
  libcurl3-dev \
  git \
  libfreetype6-dev \
  libpng12-dev \
  libzmq3-dev \
  pkg-config \
  python-dev \
  python-numpy \
  python-pip \
  software-properties-common \
  swig \
  zip \
  zlib1g-dev
Source Installation
git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving
The official example does not use the GPU, so deploying your own model this way can be slow (InceptionV3 inference may take 10+ seconds). To build a GPU-enabled version, use a build script such as compile_tensorflow_serving.sh.
Please make the appropriate substitutions according to your own GPU and file path.
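The post does not reproduce the script body; the following is only a minimal sketch of what such a compile_tensorflow_serving.sh could look like, assuming CUDA and cuDNN are installed under /usr/local/cuda and a compute capability of 6.1. All of these values are placeholders, not values from the original.

#!/bin/bash
# Sketch of a GPU build script for TensorFlow Serving; every path and the
# compute capability below are assumptions -- replace them for your machine.
export TF_NEED_CUDA=1
export CUDA_TOOLKIT_PATH=/usr/local/cuda      # assumed CUDA install prefix
export CUDNN_INSTALL_PATH=/usr/local/cuda     # assumed cuDNN install prefix
export TF_CUDA_COMPUTE_CAPABILITIES="6.1"     # e.g. 6.1 for a GTX 1080 (assumed)

# Run the TensorFlow submodule's configure first so the CUDA toolchain
# (@local_config_cuda) is generated, then build the model server with CUDA.
(cd tensorflow && yes "" | ./configure)
bazel build -c opt --config=cuda \
  --crosstool_top=@local_config_cuda//crosstool:toolchain \
  //tensorflow_serving/model_servers:tensorflow_model_server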
Run the following command to compile the server:
./compile_tensorflow_serving.sh
If the compilation succeeds, the model server binary can be found at the following path:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server
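You can quickly confirm the binary was actually produced, for example:

ls -lh bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server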
If you get an external/nccl_archive/src/nccl.h: No such file or directory error, you additionally need to install NCCL:
git clone https://github.com/NVIDIA/nccl.git
cd nccl
make CUDA_HOME=/usr/local/cuda
sudo make install
sudo mkdir -p /usr/local/include/external/nccl_archive/src
sudo ln -s /usr/local/include/nccl.h /usr/local/include/external/nccl_archive/src/nccl.h
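To verify that the header is now where the build expects it (a simple check, not from the original post):

ls /usr/local/include/external/nccl_archive/src/nccl.h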
Deploying the MNIST Model
Train a simple model and then save it. For details, see the basic serving tutorial.
rm -rf /tmp/mnist_model
bazel build //tensorflow_serving/example:mnist_saved_model
bazel-bin/tensorflow_serving/example/mnist_saved_model /tmp/mnist_model
Training model...
...
Done training!
Exporting trained model to /tmp/mnist_model
Done exporting!
Start the TensorFlow model server:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=mnist --model_base_path=/tmp/mnist_model/
Use nvidia-smi to check GPU memory usage and confirm that the server is actually using the GPU.
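For example, you can watch the GPU memory usage refresh every second while the server handles requests:

watch -n 1 nvidia-smi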
Test the MNIST model:
bazel build //tensorflow_serving/example:mnist_client
bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000