Deep Learning Application Series (III) | Build your own image recognition app using TFLite on Android


An indispensable path for putting deep learning into practice is deploying it to smart terminals, embedded devices, and similar targets. But terminal devices lack the powerful performance of GPU servers, so how can they run deep learning applications?

Fortunately, Google launched TensorFlow Mobile, and last year went a step further with the introduction of TensorFlow Lite (TFLite). The idea is to use transfer learning to train your own model on a GPU server, then port the customized model to TFLite; the terminal device only uses the model for forward inference to predict results. This article is based on the following references:

    • Theory: www.tensorflow.org/hub/tutorials/image_retraining#other_architectures
    • Practice, part one: codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html#0
    • Practice, part two: codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#0
    • Google's pre-trained models: github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md
    • TOCO official site: github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/toco

Once you have mastered these steps, you can easily customize your own image recognition application.

Step 1. Prepare the data

The data set is available at: http://download.tensorflow.org/example_images/flower_photos.tgz

This is a collection of flower classification pictures. After downloading and decompressing it, you can see that there are five categories: daisy, dandelion, rose, sunflower, and tulip.
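
As a quick sanity check, you can count the images per category after extraction. A minimal sketch, assuming the archive was unpacked into tf_files/flower_photos (the path used by the training commands later):

import os

DATA_DIR = 'tf_files/flower_photos'  # assumed extraction path

# Print each category subdirectory and how many files it contains.
for category in sorted(os.listdir(DATA_DIR)):
    path = os.path.join(DATA_DIR, category)
    if os.path.isdir(path):
        print(category, len(os.listdir(path)), 'images')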

Our goal is to obtain a flower recognition model by retraining a pre-trained model.

Step 2. Retrain the model

1. Pick a pre-trained model

From the list of pre-trained models provided by Google, we can see that there are two types: float models (floating-point models) and quantized models. What is the difference?

A float model is a high-precision model: the model file is larger, the recognition accuracy is higher, and the recognition time is longer, so it suits high-performance terminal devices. A quantized model is a low-precision model whose weights take a fixed 8-bit size, so the model file is small, the recognition accuracy is lower, and the recognition time is shorter, which suits low-performance terminal devices. A more detailed description can be found at www.tensorflow.org/performance/quantization.
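
To illustrate what "a fixed 8-bit size" means, here is a toy sketch of affine 8-bit quantization, the basic idea behind quantized models (the real TFLite scheme differs in detail):

import numpy as np

# Toy illustration: map float weights onto 0..255 with a scale and offset.
def quantize_uint8(weights):
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / 255.0 or 1.0  # avoid division by zero
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return q.astype(np.float32) * scale + lo

w = np.array([-1.2, 0.0, 0.7, 2.5], dtype=np.float32)
q, scale, lo = quantize_uint8(w)
print(dequantize(q, scale, lo))  # close to w, at a quarter of the storage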

Mobile devices are upgrading rapidly, and float models can generally be used on them. Within this type there are many pre-trained models to choose from; this article focuses on two architectures, Inception and MobileNet.

Note that MobileNet actually comes in many variants, such as mobilenet_v1_0.50_224. The third component of the name is the width multiplier that scales the model size (approximate, not exact), with four possible values: 0.25/0.50/0.75/1.0. The fourth component is the input picture size, with four possible values: 128/160/192/224.
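
For example, a tiny helper that pulls those two components out of a model name (purely illustrative):

# Illustrative: split 'mobilenet_v1_0.50_224' into its components.
def parse_mobilenet_name(name):
    parts = name.split('_')              # ['mobilenet', 'v1', '0.50', '224']
    width_multiplier = float(parts[2])   # one of 0.25 / 0.50 / 0.75 / 1.0
    image_size = int(parts[3])           # one of 128 / 160 / 192 / 224
    return width_multiplier, image_size

print(parse_mobilenet_name('mobilenet_v1_0.50_224'))  # (0.5, 224)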

If you are interested in the layers of each model, you can inspect them with the following code:

import tensorflow as tf

MODEL_PATH = '/home/yourname/Documents/mobilenet_v1_1.0_224/frozen_graph.pb'

def main(unused_argv):
    # Load the frozen GraphDef and import it into a fresh graph.
    with tf.Graph().as_default() as graph:
        with tf.gfile.FastGFile(MODEL_PATH, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            tf.import_graph_def(graph_def, name='')
    # Print every tensor in the graph: op names, shapes, and dtypes.
    for op in graph.get_operations():
        for tensor in op.values():
            print(tensor)

if __name__ == '__main__':
    tf.app.run()

Considering that the performance of our test phone is decent, we chose mobilenet_v1_1.0_224 as our pre-trained model.

2. Download the Training Code

You need to download the training code and the Android-related code as follows:


git clone https://github.com/googlecodelabs/tensorflow-for-poets-2
cd tensorflow-for-poets-2


Among these files, scripts/retrain.py is the one we care about. This code currently supports only the inception_v3 and mobilenet pre-trained models, and the default is inception_v3.

3. Retrain the Model

The training commands for the two models differ. If you use the default inception_v3 model, run the following command:

python -m scripts.retrain \
  --learning_rate=0.01 \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=4000 \
  --model_dir=tf_files/models/ \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --image_dir=tf_files/flower_photos

If you use the MobileNet model, run the following command:

python -m scripts.retrain \
  --learning_rate=0.01 \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=4000 \
  --model_dir=tf_files/models/ \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --image_dir=tf_files/flower_photos \
  --architecture=mobilenet_1.0_224

The command parameters are explained as follows:

--architecture: the architecture type; the script supports the mobilenet and inception_v3 families.
--image_dir: the data directory. Assuming you created the tf_files directory under tensorflow-for-poets-2, put the flower picture set into it.
--output_labels: the label file generated by training. Because the flower picture set is already categorized into subdirectories, retrained_labels.txt ends up containing the five flower category names listed above.
--output_graph: the model file generated by training.
--model_dir: the directory into which the pre-trained model is downloaded after the command starts.
--how_many_training_steps: the number of training steps; defaults to 4000 if not specified.
--bottleneck_dir: caches the top layer's training data (the "bottlenecks") as files.
--learning_rate: the learning rate.

In addition, there are some parameters that can be adjusted as needed:

--testing_percentage: how much of the picture set to set aside as test data; defaults to 10.
--validation_percentage: how much of the picture set to set aside as validation data; defaults to 10. With both left at their defaults, the training data amounts to 80%. (The split is deterministic, as sketched below.)
--eval_step_interval: how many steps between evaluations; defaults to 10.
--train_batch_size: the number of pictures trained in a single step; defaults to 100.
--validation_batch_size: the number of pictures validated at once; defaults to 100.
--random_scale: given a percentage, randomly scales up the training picture sizes; defaults to 0.
--random_brightness: given a percentage, randomly brightens or darkens the training pictures; defaults to 0.
--random_crop: given a percentage, randomly crops the training picture margins; defaults to 0.
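
For reference, retrain.py makes the train/validation/test split deterministic by hashing each file name, so an image always lands in the same set across runs. A simplified sketch of the idea (the real code also caps the number of images per class):

import hashlib

def which_set(filename, testing_percentage=10, validation_percentage=10):
    # Hash the file name into a stable percentage bucket.
    h = int(hashlib.sha1(filename.encode('utf-8')).hexdigest(), 16)
    pct = h % 100
    if pct < validation_percentage:
        return 'validation'
    if pct < validation_percentage + testing_percentage:
        return 'testing'
    return 'training'  # the remaining ~80%

print(which_set('daisy/3475870145_685a19116d.jpg'))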

4. Test the training results

We trained with mobilenet_1.0_224; now let's find a picture and see whether the model recognizes it correctly:

python -m scripts.label_image \
  --graph=tf_files/retrained_graph.pb  \
  --image=tf_files/flower_photos/daisy/3475870145_685a19116d.jpg

The result is:

Evaluation time (1-image): 1.010s

daisy (score=0.62305)
tulips (score=0.22490)
dandelion (score=0.14169)
roses (score=0.00966)
sunflowers (score=0.00071)

The daisy is accurately identified.
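
Under the hood, scripts/label_image.py simply loads the retrained graph and feeds the image through it. A minimal sketch of the same idea for the MobileNet case (the tensor names input and final_result come from the retraining step; 224 and the mean/std of 128 are the MobileNet defaults):

import numpy as np
import tensorflow as tf

GRAPH_PATH = 'tf_files/retrained_graph.pb'
LABEL_PATH = 'tf_files/retrained_labels.txt'
IMAGE_PATH = 'tf_files/flower_photos/daisy/3475870145_685a19116d.jpg'

# Import the retrained graph into the default graph.
graph_def = tf.GraphDef()
with tf.gfile.FastGFile(GRAPH_PATH, 'rb') as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    # Decode, resize to 224x224, and normalize with mean/std of 128.
    img = tf.image.decode_jpeg(tf.read_file(IMAGE_PATH), channels=3)
    img = tf.expand_dims(tf.to_float(img), 0)
    img = tf.image.resize_bilinear(img, [224, 224])
    img = (img - 128.0) / 128.0
    scores = sess.run('final_result:0', feed_dict={'input:0': sess.run(img)})

labels = [line.strip() for line in open(LABEL_PATH)]
for i in np.argsort(scores[0])[::-1]:
    print('%s (score=%.5f)' % (labels[i], scores[0][i]))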

5. Convert the model format

The .pb format cannot run on TFLite. TFLite uses Google's FlatBuffer format, which improves on Protocol Buffers for on-device use; concretely, the model becomes a file with the .tflite suffix.

The TOCO page listed above describes how to convert the .pb format into a .tflite file via the command line, and the conversion can also be done in code. TOCO supports not only the .pb format but also converting the HDF5 file format into TFLite, which makes it possible to share models with other frameworks.
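
As an example of the in-code route, newer TensorFlow 1.x versions expose a converter class. A sketch, assuming tf.lite.TFLiteConverter is available in your version (for inception_v3 the input array would be Mul and the size 299):

import tensorflow as tf

# Sketch: convert the retrained MobileNet graph in code instead of via toco.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'tf_files/retrained_graph.pb',
    input_arrays=['input'],
    output_arrays=['final_result'],
    input_shapes={'input': [1, 224, 224, 3]})
tflite_model = converter.convert()

with open('tf_files/optimized_graph.lite', 'wb') as f:
    f.write(tflite_model)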

How is the conversion done here? This article converts via the command-line mode. If the training model is inception_v3, the command is as follows:

toco \
  --graph_def_file=tf_files/retrained_graph.pb \
  --output_file=tf_files/optimized_graph.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,299,299,3 \
  --input_array=Mul \
  --output_array=final_result \
  --inference_type=FLOAT \
  --input_data_type=FLOAT

If the training model is MobileNet, the command is as follows:

toco \
  --graph_def_file=tf_files/retrained_graph.pb \
  --output_file=tf_files/optimized_graph.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --inference_type=FLOAT \
  --input_data_type=FLOAT

Some points need to be explained:

The --input_array parameter is the name of the input tensor op in the model graph. The input name for MobileNet is input, while for inception_v3 it is Mul. Why? Look at the scripts/retrain.py code:

if architecture == 'inception_v3':
    # pylint: disable=line-too-long
    data_url = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'
    # pylint: enable=line-too-long
    bottleneck_tensor_name = 'pool_3/_reshape:0'
    bottleneck_tensor_size = 2048
    input_width = 299
    input_height = 299
    input_depth = 3
    resized_input_tensor_name = 'Mul:0'
    model_file_name = 'classify_image_graph_def.pb'
    input_mean = 128
    input_std = 128
elif architecture.startswith('mobilenet_'):
    ...
    data_url = 'http://download.tensorflow.org/models/mobilenet_v1_'
    data_url += version_string + '_' + size_string + '_frozen.tgz'
    bottleneck_tensor_name = 'MobilenetV1/Predictions/Reshape:0'
    bottleneck_tensor_size = 1001
    input_width = int(size_string)
    input_height = int(size_string)
    input_depth = 3
    resized_input_tensor_name = 'input:0'

resized_input_tensor_name is the input name of the newly generated model, and you can inspect the new model's layers with the visualization code from "1. Pick a pre-trained model" above. The name must be written correctly; otherwise, running the command throws the exception "ValueError: Invalid tensors 'input' were found".
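
A quick way to avoid that exception is to check that the expected names actually exist in the graph before running toco; a small sketch:

import tensorflow as tf

# Sanity check (sketch): list whether the candidate tensor names exist
# in retrained_graph.pb before invoking toco.
graph_def = tf.GraphDef()
with tf.gfile.FastGFile('tf_files/retrained_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

node_names = {node.name for node in graph_def.node}
for wanted in ('input', 'Mul', 'final_result'):
    print(wanted, 'present' if wanted in node_names else 'missing')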

--output_array is the output name of the model. Why is it final_result? Because scripts/retrain.py contains:

parser.add_argument(
      '--final_tensor_name',
      type=str,
      default='final_result',
      help="""\
      The name of the output classification layer in the retrained graph.\
      """
  )

That is, the output name defaults to final_result.

--input_shape: note that the MobileNet training picture size is 224, while the inception_v3 training picture size is 299.

The final optimized_graph.lite is the model file we will port to Android.
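
Before porting it, you can sanity-check the .lite file on the desktop with the TFLite Python interpreter. A sketch, assuming a TensorFlow 1.x build where tf.lite.Interpreter is available:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='tf_files/optimized_graph.lite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image of the right shape; a real test would load a photo.
dummy = np.random.rand(1, 224, 224, 3).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))  # 5 class scores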

Step 3. Android TFLite

1. Download Android Studio

This step is not the focus of this article; please download Android Studio from developer.android.com/studio/ and install the latest SDK and NDK yourself.

2. Import the project

Import the tensorflow-for-poets-2/android/tflite code into Android Studio. It contains four classes; three deal with layout, and we only need to focus on the ImageClassifier.java class.

3. Import the model

The generated model can be copied into the project's assets directory from the command line:

cp tf_files/optimized_graph.lite android/tflite/app/src/main/assets/mobilenet.lite 
cp tf_files/retrained_labels.txt android/tflite/app/src/main/assets/mobilenet.txt

4. Modify the ImageClassifier.java class

There are four places to modify:

/** Name of the model file stored in Assets. */
private static final String MODEL_PATH = "mobilenet.lite";

/** Name of the label file stored in Assets. */
private static final String LABEL_PATH = "mobilenet.txt";

static final int DIM_IMG_SIZE_X = 224; // if Inception, change to 299
static final int DIM_IMG_SIZE_Y = 224; // if Inception, change to 299

5. Run and check the result

Once you've connected your phone, click "Run" to deploy the app to it; every image captured by the camera is scored against the five categories in the label file.

We can use Baidu to search for pictures of these five flower categories and check the recognition accuracy.

Postscript: in my tests on the flower picture set, the model retrained from mobilenet_1.0_224 had a higher recognition rate, while the model retrained from inception_v3 recognized poorly or inaccurately.

When working with a new data set, it is recommended to compare the two models and find the one that suits you best.
