Android Things, part 6, a complete example: using TensorFlow to classify images

By Google Developer Expert (GDE) Wang Yucheng (York Wang)

A lot has been said so far, but without a complete example it may still feel abstract. So how do we write the complete code?

Deep learning is very hot right now, so on Android Things we will use the camera to capture pictures, let TensorFlow identify what is in the image, and finally have a speaker tell us the result.

Isn't that cool? If it takes a sentence that long just to describe the basic functionality, how long must the code be?

Project structure

Let's start with the Android Studio environment.

After starting Android Studio, be sure to upgrade SDK Tools to version 24 or above, and then upgrade the SDK to Android 7.0 or above. Let Android Studio update the related components on its own, then import the project. The structure of the project is as follows:

The ImageClassifier code is what interacts with TensorFlow, along with the handlers for the camera and image processing.

Let's take a look at the external libraries it references:

They include the Android Things and TensorFlow related libraries; the Android API version is, of course, 24. The Gradle dependencies and the filter in the Manifest are consistent with the development environment set up in the earlier articles.
The referenced TensorFlow library is a packaged AAR, TensorFlow-Android-Inference-alpha-debug.aar, which means we do not need an NDK environment to compile the project.

The main thing to focus on is the dependencies block, which includes the TensorFlow library and the Android Things library:
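Since the original snippet appears as a screenshot, here is a minimal sketch of what that dependencies block looks like; the exact versions and the libs directory for the AAR are assumptions:

```groovy
// app/build.gradle (sketch; versions and paths may differ in the actual sample)
repositories {
    flatDir {
        dirs 'libs'   // assumes the TensorFlow AAR was dropped into app/libs
    }
}

dependencies {
    // Android Things support library; 'provided' because the device image supplies it at runtime
    provided 'com.google.android.things:androidthings:0.1-devpreview'

    // TensorFlow inference library packaged as an AAR, so no NDK build is required
    compile(name: 'TensorFlow-Android-Inference-alpha-debug', ext: 'aar')
}
```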

Camera-related permissions are then requested in the Manifest. One thing to add: Android Things does not support dynamic (runtime) permissions, so they must be declared in the Manifest.
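As a sketch, the declaration in AndroidManifest.xml looks like this (the package name is a placeholder; the uses-library entry is what marks the app as an Android Things app):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.imageclassifier">

    <!-- Camera access; Android Things grants this at install time, with no runtime prompt -->
    <uses-permission android:name="android.permission.CAMERA" />

    <application>
        <!-- Required for apps that use the Android Things support library -->
        <uses-library android:name="com.google.android.things" />
    </application>
</manifest>
```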

Hardware connection

The next step is how the hardware is connected.

The hardware inventory is as follows:
An Android Things-compatible development board, such as the Raspberry Pi 3
An Android Things-compatible camera, such as the Raspberry Pi 3 Camera Module
Components:
1 push button (mounted on the breadboard)
2 resistors. One point needs explaining: the picture wires the circuit to the 5V supply, while GPIO pins and LEDs are generally rated for about 3V (some GPIOs are 5V-tolerant), so a 100-200 ohm resistor must be placed in series. To be safe, the 3.3V supply is recommended instead.
1 LED
1 breadboard
Several DuPont (jumper) wires
Optional: speakers or headphones
Optional: an HDMI display

Once the hardware is assembled, we need to understand the operation flow.

Operation flow

Following the previous tutorial, connect ADB with Android Studio, configure Wi-Fi for the board, and load the application onto the board.

The operation flow is as follows:
Restart the device and run the program until the LED starts flashing;
Point the camera at a cat, a dog, or some furniture;
Press the button to start taking a picture;
On the Raspberry Pi 3, the picture capture, the TensorFlow processing, and the TTS voice output generally complete within 1 second. The LED stays off while this is running;
Logcat prints the final result; if a display device is connected, the picture and the result are shown on it as well;
If speakers or headphones are attached, the result is read aloud.

Because the structure of the code is particularly simple, you only need to pay attention to a few key operations. Graphics and camera programming on Android should already be familiar to everyone, so I won't explain them here.

Code flow

The main thing to look at is how the LED is initialized; see the sketch after the next paragraph.

It should be noted that ImageClassifierActivity.java is the entry point and the application's only Activity; it is already declared in the Manifest. It initializes components such as the LED, the camera, and TensorFlow. The button we use sits on pin BCM32 and the LED on pin BCM6, and their initialization is completed in this Activity.
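Since the original code appears as a screenshot, here is a minimal sketch of that initialization using the Android Things peripheral APIs; TAG, mLedGpio, and mButtonDriver are assumed field names, while the pin names match the ones above:

```java
import java.io.IOException;
import android.util.Log;
import android.view.KeyEvent;
import com.google.android.things.contrib.driver.button.Button;
import com.google.android.things.contrib.driver.button.ButtonInputDriver;
import com.google.android.things.pio.Gpio;
import com.google.android.things.pio.PeripheralManagerService;

// Inside ImageClassifierActivity.onCreate(), roughly:
try {
    PeripheralManagerService service = new PeripheralManagerService();

    // LED on BCM6, driven as an output that starts switched off
    mLedGpio = service.openGpio("BCM6");
    mLedGpio.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);

    // Button on BCM32, wired active-low; presses are delivered as ENTER key events
    mButtonDriver = new ButtonInputDriver(
            "BCM32", Button.LogicState.PRESSED_WHEN_LOW, KeyEvent.KEYCODE_ENTER);
    mButtonDriver.register();
} catch (IOException e) {
    Log.e(TAG, "Error initializing the GPIO pins", e);
}
```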

The next part is the code that captures the button press; when the button is pressed, the camera starts capturing data.
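Because ButtonInputDriver reports presses as key events, the capture can be triggered from onKeyDown(). This is a sketch; setLedValue() and mCameraHandler.takePicture() are assumed helper names:

```java
@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    if (keyCode == KeyEvent.KEYCODE_ENTER) {
        // Turn the LED off while a capture and classification are in progress
        setLedValue(false);
        // Ask the camera for a single frame; the result arrives asynchronously
        mCameraHandler.takePicture();
        return true;
    }
    return super.onKeyDown(keyCode, event);
}
```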


After turning the camera data into a Bitmap, we call TensorFlow to process the image.
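The conversion usually happens in the ImageReader callback. A sketch, assuming the camera delivers JPEG frames and that onPictureTaken() is the hypothetical hand-off to the classifier:

```java
import java.nio.ByteBuffer;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.media.Image;
import android.media.ImageReader;

private final ImageReader.OnImageAvailableListener mOnImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {
            @Override
            public void onImageAvailable(ImageReader reader) {
                final Bitmap bitmap;
                // acquireNextImage() must be paired with close(); try-with-resources handles it
                try (Image image = reader.acquireNextImage()) {
                    // A JPEG frame carries its encoded bytes in a single plane
                    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                    byte[] bytes = new byte[buffer.remaining()];
                    buffer.get(bytes);
                    bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
                }
                onPictureTaken(bitmap); // hand the frame to TensorFlow
            }
        };
```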


This function hands the image to TensorFlow for processing and finally outputs the result to Logcat. If the TTS engine has been initialized in the code, the result is also converted to speech and read aloud. The most important piece seems to be the recognizeImage() interface of the TensorFlowClassifier class. Let's keep looking down.
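A sketch of that hand-off; the field names, the Recognition type, and the spoken phrasing are assumptions:

```java
import java.util.List;
import android.graphics.Bitmap;
import android.speech.tts.TextToSpeech;
import android.util.Log;

// mTensorFlowClassifier, mTtsEngine, setLedValue(), and Classifier.Recognition
// are assumed names from the surrounding Activity
private void onPictureTaken(final Bitmap bitmap) {
    // Run the classifier and log whatever it recognized
    final List<Classifier.Recognition> results =
            mTensorFlowClassifier.recognizeImage(bitmap);
    Log.d(TAG, "Recognition results: " + results);

    // If a TTS engine is available, read the top result aloud
    if (mTtsEngine != null && !results.isEmpty()) {
        mTtsEngine.speak("I see a " + results.get(0).getTitle(),
                TextToSpeech.QUEUE_FLUSH, null, "recognition");
    }

    // Turn the LED back on to signal readiness for the next shot
    setLedValue(true);
}
```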

This is the final step: calling TensorFlow to recognize the image (see the sketch after this list):
Convert the RGB image into data that TensorFlow can recognize;
Copy the data into TensorFlow;
Run recognition on the image and produce the result.
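Here is a minimal sketch of those three steps, written against the alpha-era inference AAR whose methods were fillNodeFloat/runInference/readNodeFloat (later versions renamed them to feed/run/fetch). The node names, input size, class count, and normalization constant are assumptions borrowed from the usual Inception setup, and the bitmap is assumed to be pre-scaled to the input size:

```java
import android.graphics.Bitmap;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

public float[] classify(Bitmap bitmap) {
    final int size = 224;              // input width/height the model expects (assumed)
    final float mean = 117f;           // per-channel mean used for normalization (assumed)
    int[] intValues = new int[size * size];
    float[] floatValues = new float[size * size * 3];
    float[] outputs = new float[1008]; // one score per class (Inception's 1008 assumed)

    // 1. Convert the RGB bitmap into the float array the model expects
    bitmap.getPixels(intValues, 0, size, 0, 0, size, size);
    for (int i = 0; i < intValues.length; ++i) {
        int p = intValues[i];
        floatValues[i * 3]     = ((p >> 16) & 0xFF) - mean;  // R
        floatValues[i * 3 + 1] = ((p >> 8) & 0xFF) - mean;   // G
        floatValues[i * 3 + 2] = (p & 0xFF) - mean;          // B
    }

    // 2. Copy the data into TensorFlow
    mInferenceInterface.fillNodeFloat(
            "input", new int[]{1, size, size, 3}, floatValues);

    // 3. Run the model and read back the per-class scores
    mInferenceInterface.runInference(new String[]{"output"});
    mInferenceInterface.readNodeFloat("output", outputs);
    return outputs;
}
```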

The process of calling TensorFlow is a lot of fun, and very convenient too. So why can TensorFlow suddenly identify what's in a picture? TensorFlow's official website gives the answer:
www.tensorflow.org/tutorials/image_recognition

It is worth noting that TensorFlow's image classification can either submit the image over the network for server-side recognition or run recognition offline against local data. Roughly 200 MB of recognition data can be placed locally, and images are recognized after being submitted to it. It can currently distinguish 1,000 categories of images. Which 1,000 categories? The list is already included in the project code.

Using TensorFlow to process Internet of Things data turns out to be particularly simple, and it's not only TensorFlow: Firebase can also be used on Android Things. This capability is simply amazing!

The project discussed today comes from a project that Google maintains on GitHub; its address is
github.com/androidthings/sample-tensorflow-imageclassifier

Of course, there are many other Android Things code samples on GitHub that you can refer to.

Want to write an application of your own? In fact, this project can be adapted with only small changes. For example, add an infrared sensor, and as soon as a creature comes near, take a picture and identify it immediately.

Let your imagination run wild.

Postscript

This article is the last in the series. After building the whole project, I found that Android Things brings developers a great deal of convenience. How do you flash an image? How do you use the SDK? How do you use Google's other services for IoT-related data processing? There are so many out-of-the-box options to choose from, and it is amazing how easy Android Things makes IoT application development!

If you have any ideas related to Android Things, you are welcome to leave a comment below, and we will pass the good suggestions on to the Android Things product team. Maybe someday your suggestion will become part of Android Things.
