Announcing TensorFlow Lite

Today, we're happy to announce the developer preview of TensorFlow Lite, TensorFlow's lightweight solution for mobile and embedded devices! TensorFlow has always run on many platforms, from racks of servers to tiny IoT devices, but as the adoption of machine learning models has grown exponentially over the last few years, so has the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference of on-device machine learning models.

It is designed from scratch to be:
• Lightweight: Enables inference of on-device machine learning models with a small binary size and fast initialization/startup.
• Cross-platform: A runtime designed to run on many different platforms, starting with Android and iOS.
• Fast: Optimized for mobile devices, including dramatically improved model loading times and support for hardware acceleration.

More and more mobile devices today incorporate purpose-built custom hardware to process ML workloads more efficiently. TensorFlow Lite supports the Android Neural Networks API to take advantage of these new accelerators as they become available.
TensorFlow Lite falls back to optimized CPU execution when accelerator hardware is not available, which ensures your models can still run fast on a large set of devices.
Architecture

The following diagram shows the architectural design of TensorFlow Lite:
The individual components are:
• TensorFlow Model: A trained TensorFlow model saved on disk.
• TensorFlow Lite Converter: A program that converts the model to the TensorFlow Lite file format (a conversion sketch follows after the component list below).
• TensorFlow Lite Model File: A model file format based on FlatBuffers, which has been optimized for maximum speed and minimum size.

The TensorFlow Lite Model File is then deployed within a mobile app, where:
• Java API: A convenience wrapper around the C++ API on Android.
• C++ API: Loads the TensorFlow Lite Model File and invokes the Interpreter. The same library is available on both Android and iOS.
• Interpreter: Executes the model using a set of operators. The Interpreter supports selective operator loading; without operators it is only 70KB, and 300KB with all the operators loaded. This is a significant reduction from the 1.5MB required by TensorFlow Mobile (with a normal set of operators).

On select Android devices, the Interpreter will use the Android Neural Networks API for hardware acceleration, or default to CPU execution if none is available. Developers can also implement custom kernels using the C++ API that can then be used by the Interpreter.
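To make the conversion step concrete, here is a minimal sketch in Python. Note that the developer preview shipped its own converter tool; the tf.lite.TFLiteConverter API used below is the later stable Python interface, and the SavedModel path is a placeholder:

```python
import tensorflow as tf

# Convert a trained TensorFlow SavedModel on disk to the
# FlatBuffers-based TensorFlow Lite file format.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
tflite_model = converter.convert()

# Write out the model file that will be bundled with the mobile app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```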
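The Java and C++ APIs are the mobile-facing entry points, but the same load-and-invoke workflow can be sketched with the Python tf.lite.Interpreter, which wraps the same underlying interpreter; the model filename and dummy input here are assumptions for illustration:

```python
import numpy as np
import tensorflow as tf

# Load the TensorFlow Lite Model File and build the interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the shape and dtype the model expects.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)

# Execute the model's operators and read back the result.
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```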
Models

TensorFlow Lite already has support for a number of models that have been trained and optimized for mobile:
• MobileNet: A class of vision models able to identify across 1,000 different object classes, specifically designed for efficient execution on mobile and embedded devices.
• Inception v3: An image recognition model, similar in functionality to MobileNet, that offers higher accuracy but also has a larger size.
• Smart Reply: An on-device conversational model that provides one-touch replies to incoming conversational chat messages. First-party and third-party messaging apps use this feature on Android Wear.

Inception v3 and MobileNet have been trained on the ImageNet dataset. You can easily retrain them on your own image datasets through transfer learning, as sketched below.
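As a rough illustration of that retraining workflow (not the exact codelab), the sketch below assumes a Keras MobileNet base frozen as a feature extractor, a hypothetical 5-class dataset of synthetic images, and a final conversion for on-device use:

```python
import numpy as np
import tensorflow as tf

# Load MobileNet pretrained on ImageNet, without its 1000-class head.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained feature extractor

# Attach a small classification head for a hypothetical 5-class task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data; replace with your own image dataset.
images = np.random.rand(8, 224, 224, 3).astype("float32")
labels = np.random.randint(0, 5, size=(8,))
model.fit(images, labels, epochs=1)

# Convert the retrained model to the TensorFlow Lite format.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
```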
What about TensorFlow Mobile?

As you may know, TensorFlow already supports mobile and embedded deployment of models through the TensorFlow Mobile API. Going forward, TensorFlow Lite should be seen as the evolution of TensorFlow Mobile, and as it matures it will become the recommended solution for deploying models on mobile and embedded devices. With this announcement, TensorFlow Lite is made available as a developer preview, and TensorFlow Mobile is still there to support production apps.
The scope of TensorFlow Lite is large and still under active development. With this developer preview, we have intentionally started with a constrained platform to ensure performance on some of the most important common models. We plan to prioritize future functional expansion based on the needs of our users. The goals for our continued development are to simplify the developer experience, and enable model deployment for a range of mobile and embedded devices.
We are excited for developers to get their hands on TensorFlow Lite. We plan to support and address our external community with the same intensity as the rest of the TensorFlow project. We can't wait to see what you can do with TensorFlow Lite.
For more information, check out the TensorFlow Lite documentation pages.
Stay tuned for more updates.
Happy TensorFlow Lite coding!
Original address: https://developers.googleblog.com/2017/11/announcing-tensorflow-lite.html
