Core ML Machine Learning
At the WWDC 2017 developer conference, Apple announced a series of new machine learning APIs for developers, including a vision API for face detection and natural language processing APIs. These APIs are built on Apple's new Core ML framework. The core purpose of Core ML is to accelerate machine learning tasks on iPhone, iPad, and Apple Watch; it supports deep neural networks, recurrent neural networks, convolutional neural networks, support vector machines, tree ensembles, and linear models.
Overview
With Core ML, you can integrate trained machine learning models into your own applications.
Supported Operating Systems: iOS, macOS, tvOS, and watchOS
A trained model is the result of applying a machine learning algorithm to a set of training data. For example, a model trained on historical house prices in an area can predict a house's price when given inputs such as its number of bedrooms and bathrooms.
Core ML is the foundation for domain-specific frameworks and functionality. Core ML supports Vision for image analysis, Foundation for natural language processing (for example, NSLinguisticTagger), and GameplayKit for evaluating learned decision trees. Core ML itself is built on low-level primitives, including Accelerate, BNNS, and Metal Performance Shaders.
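To illustrate how Vision sits on top of Core ML, here is a minimal sketch of running an image classifier through Vision rather than calling the model directly. MyClassifier stands in for a hypothetical Xcode-generated model class; the Vision calls themselves (VNCoreMLModel, VNCoreMLRequest, VNImageRequestHandler) are the framework's actual API.

import CoreML
import Vision

// A minimal sketch: MyClassifier is a hypothetical Xcode-generated
// image classification model, not part of this article's sample.
func classify(_ image: CGImage) throws {
    // Wrap the Core ML model so Vision can drive it.
    let visionModel = try VNCoreMLModel(for: MyClassifier().model)

    // Vision scales and converts the image into the input
    // format that the model expects.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}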
Core ML is optimized for on-device performance, which minimizes memory footprint and power consumption. Running strictly on the device not only protects the privacy of user data, but also keeps the application functional and responsive when a network connection is unavailable.
Obtain the Core ML Model
Obtain a Core ML model to use in your application.
Core ML supports a variety of machine learning models, including neural networks, tree ensembles, support vector machines, and generalized linear models. Core ML requires models in the Core ML model format (that is, models with a .mlmodel file extension).
Apple provides several popular open-source models that are already in the Core ML model format. You can download these models and start using them in your application. In addition, many machine learning models and training datasets published by research institutions and universities are not in the Core ML model format; to use these models, you need to convert them first, as described in "Convert trained models to Core ML" below.
Integrate the Core ML model into the application
Add a simple model to an application, pass input data to the model, and process the model's predictions.
The sample application is available for download from Apple's developer site.
Overview
In this example, a trained model named MarsHabitatPricer.mlmodel is used to predict the price of a habitat on Mars.
Add the model to the Xcode Project
To add a model to the Xcode project, you only need to drag the model into the project navigator.
You can open this model in Xcode to view information about it, including the model type and its expected inputs and outputs. The model's inputs are the number of solar panels, the number of greenhouses, and the size of the habitat (in acres). The model's output is the predicted price of the habitat.
Use code to create a model
Xcode also uses the model's input and output information to automatically generate a custom programmatic interface for the model, which you use to interact with the model in code. For MarsHabitatPricer.mlmodel, Xcode generates interfaces to represent the model itself (MarsHabitatPricer), the model's inputs (MarsHabitatPricerInput), and the model's outputs (MarsHabitatPricerOutput).
You can use the initializer of the generated MarsHabitatPricer class to create the model:
let model = MarsHabitatPricer()
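Xcode also generates a prediction(input:) method that takes a MarsHabitatPricerInput. A short sketch of using it follows; the argument values here are placeholders, not taken from the sample.

// Using the generated input class directly; the values are
// made-up placeholders.
let input = MarsHabitatPricerInput(solarPanels: 1.0, greenhouses: 2.0, size: 750.0)
let output = try? model.prediction(input: input)
print(output?.price ?? 0)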
Obtain the input value and pass it to the model.
The sample application uses UIPickerView to obtain the input value of the model from the user.
func selectedRow(for feature: Feature) -> Int {
    return pickerView.selectedRow(inComponent: feature.rawValue)
}
let solarPanels = pickerDataSource.value(for: selectedRow(for: .solarPanels), feature: .solarPanels)
let greenhouses = pickerDataSource.value(for: selectedRow(for: .greenhouses), feature: .greenhouses)
let size = pickerDataSource.value(for: selectedRow(for: .size), feature: .size)
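Feature and pickerDataSource are helpers defined in the sample app. A hypothetical sketch of what they could look like follows; the real sample's types and values differ.

// Hypothetical sketches of the sample's helper types.
enum Feature: Int {
    case solarPanels = 0, greenhouses, size
}

struct PickerDataSource {
    // Maps a selected picker row to a numeric feature value.
    func value(for row: Int, feature: Feature) -> Double {
        switch feature {
        case .solarPanels, .greenhouses:
            return Double(row + 1)          // e.g. 1, 2, 3, ...
        case .size:
            return Double((row + 1) * 250)  // e.g. size in acres
        }
    }
}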
Use Models for Prediction
The MarsHabitatPricer class has a generated method named prediction(solarPanels:greenhouses:size:) that predicts the price from the model's input values, in this case the number of solar panels, the number of greenhouses, and the size of the habitat (in acres). The return value of this method is a MarsHabitatPricerOutput instance, named marsHabitatPricerOutput here.
guard let marsHabitatPricerOutput = try? model.prediction(solarPanels: solarPanels, greenhouses: greenhouses, size: size) else {
    fatalError("Unexpected runtime error.")
}
By reading the price property of marsHabitatPricerOutput, you can obtain the predicted price and display it in the application's UI.
let price = marsHabitatPricerOutput.price
priceLabel.text = priceFormatter.string(for: price)
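priceFormatter is defined elsewhere in the sample. A plausible definition is a currency-style NumberFormatter; the sample's actual configuration may differ.

import Foundation

// A plausible stand-in for the sample's priceFormatter.
let priceFormatter: NumberFormatter = {
    let formatter = NumberFormatter()
    formatter.numberStyle = .currency
    formatter.maximumFractionDigits = 0
    return formatter
}()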
Note:
The generated prediction(solarPanels:greenhouses:size:) method can throw an error. The most common error you will encounter with Core ML occurs when the type of the input data passed to the method does not match the input type the model expects, for example an image in the wrong format. In this example, the expected input types are all Double, so any type mismatch is caught at compile time; if a runtime error does occur, the sample application raises a fatal error.
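In a production application you would normally catch and handle the thrown error instead of crashing. A minimal sketch:

// Handle the prediction error instead of calling fatalError.
do {
    let output = try model.prediction(solarPanels: solarPanels, greenhouses: greenhouses, size: size)
    priceLabel.text = priceFormatter.string(for: output.price)
} catch {
    // In a real app, surface this to the user or log it.
    print("Prediction failed: \(error)")
}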
Build and run a Core ML Application
Xcode compiles the Core ML model into a resource that is optimized to run on a device. This optimized representation of the model is included in your app bundle and is used to make predictions while the app is running on the device.
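Xcode performs this compilation at build time. Core ML can also compile a .mlmodel file at runtime with MLModel.compileModel(at:), which is useful for models downloaded after the app ships. A minimal sketch, assuming modelURL points to a downloaded .mlmodel file:

import CoreML

// Compile a downloaded .mlmodel at runtime; compileModel(at:)
// returns the URL of the optimized compiled representation.
func loadDownloadedModel(at modelURL: URL) throws -> MLModel {
    let compiledURL = try MLModel.compileModel(at: modelURL)
    return try MLModel(contentsOf: compiledURL)
}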
Convert trained models to Core ML
Convert the trained model created by a third-party machine learning tool to the Core ML Model format.
Overview
If you have created and trained models with a supported third-party machine learning tool, you can use Core ML Tools to convert them to the Core ML model format. Table 1 lists the supported models and third-party tools.
Note:
Core ML Tools is a Python package (coremltools) hosted on the Python Package Index (PyPI). For more information about Python packages, see the Python Packaging User Guide.
Model Conversion
Use the Core ML converter that corresponds to your model's third-party tool. Call the converter's convert method, then save the result in the Core ML model format (.mlmodel).
For example, if your model was created using Caffe, you pass the Caffe model (.caffemodel) to coremltools.converters.caffe.convert:
import coremltools
coreml_model = coremltools.converters.caffe.convert('my_caffe_model.caffemodel')
Then save the result as the Core ML Model format.
coreml_model.save('my_model.mlmodel')
Depending on your model, you may need to update the input, output, and label parameters, or you may need to declare image names, types, and formats. Because the available options vary with each tool, the conversion tools come with more detailed documentation.
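For example, the Caffe converter accepts optional parameters for image inputs and class labels. A sketch follows, with hypothetical file names; the exact parameters vary by coremltools version, so check the converter's documentation.

import coremltools

# A sketch with hypothetical file names; parameter availability
# varies across coremltools versions.
coreml_model = coremltools.converters.caffe.convert(
    ('my_caffe_model.caffemodel', 'my_deploy.prototxt'),
    image_input_names='data',    # treat the 'data' input as an image
    class_labels='labels.txt'    # attach human-readable class labels
)
coreml_model.save('my_model.mlmodel')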
Alternatively, you can write a custom conversion tool.
If the format you need to convert is not listed in Table 1, you can create your own conversion tool.
Writing a custom conversion tool involves translating the model's input, output, and architecture representations into the Core ML model format. You do this by defining the model architecture layer by layer and specifying the connections between layers. Use the conversion tools provided with Core ML Tools as a reference; they demonstrate how model types created by various third-party tools are converted to the Core ML model format.
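Core ML Tools also exposes builder APIs that a custom converter can target. The following rough sketch emits a one-layer model with the neural network builder; a real converter would walk the source model's layers and add the corresponding Core ML layers and connections. API details vary across coremltools versions.

import numpy as np
import coremltools
from coremltools.models import datatypes
from coremltools.models.neural_network import NeuralNetworkBuilder

# A rough sketch: declare the model's interface, add one layer,
# and save the result in the Core ML model format.
input_features = [('input', datatypes.Array(4))]
output_features = [('output', datatypes.Array(2))]
builder = NeuralNetworkBuilder(input_features, output_features)
builder.add_inner_product(name='dense_1',
                          W=np.zeros((2, 4)),  # weights from the source model
                          b=np.zeros(2),       # bias from the source model
                          input_channels=4,
                          output_channels=2,
                          has_bias=True,
                          input_name='input',
                          output_name='output')
coremltools.models.MLModel(builder.spec).save('converted.mlmodel')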
Note:
The Core ML Model format is defined by a series of Protocol Buffer files. For details, see Core ML Model Specification.
Core ML API
Use the Core ML API directly to support custom workflows and more advanced use cases.
In most cases, you interact only with the interface that Xcode dynamically generates when you add a model to your project. You can use the Core ML API directly for custom workflows or more advanced use cases. For example, if you need to collect input data asynchronously into a custom structure before making predictions, you can adopt the MLFeatureProvider protocol on that structure to provide input features to the model.
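A minimal sketch of such a feature provider for the habitat model's three Double inputs (the feature names are assumed to match the model's declared input names):

import CoreML

// A custom feature provider; feature names must match the
// model's declared inputs.
struct HabitatInput: MLFeatureProvider {
    var solarPanels: Double
    var greenhouses: Double
    var size: Double

    var featureNames: Set<String> {
        return ["solarPanels", "greenhouses", "size"]
    }

    func featureValue(for featureName: String) -> MLFeatureValue? {
        switch featureName {
        case "solarPanels": return MLFeatureValue(double: solarPanels)
        case "greenhouses": return MLFeatureValue(double: greenhouses)
        case "size": return MLFeatureValue(double: size)
        default: return nil
        }
    }
}

An instance of this structure can then be passed to the underlying MLModel through its prediction(from:) method.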
For the full list of APIs, see the Core ML API reference.