The Stages a Machine Learning Project Must Pass Through

Source: Internet
Author: User
Keywords: machine learning, machine learning introduction, machine learning algorithms
As the field of machine learning and its underlying technology develop, the stages and workflow of a machine learning project continue to evolve.

The emergence of GPU-enabled mobile devices has introduced a new stage into the traditional machine learning workflow, and with it new roles and positions.

 

The goals of this article:

A detailed analysis of each stage of a machine learning project.
The roles involved in each stage.
The deliverable produced at the end of each stage.
 
Let's dive in.
 
Problem definition

   
Problem definition is the first stage of a computer vision/machine learning project. The focus here is on fully understanding the problem that machine learning is expected to solve.
 
Usually this stage requires someone to describe the problem, record it in a specified format, and describe in detail their personal experience of it in each scenario.
 
This stage also needs to capture the ideal solution from the perspective of the person describing the problem.
 
The person describing the problem can be a customer, user, or colleague.
 
The deliverables of this stage are documents (Word or PDF), which include (but are not limited to) the following:
 
Problem statement
The ideal solution
Understanding and insight into the problem
Skill requirements
 
Related role: IT business analyst
 


Research

    
This stage lays the foundation for the stages that follow (planning the implementation, development work, and so on).
 
At this stage we explore the form the solution will take, and study the structure, format, and sources of the available data.
 
A combined understanding of the problem, the proposed solution, and the available data helps us choose a suitable machine learning model and, ultimately, realize the ideal solution.
 
At this stage we should also research the hardware and software required to implement the algorithms and models; doing so saves considerable time in later stages.
 
The deliverables of this stage are documents (Word or PDF) covering research on the following:
 
Data structure and source
The form of the solution
Neural network/model architecture
Algorithm
Hardware requirements
Software requirements
 
Related positions: machine learning researcher, data scientist, AI researcher, etc.
 


Data aggregation/mining/crawling

              
Data is the driving force for machine learning and computer vision applications. Among them, data aggregation is a crucial step, which can lay the foundation for the efficiency and performance of the model.
 
The required output of the solution determines which data must be aggregated.
 
Data understanding is very important. Data from any source can be inspected and analyzed using visualization tools or statistical methods.
 
Data inspection improves the integrity and credibility of the data by confirming its sources.
 
Data analysis and exploration also need to meet the following requirements:
 
The collected data must be diverse enough that the model's predictions hold up in a variety of situations.
The collected data must be unbiased, so that the model generalizes correctly at inference time.
The collected data must be abundant.
 
There are various tools for collecting data. Data sources can take the form of APIs or XML, CSV, or Excel files. In some cases we also need to mine or scrape data from online resources. Before scraping, check the scraping/mining policies of the third-party websites involved.
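As an illustration, the sketch below uses Python's standard csv module to load a made-up annotation file into a list of records; the file name and columns are hypothetical, and a real project would point this at its own export or API dump:

```python
import csv
import os
import tempfile

# Hypothetical sample data standing in for an exported annotation file.
sample = "image_path,label\nimg_001.jpg,cat\nimg_002.jpg,dog\n"

# Write a temporary file so the sketch is self-contained; in practice
# the CSV would come from an export, a crawl, or an API dump.
fd, path = tempfile.mkstemp(suffix=".csv")
with os.fdopen(fd, "w", newline="") as f:
    f.write(sample)

# Load each row as a dict keyed by the header columns.
with open(path, newline="") as f:
    rows = list(csv.DictReader(f))

os.remove(path)
```

Each entry in rows is then a dict such as {"image_path": "img_001.jpg", "label": "cat"}, which is easy to inspect or pass to a visualization tool.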
 
The deliverable of this stage is a folder containing the raw data, with subfolders containing the annotation files.
 
Related positions: data scientist, data analyst.
 


Data preparation/preprocessing/augmentation

              
The data preprocessing steps are driven mainly by the model's input requirements. Looking back at the research stage, revisit the input parameters and requirements of the chosen model/neural network architecture.
 
The preprocessing step converts the raw data into a format that can successfully train the model.
 
Data preprocessing includes (but is not limited to) the following steps:
 
Reformatting the data, including resizing images, modifying color channels, reducing noise, image enhancement, etc.
Data cleaning
Data standardization
 
Data augmentation is performed to increase the diversity of the acquired data. Image data can be augmented in the following ways:
 
Rotating the image by an arbitrary angle
Zooming in or out
Cropping the image
Flipping the image (horizontally or vertically)
Mean subtraction
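A minimal sketch of some of these augmentations, using NumPy on a small dummy array in place of a real image (arbitrary-angle rotation would need an interpolation library such as SciPy or Pillow; plain NumPy handles 90-degree steps):

```python
import numpy as np

rng = np.random.default_rng(0)
# A dummy 4x4 grayscale "image"; real inputs would be H x W x C arrays.
image = rng.integers(0, 256, size=(4, 4)).astype(np.float32)

flipped_h = image[:, ::-1]              # horizontal flip
flipped_v = image[::-1, :]              # vertical flip
rotated = np.rot90(image)               # 90-degree rotation
cropped = image[1:3, 1:3]               # center crop
mean_subtracted = image - image.mean()  # mean subtraction
```

Each transform yields a new training example, so a single labeled image can contribute several variants to the training set.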
 
The deliverable of this stage is a folder with subfolders labeled "training", "testing" and "validation", each containing annotation files.
 
Related position: Data Scientist
 


Implementation of the model

              
Usually, we can use the ready-made models provided by various online resources to simplify the implementation of the model. Most machine learning and deep learning frameworks (such as PyTorch or TensorFlow) provide pre-trained models that can be used to accelerate the implementation phase of the model.
 
These pre-trained models have been trained on large datasets and reflect the performance and structure of state-of-the-art neural network architectures.
 
Generally, we rarely need to implement a model from scratch. The following tasks are completed in the model implementation stage:
 
Removing the last layer of the neural network to repurpose the model for a specific task. For example, by removing the last layer of a ResNet architecture, the trained model can be used as the encoder in an encoder-decoder architecture.
Fine-tune the pre-trained model
 
The deliverable of this stage is a model ready for training.
 
Related positions: data scientist, machine learning engineer, computer vision engineer, NLP engineer, AI engineer.
 


Training

               
In the training stage, we train the model using the data prepared in the earlier data stages. Model training consists of passing the aggregated training data to the model to create a model capable of performing a specialized task.
 
Training the model involves passing the training data to it in batches and iterating for a specified number of epochs. In the early stages of training, the model's performance and accuracy may be unsatisfactory. But as the model keeps making predictions, comparing the predicted values with the expected values, and backpropagating the error through the network, it gradually improves at its task.
 
Before training begins, we must set hyperparameters and network parameters to control the efficiency of the model training phase.
 
Hyperparameters: values defined before neural network training begins. Set properly, they steer the training toward a good result. They affect machine learning and deep learning algorithms but are not affected by them, and their values do not change during training. Examples of hyperparameters include the regularization strength, the learning rate, and the number of layers.
 
Network parameters: the parts of the neural network that are not manually initialized. They are values inside the neural network, adjusted directly by the training process. An example of a network parameter is a weight within the neural network.
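The distinction can be made concrete with a toy training loop. The sketch below trains a single-layer logistic-regression "network" with NumPy on made-up data: the learning rate, epoch count, and batch size are hyperparameters fixed before training, while the weights w and bias b are the network parameters that training itself adjusts:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy binary classification data standing in for the training set.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(np.float64)

# Hyperparameters: chosen up front, unchanged during training.
learning_rate = 0.5
epochs = 20
batch_size = 32

# Network parameters: initialized once, then adjusted by training.
w = np.zeros(2)
b = 0.0

losses = []
for epoch in range(epochs):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        p = 1.0 / (1.0 + np.exp(-(xb @ w + b)))  # sigmoid predictions
        grad_w = xb.T @ (p - yb) / len(xb)       # gradient of the log loss
        grad_b = np.mean(p - yb)
        w -= learning_rate * grad_w              # parameter update step
        b -= learning_rate * grad_b
    # Record the full-dataset log loss once per epoch.
    p_all = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    eps = 1e-12
    losses.append(-np.mean(y * np.log(p_all + eps)
                           + (1 - y) * np.log(1 - p_all + eps)))
```

The recorded losses list is exactly the kind of per-epoch metric discussed below.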
 
During training, it is very important to record the metrics of each training run and each epoch. The metrics we usually need to collect are as follows:
Training accuracy
Validation accuracy
Training loss
Validation loss
 
To organize and visualize training metrics, we can use visualization tools such as Matplotlib and TensorBoard.
 
By visualizing the training metrics, we can identify common training pitfalls of machine learning models, such as underfitting and overfitting.
 
Underfitting: occurs when the machine learning algorithm cannot learn the patterns in the dataset. We can address it by using an algorithm or model better suited to the task, or by identifying more features in the data and presenting them to the algorithm.
Overfitting: occurs when the algorithm fits the patterns observed during training too closely, so that it cannot generalize accurately to unseen data. Overfitting can arise when the training data does not accurately represent the distribution of the test data. We can mitigate it by reducing the number of features in the training data and by reducing the complexity of the network through various techniques.
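Once metrics are recorded per epoch, such pitfalls can also be flagged programmatically. A minimal sketch with made-up loss curves: validation loss rising for several epochs while training loss keeps falling is a classic sign of overfitting:

```python
# Hypothetical per-epoch metrics as they might be recorded during training.
train_loss = [0.90, 0.60, 0.40, 0.28, 0.20, 0.15]
val_loss = [0.92, 0.65, 0.48, 0.45, 0.52, 0.61]

def looks_overfit(train, val, patience=2):
    """Flag overfitting when validation loss has risen for `patience`
    consecutive epochs while training loss kept falling."""
    rising = sum(1 for a, b in zip(val[-patience - 1:], val[-patience:])
                 if b > a)
    falling = sum(1 for a, b in zip(train[-patience - 1:], train[-patience:])
                  if b < a)
    return rising == patience and falling == patience

print(looks_overfit(train_loss, val_loss))  # True
```

Checks like this are also the basis of early stopping, where training halts as soon as validation loss stops improving.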
 
The deliverables of this stage are the trained model and the training metrics.
 
Related positions: Data Scientist, Machine Learning Engineer, Computer Vision Engineer, NLP Engineer, AI Engineer
 


Evaluation

              
At this stage, you already have a trained model; the next step is to evaluate its performance.
 
We evaluate the model using the "test data". The test data must never be shown to the model during training; it represents examples of the data the model will encounter in real use.
 
We can use the following evaluation strategies:
 
Confusion matrix (error matrix): provides a visual representation of the matches and mismatches between the actual classifications and the classifier's results. The confusion matrix is usually presented as a table in which the rows represent the true observations and the columns represent the classifier's inferences.
Precision and recall: performance metrics used to evaluate classification algorithms, visual search systems, and so on. Taking the evaluation of a visual search system (finding images similar to a query image) as an example, precision reflects how many of the returned results are relevant, while recall reflects how many of the relevant results in the dataset were returned.
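Both strategies can be computed in a few lines. The sketch below builds a 2x2 confusion matrix from made-up binary labels (1 = relevant, 0 = not relevant) and derives precision and recall from it:

```python
# Hypothetical evaluation results on ten test examples.
actual = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

pairs = list(zip(actual, predicted))
tp = sum(1 for a, p in pairs if a == 1 and p == 1)  # true positives
fp = sum(1 for a, p in pairs if a == 0 and p == 1)  # false positives
fn = sum(1 for a, p in pairs if a == 1 and p == 0)  # false negatives
tn = sum(1 for a, p in pairs if a == 0 and p == 0)  # true negatives

# 2x2 confusion matrix: rows = actual class, columns = predicted class.
confusion = [[tn, fp],
             [fn, tp]]

precision = tp / (tp + fp)  # how many returned results were relevant
recall = tp / (tp + fn)     # how many relevant results were returned
```

For multi-class problems the same idea extends to an N x N table, with per-class precision and recall read off its columns and rows.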
 
The deliverable at this stage is a document containing the evaluation results and the output of the evaluation strategy.
 
Related positions: data scientist, machine learning engineer, computer vision engineer, NLP engineer, AI engineer.
 
