Representation Learning: A Review and New Perspectives


This review paper, published in 2013 by Bengio, Courville, and Vincent, analyzes why deep learning has such strong expressive power, allowing it to reach state-of-the-art results in image classification, object recognition, image segmentation, and object tracking.

The authors analyze representation learning from five aspects: priors, smoothness and the curse of dimensionality, distributed representations, depth and abstraction, and disentangling the factors of variation.

Regarding the fourth aspect in particular, depth and abstraction, the authors argue that making the network deeper promotes the re-use of features, and that a deep architecture builds abstraction layer by layer, thereby achieving a certain degree of invariance.

Starting from single-layer networks, the authors first explain feature learning from three perspectives: probabilistic models, auto-encoders, and manifold learning.

From the probabilistic-modeling perspective, feature learning amounts to finding a set of latent random variables that parsimoniously describe the distribution of the observed data. Writing the observed data as x and the latent variables as h, the goal is to infer the posterior distribution p(h|x). Probabilistic graphical models can be built as either directed or undirected graphs, but this approach is generally complex, and the parameters can be very difficult to estimate.
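
As a toy illustration of this inference (not from the paper; the model and all numbers are made up), here is a minimal Python sketch that computes the posterior p(h|x) for a single binary latent variable via Bayes' rule:

    import numpy as np

    # Toy model (values are made up): one binary latent variable h,
    # one binary observation x with likelihood p(x=1 | h).
    p_h = np.array([0.7, 0.3])          # prior p(h)
    p_x1_given_h = np.array([0.2, 0.9]) # p(x=1 | h)

    x = 1
    likelihood = p_x1_given_h if x == 1 else 1 - p_x1_given_h
    posterior = likelihood * p_h        # Bayes' rule: p(h|x) is proportional to p(x|h) p(h)
    posterior /= posterior.sum()        # normalize over h
    print(posterior)                    # inferred p(h | x=1)

In realistic models h is high-dimensional and this normalization is intractable, which is exactly why the parameters become so difficult to handle.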

The authors then introduce the basic principle of the auto-encoder and present the sparse auto-encoder, the denoising auto-encoder, and the contractive auto-encoder in turn; these variants differ mainly in the regularization term of the objective function.
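
The following numpy sketch makes that difference concrete. It is a toy single-layer auto-encoder with tied weights; the sizes, noise level, and penalty coefficients are arbitrary assumptions, not values from the paper. All three variants share the reconstruction loss and differ only in the extra term (or, for denoising, in the corrupted input):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Tiny single-layer auto-encoder with tied weights:
    # encoder h = s(Wx + b), decoder r = s(W'h + c).
    d, k = 20, 5
    W = rng.normal(scale=0.1, size=(k, d))
    b, c = np.zeros(k), np.zeros(d)

    def encode(x):
        return sigmoid(W @ x + b)

    def reconstruct(h):
        return sigmoid(W.T @ h + c)

    x = rng.random(d)
    h = encode(x)
    recon = np.sum((reconstruct(h) - x) ** 2)        # plain reconstruction loss

    # Sparse AE: an L1 penalty pushes hidden activations toward zero.
    sparse_loss = recon + 0.1 * np.sum(np.abs(h))

    # Denoising AE: corrupt the input, but reconstruct the clean target.
    x_noisy = x + rng.normal(scale=0.3, size=d)
    denoise_loss = np.sum((reconstruct(encode(x_noisy)) - x) ** 2)

    # Contractive AE: penalize the Frobenius norm of the encoder
    # Jacobian dh/dx = diag(h(1-h)) W for a sigmoid encoder.
    J = (h * (1 - h))[:, None] * W
    contract_loss = recon + 0.1 * np.sum(J ** 2)

    print(sparse_loss, denoise_loss, contract_loss)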

Finally, the authors interpret representation learning from the manifold perspective. Personally, I feel that manifold theory is very important: it directly drove the development of sparse representation theory and gave rise to countless models (including a paper I published). Its premise is the manifold hypothesis, which states that high-dimensional real-world data tends to concentrate near a low-dimensional manifold embedded in the ambient space. Representation learning therefore tries to learn such a low-dimensional representation of the high-dimensional input; in unsupervised learning, we try to learn the manifold supporting the data and find a parametric mapping onto it. Three families of methods for parameterizing this mapping are then described: the first builds a neighborhood graph, the second uses nonlinear manifold methods, and the third exploits the tangent space, as sketched below.
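
As a rough sketch of the first and third ideas (the neighborhood graph and the tangent space), assuming toy data sampled near a circle, one can connect each point to its nearest neighbors and estimate a local tangent direction by local PCA; this stands in for the paper's actual algorithms, which are more elaborate:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data near a 1-D manifold (a circle) embedded in 2-D space.
    t = rng.uniform(0.0, 2.0 * np.pi, 100)
    X = np.stack([np.cos(t), np.sin(t)], axis=1)
    X += 0.05 * rng.normal(size=X.shape)

    # Neighborhood graph: connect each point to its k nearest neighbors.
    k = 5
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)        # no self-edges
    neighbors = np.argsort(dists, axis=1)[:, :k]

    # Tangent space at one point, estimated by local PCA: the top
    # principal direction of the centered neighborhood.
    i = 0
    local = X[neighbors[i]] - X[neighbors[i]].mean(axis=0)
    _, _, Vt = np.linalg.svd(local, full_matrices=False)
    tangent = Vt[0]                        # estimated 1-D tangent at X[i]
    print(tangent)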

Finally, the most important problem in training deep models is overfitting. To improve the generalization ability of the model, the original training data can be deformed (data augmentation). In addition, convolutional neural networks with pooling operations preserve the topological structure of the input, and patch-based training is also helpful.
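
As a minimal example of such deformations (the 32x32 input and the crop size are hypothetical choices for illustration), here is a sketch of random flips and random crops, the kind of patch-based augmentation mentioned above:

    import numpy as np

    rng = np.random.default_rng(0)

    def augment(img, crop=24):
        # Random horizontal flip.
        if rng.random() < 0.5:
            img = img[:, ::-1]
        # Random crop of size crop x crop.
        h, w = img.shape
        top = rng.integers(0, h - crop + 1)
        left = rng.integers(0, w - crop + 1)
        return img[top:top + crop, left:left + crop]

    img = rng.random((32, 32))     # stand-in for a 32x32 grayscale image
    patch = augment(img)           # one deformed training example
    print(patch.shape)             # (24, 24)

Each pass over the training set then sees a slightly different version of every image, which enlarges the effective training set without collecting new data.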




