This afternoon I had nothing to do, so I searched Baidu for recent work on pattern recognition and the latest progress in object detection. I learned a lot!
------------------------------------ AUTHOR: PKF
------------------------------------ time: 2016-1-20
------------------------------------ qq: 1327706646
1. The nature of deep learning
2. Why deep learning surpasses object detection based on traditional hand-crafted (prior) features
3. RCNN, FCNN, Caffe
4. The difference between fitting and interpolation
5. Fitting, approximation, interpolation, differentiation, limits... these are all successive means of approximating reality!!! Laplace, Lagrange, and Fourier were outstanding French all-round talents of the 18th to 19th centuries, contemporaries of Euler, and their work found applications in celestial mechanics: Gauss used least squares to find the unknown position of a planet (Ceres), and Lagrange, through his work on the three-body problem, found the Lagrange points of the solar system.
1. The nature of deep learning
Deep learning was rediscovered in 2012, when Hinton's research team won the ImageNet competition with a deep model. Deep learning, with CNNs as its representative, is now somewhat over-hyped. Borrowing from a summary article by Prof. Xiaogang Wang of the Chinese University of Hong Kong: deep learning is essentially traditional machine learning in which feature learning is made automatic and hierarchical during training. By stacking multi-level feature-extraction modules, the learned features become more general and more comprehensive, and that is what gives deep learning its effect!
http://blog.csdn.net/linj_m/article/details/46351053
From a mathematical point of view, the key to deep learning is that multilayer nonlinear mappings successfully disentangle these factors; for example, in the last hidden layer of a deep model, different neurons come to represent different factors. If this hidden layer is taken as the feature representation, then face recognition, pose estimation, expression recognition, and age estimation all become very simple, because the various factors now stand in simple linear relationships and no longer interfere with each other.
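A toy sketch of this untangling (my own example, not from the article): XOR labels are not linearly separable in the input space, but after one fixed nonlinear hidden layer they become a simple linear relationship, just as described above.

```python
import numpy as np

# XOR is not linearly separable in input space, but after one fixed
# nonlinear hidden layer the classes become linearly separable:
# the "factors" untangle in the hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR labels

# Hand-picked hidden layer: h1 fires on OR, h2 fires on AND.
W = np.array([[1.0, 1.0], [1.0, 1.0]])
b = np.array([-0.5, -1.5])
H = (X @ W + b > 0).astype(float)  # hidden features: [OR, AND]

# In hidden space a single linear unit (h1 - 2*h2 > 0) solves XOR.
pred = (H[:, 0] - 2 * H[:, 1] > 0).astype(int)
print(pred)  # [0 1 1 0], matching y
```

The hidden weights here are hand-picked for clarity; a trained network would learn an equivalent untangling.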
By iterating linear units with nonlinearities (raising the dimension), machine learning builds nonlinear polynomial models, and a polynomial can in turn be written in matrix-vector form. A periodic function can be expanded in trigonometric functions (the famous Fourier series), much as a smooth function can be expanded by the Taylor formula. Ultimately it comes down to polynomials, convex functions and optimization problems, with polynomial fitting used for prediction; common fitting approaches include logistic regression, least squares, and the method of Lagrange multipliers.
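As a small illustration of the "polynomial fitting reduces to matrix computation" point above, here is a least-squares polynomial fit written explicitly as a linear-algebra solve (a NumPy sketch of mine; the target function and degree are arbitrary choices):

```python
import numpy as np

# A smooth function approximated by a polynomial; the fit is nothing
# but a least-squares solve of the Vandermonde system A c = y.
x = np.linspace(-1, 1, 50)
y = np.sin(np.pi * x)

A = np.vander(x, 6)                      # columns: x^5, x^4, ..., x, 1
c = np.linalg.lstsq(A, y, rcond=None)[0]  # degree-5 least-squares fit

max_err = np.max(np.abs(A @ c - y))
print(max_err)  # maximum pointwise error of the degree-5 fit (small)
```

`np.polyfit(x, y, 5)` returns the same coefficients; the explicit Vandermonde form just makes the matrix computation visible.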
As the saying goes, the most beautiful things are the most concise; the best machine learning algorithms, deep learning included, are no exception.
2. Why deep learning surpasses object detection based on traditional hand-crafted (prior) features
A deep learning model means a neural network with a deep structure, consisting of many layers. Other commonly used machine learning models, such as support vector machines and boosting, are shallow structures. It has been proven theoretically that a three-layer neural network (input layer, output layer, and one hidden layer) can approximate any classification function. So why do we need deep models?
Theoretical studies show that, for certain tasks, if the depth of the model is insufficient, the number of computational units required grows exponentially. This means that although shallow models can express the same classification functions, they require far more parameters and training samples. A shallow model provides a local representation: it divides the high-dimensional image space into several local regions, and each local region stores at least one template obtained from the training data. The shallow model matches a test sample against these templates and predicts its category from the matching results. For example, in a support vector machine these templates are the support vectors; in a nearest-neighbor classifier the templates are all the training samples. As the complexity of the classification problem increases, the image space must be divided into more and more local regions, so more and more parameters and training samples are needed.
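A minimal sketch (my own toy data) of the template view described above, using a nearest-neighbor classifier that stores every training sample as a template:

```python
import numpy as np

# Nearest-neighbor as the extreme "template" model: every training
# sample is a stored template, and a test point copies the label of
# its closest match. (An SVM keeps only the support vectors instead.)
templates = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [4.8, 5.1]])
labels = np.array([0, 0, 1, 1])

def predict(x):
    # distance from x to every stored template; copy the winner's label
    d = np.linalg.norm(templates - x, axis=1)
    return labels[np.argmin(d)]

print(predict(np.array([0.2, -0.1])))  # 0
print(predict(np.array([4.5, 5.0])))   # 1
```

Note how the storage grows with the training set: harder problems need more templates, which is exactly the scaling problem the paragraph describes.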
The key to reducing the parameters of a deep model is reusing the computational units of the middle layers. Take face images as an example: a deep model can learn hierarchical feature representations. The bottom layer learns filters from raw pixels that capture local edges and textures; by combining those edge filters, the middle layers describe different types of facial parts; the highest layer describes the global appearance of the whole face. Deep learning thus provides a distributed representation of features. In the highest hidden layer, each neuron acts as an attribute classifier, for example male versus female, ethnicity, or hair color. Each neuron divides the image space in two, so a combination of N neurons can express 2^N local regions, whereas a shallow model would need at least 2^N templates to represent them. From this we can see that a deep model is both more expressive and more efficient.
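The 2^N counting argument can be checked directly with a toy example (mine, not the article's): N sign units, one per coordinate hyperplane, assign a distinct N-bit code to each of the 2^N orthants.

```python
from itertools import product

import numpy as np

# N binary neurons (hyperplane sign units) jointly label up to 2^N
# distinct regions, while a template-based shallow model would need
# one template per region.
N = 8
corners = np.array(list(product([-1.0, 1.0], repeat=N)))  # one point per orthant
codes = corners @ np.eye(N) > 0        # N axis-aligned hyperplanes x_i = 0
n_cells = len(np.unique(codes, axis=0))
print(n_cells)  # 256 distinct region codes from only 8 neurons
```

Eight units suffice where a purely local model would store 256 templates, which is the efficiency gap the paragraph points to.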
3. RCNN, FCNN, Caffe
Regions with CNN features
http://blog.csdn.net/kuaitoukid/article/details/46740477
https://www.zhihu.com/question/27982282/answer/39350629  RCNN implementation based on Caffe
http://www.cnblogs.com/louyihang-loves-baiyan/p/4885659.html  FCNN
http://caffe.berkeleyvision.org/  Caffe (Berkeley)
4. The difference between fitting and interpolation
Interpolation builds a polynomial that passes exactly through the given data points, while fitting finds a curve or hyperplane (for example a linear equation) that only approximates them.
Polynomial fitting means approximating a function by a polynomial function.
A common route is to expand the function with the Taylor formula (for instance as a Maclaurin series, or via Lagrange interpolation), reduce it to a polynomial in vector form, and then compute the result through matrix operations.
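To make the interpolation-versus-fitting distinction concrete, a short NumPy sketch (my own toy data): a degree n-1 polynomial interpolates n points exactly, while a low-degree fit only approximates them.

```python
import numpy as np

# Interpolation passes exactly through every data point;
# least-squares fitting only minimizes the overall error.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.0, 0.0, 2.0, 1.0])

interp = np.polyfit(x, y, len(x) - 1)   # degree n-1: exact interpolation
fit = np.polyfit(x, y, 1)               # degree 1: straight-line fit

print(np.max(np.abs(np.polyval(interp, x) - y)))  # ~0: hits every point
print(np.max(np.abs(np.polyval(fit, x) - y)))     # large: only approximates
```

The interpolating polynomial is unique for n distinct points; the straight line trades pointwise accuracy for a simpler model.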
http://bbs.csdn.net/topics/380043166 Matrix Calculation Fitting
http://blog.csdn.net/jairuschan/article/details/7517773/  fitting methods
http://zhidao.baidu.com/link?url=OImO6cRZ8NVvYqeiPG60C6OHbUOPfgXHYJfRuWLtu3aL1N9PQZN26YcJqO4r3BzesCxs3vOBVif9E7QGPDP8Za
http://blog.csdn.net/yihaizhiyan/article/details/7579506  linear discriminant analysis (LDA / Fisher)
http://blog.csdn.net/linoi/article/category/1406179/2
Baidu's search is built on Hadoop, while Google has moved past the Hadoop (MapReduce) model and runs its own set of frameworks, more advanced than Hadoop!
http://baike.baidu.com/link?url=MkxgXZLe_Kspwcgqdi7eqvq6tslp6kreg3jdaib2coxuutu0-0tmidne2gw1b9qfhuxi-86hotr3hoypftdyla  Fourier transform
http://daily.zhihu.com/story/3939307
http://www.guokr.com/post/463448/  Fourier transform animated-graph explanation
http://blog.sina.com.cn/s/blog_6163bdeb0102ehhg.html  Lagrange
http://blog.csdn.net/linj_m/article/details/16964461 least Squares
http://blog.csdn.net/ttransposition/article/details/41806601 DPM
http://blog.csdn.net/carson2005/article/details/6292905  LBP
http://blog.csdn.net/carson2005/article/details/7841443  HOG
http://www.cnblogs.com/louyihang-loves-baiyan/p/4839869.html  RCNN
http://www.deepmind.com/
http://36kr.com/p/220012.html
http://blog.csdn.net/linj_m/article/details/9897839
http://baike.baidu.com/link?url=5Swq06zW7f0abml0EyQff2fl8MRFX3IPI3-NPIkABGQKJ3LIIE097gCbfYcxcCCocWI8BaUZgI4xIzc4R1gZpa
http://365ykt.cn/index.aspx  Ykt
My view of deep learning: deep learning is machine learning made deep.