Unsupervised Feature Learning: Representative Literature on Sparse Coding, Deep Learning, and ICA (Part 1)


Reproduced from http://blog.csdn.net/zhoutongchi/article/details/8191991

Feature learning methods and literature applied to action recognition and image classification. (The models borrow from one another and the algorithms are mutually adopted; there is no clear dividing line between them. Biologically inspired literature is included.)

% This research focuses on the ICA model, deep learning, and sparse coding.

1) Sparse coding (sparse coding, auto-encoding, and recursive coding; a minimal inference sketch follows this list):

[1] B. Olshausen and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 1996.

[2] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In NIPS, 2007.

[3] B. A. Olshausen. Sparse coding of time-varying natural images. In ICA, 2000.

[4] T. Dean, G. Corrado, and R. Washington. Recursive sparse, spatiotemporal coding. In Proc. IEEE Int. Workshop on Multimedia Information Processing and Retrieval, 2009.

[5] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear spatial pyramid matching using sparse coding for image classification. In CVPR, 2009.

[6] S. Wang, L. Zhang, Y. Liang, and Q. Pan. Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch image synthesis. In CVPR, 2012.

[7] Yan Zhu, Xu Zhao, Yun Fu, and Yuncai Liu. Sparse coding on local spatial-temporal volumes for human action recognition. In ACCV 2010, Part II, LNCS 6493. (From Shanghai Jiao Tong University; uses 3D-HOG feature descriptors rather than 3D-SIFT for the sparse coding.)
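
As background for the entries above: given a signal x and an (often overcomplete) dictionary D, sparse coding infers a code a by minimizing (1/2)*||x - D*a||^2 + lambda*||a||_1. Below is a minimal NumPy sketch of this inference step using ISTA (iterative shrinkage-thresholding); the function name and toy data are illustrative assumptions, not taken from any of the cited papers.

import numpy as np

def sparse_code_ista(x, D, lam=0.1, n_iter=200):
    # Illustrative sketch, not from the cited papers.
    # Minimize (1/2)*||x - D a||^2 + lam*||a||_1 by ISTA.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the reconstruction term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Toy usage: random dictionary with unit-norm atoms, random signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(64)
a = sparse_code_ista(x, D)
print("nonzero coefficients:", np.count_nonzero(a))

Dictionary learning itself (as in [1] and [2]) alternates this inference step with updates to D.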

2) ICA (ISA) models (a minimal ISA feature-extraction sketch follows the list):

[1] A. Hyvarinen, J. Hurri, and P. Hoyer. Natural Image Statistics. Springer, 2009.

[2] Alireza Fathi and Greg Mori. Action recognition by learning mid-level motion features. In CVPR, 2008.

[3] A. Hyvarinen and P. Hoyer. Emergence of phase- and shift-invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 2000.

[4] A. Coates, H. Lee, and A. Y. Ng. An analysis of single-layer networks in unsupervised feature learning. In AISTATS 14, 2011. (Learns features without a supervised detection step, in a bag-of-words style: image patches are clustered by similarity to form a dictionary, each patch is encoded nonlinearly from its distances to the dictionary entries, and images are then represented sparsely.)

[5] Q. V. Le, W. Zou, S. Y. Yeung, and A. Y. Ng. Learning hierarchical spatio-temporal features for action recognition with independent subspace analysis. In CVPR, 2011.

[6] Q. V. Le, J. Ngiam, Z. Chen, D. Chia, P. W. Koh, and A. Y. Ng. Tiled convolutional neural networks. In NIPS, 2010.

[7] M. S. Lewicki and T. J. Sejnowski. Learning overcomplete representations. Neural Computation, 2000.

[8] L. Ma and L. Zhang. Overcomplete topographic independent component analysis. Elsevier, 2008.

[9] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical Report, U. Toronto, 2009.

% Literature not on recognition/classification; introduces Markov methods for subspace estimation and new feature combinations.

[10] Nicolas Brunel, Wojciech Pieczynski, and Stephane Derrode. Copulas in vectorial hidden Markov chains for multicomponent image segmentation. In ICASSP '05, Philadelphia, USA, March 2005. (Not a recognition/classification paper, but it contains an algorithm useful for estimating subspaces; the ICA model can be brought in.)

[11] Xiaomei Qu. Feature extraction by combining independent subspaces analysis and von techniques. In International Conference on System Science and Engineering, 2012.

[12] Pietro Berkes, Frank Wood, and Jonathan Pillow. Characterizing neural dependencies with copula models. In NIPS, 2008.

[13] Y-Lan Boureau, Jean Ponce, and Yann LeCun. A theoretical analysis of feature pooling in visual recognition. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 2010. (Introduces various pooling schemes and the concepts behind them, used to form sparse representations and generate robust features.)
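
To make the ISA model of [3] and [5] concrete: ISA applies learned linear filters to whitened input and then pools the squared responses within fixed subspaces (square-root pooling, one of the schemes analyzed in [13]), which produces the phase- and shift-invariance discussed in [3]. Below is a minimal NumPy sketch of the forward pass; the filters are assumed already learned, and all names are illustrative.

import numpy as np

def isa_features(X, W, subspace_size=2):
    # Illustrative ISA forward pass, not from the cited code.
    # X: (n_samples, n_inputs) whitened patches; W: (n_filters, n_inputs).
    s = X @ W.T                                # linear filter responses
    n_sub = W.shape[0] // subspace_size
    pooled = s.reshape(X.shape[0], n_sub, subspace_size)
    return np.sqrt((pooled ** 2).sum(axis=2))  # energy of each subspace

# Toy usage with random (untrained) filters on random "patches".
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 256))   # e.g. flattened 16x16 patches
W = rng.standard_normal((100, 256))  # 100 filters -> 50 two-filter subspaces
print(isa_features(X, W).shape)      # (10, 50)

Training chooses W (under an orthonormality constraint) so that these pooled activations are sparse over the data; in outline, [5] stacks two such layers for video blocks.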

3) Deep learning (closely related to ICA and RBMs; multi-layer learning; a greedy layer-wise pretraining sketch follows the list):

[1] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In NIPS, 2006.

[2] Alessio Plebe. A model of the response of visual area V2 to combinations of orientations. Network: Computation in Neural Systems, September 2012; 23(3): 105-122. (Involves simulating perception in the human visual cortex (V1, V2, V3, V4, V5); there are many such papers, based mainly on monkey and cat experiments.)

[3] G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.

[4] H. Lee, R. Grosse, R. Ranganath, and A. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.

[5] Yann LeCun, Koray Kavukcuoglu, and Clement Farabet. Convolutional networks and applications in vision. In Proc. International Symposium on Circuits and Systems (ISCAS '10), 2010.

[6] Pierre Sermanet, Soumith Chintala, and Yann LeCun. Convolutional neural networks applied to house numbers digit classification. In ICPR, 2012.

[7] Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeff Dean, and Andrew Y. Ng. Building high-level features using large scale unsupervised learning. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, 2012.

[8] A. Hyvarinen and P. Hoyer. Topographic independent component analysis as a model of V1 organization and receptive fields. Neural Computation, 2001.

[9] Q. V. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, and A. Y. Ng. On optimization methods for deep learning. In ICML, 2011.

[10] H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief net model for visual area V2. In NIPS, 2008.

[11] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In ICCV, 2009, 2146-2153.

[12] Bo Chen. Deep learning of invariant spatio-temporal features from video. Dissertation, 2010.

[13] Jiquan Ngiam, Zhenghao Chen, Pang Wei Koh, and Andrew Y. Ng. Learning deep energy models. In Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA, 2011.
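
To illustrate the greedy layer-wise idea of [1]: each layer is trained on its own as an autoencoder over the codes produced by the layer below, and the trained encoders are then stacked (optionally followed by supervised fine-tuning). Below is a minimal NumPy sketch using tied-weight sigmoid autoencoders; all names, sizes, and hyperparameters are illustrative assumptions, not the settings of the cited papers.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, n_epochs=50):
    # Illustrative sketch: one tied-weight sigmoid autoencoder layer,
    # trained by full-batch gradient descent on squared reconstruction error.
    W = rng.standard_normal((X.shape[1], n_hidden)) * 0.01
    for _ in range(n_epochs):
        H = sigmoid(X @ W)             # encode
        R = H @ W.T                    # decode (tied weights, linear output)
        E = R - X                      # reconstruction error
        dH = (E @ W) * H * (1 - H)     # backprop through the encoder
        gW = X.T @ dH + E.T @ H        # gradient w.r.t. the tied weights
        W -= lr * gW / X.shape[0]
    return W

def greedy_pretrain(X, layer_sizes):
    # Train each layer in turn on the previous layer's codes.
    weights, H = [], X
    for n_hidden in layer_sizes:
        W = train_autoencoder(H, n_hidden)
        weights.append(W)
        H = sigmoid(H @ W)             # codes become the next layer's input
    return weights

X = rng.standard_normal((100, 32))
stack = greedy_pretrain(X, [16, 8])
print([W.shape for W in stack])        # [(32, 16), (16, 8)]

[1] uses RBMs or autoencoders as the per-layer learner; the stacking logic is the same either way.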

The following models (CRBM and slow features) are not a research focus here, but their algorithms are useful for reference.

4) CRBM (many papers, including doctoral theses; a CD-1 training sketch follows the list):

[1] G. Hinton. A practical guide to training restricted Boltzmann machines. Technical Report, U. of Toronto, 2010.

[2] G. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In ECCV, 2010.

[3] M. Norouzi, M. Ranjbar, and G. Mori. Stacks of convolutional restricted Boltzmann machines for shift-invariant feature learning. In CVPR, 2009.

[4] R. Memisevic and G. Hinton. Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural Computation, 2010.
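
CRBMs are trained with the same contrastive-divergence procedure as plain RBMs, so as a reference point here is a minimal NumPy sketch of one CD-1 update for a binary RBM, following the general recipe described in [1]; the parameter names and toy batch are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(V, W, b, c, lr=0.05):
    # Illustrative sketch: one CD-1 step for a binary RBM.
    # V: (n_samples, n_visible) batch of binary data.
    ph = sigmoid(V @ W + c)                        # positive phase
    h = (rng.random(ph.shape) < ph).astype(float)  # sample hidden states
    pv = sigmoid(h @ W.T + b)                      # reconstruct visibles
    ph2 = sigmoid(pv @ W + c)                      # re-infer hiddens
    n = V.shape[0]
    W += lr * (V.T @ ph - pv.T @ ph2) / n          # approximate gradient
    b += lr * (V - pv).mean(axis=0)
    c += lr * (ph - ph2).mean(axis=0)
    return W, b, c

# Toy usage: 6 visible units, 4 hidden units, random binary data.
n_v, n_h = 6, 4
W = rng.standard_normal((n_v, n_h)) * 0.01
b, c = np.zeros(n_v), np.zeros(n_h)
V = (rng.random((20, n_v)) < 0.5).astype(float)
for _ in range(100):
    W, b, c = cd1_update(V, W, b, c)

The convolutional variants in [2] and [3] replace the dense matrix products with convolutions and add pooling, but are trained with essentially this update.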

 

5) Slow features (slow feature analysis, from Germany; representative literature):

This newer approach is based on the idea of slowness across adjacent frames; a minimal linear SFA sketch follows the reference.

[1] P. Berkes and L. Wiskott. Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision, 2005.
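
To make the slowness idea concrete: linear SFA whitens the input time series and then picks out unit-variance directions whose outputs vary as slowly as possible across adjacent frames, i.e. the smallest-eigenvalue directions of the covariance of the temporal differences. A minimal NumPy sketch with a toy signal; all names are illustrative assumptions, not from [1].

import numpy as np

def sfa(X, n_features=2):
    # Illustrative sketch of linear slow feature analysis.
    # X: (n_steps, n_dims) time series.
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    white = eigvec / np.sqrt(eigval)       # whitening transform (columns)
    Z = X @ white                          # whitened signal
    dcov = np.cov(np.diff(Z, axis=0), rowvar=False)
    dval, dvec = np.linalg.eigh(dcov)      # ascending: slowest first
    return white @ dvec[:, :n_features]    # projection onto slow directions

# Toy usage: a slow sine hidden next to fast noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10 * np.pi, 2000)
X = np.column_stack([np.sin(0.1 * t) + 0.1 * rng.standard_normal(t.size),
                     rng.standard_normal(t.size)])
W = sfa(X, n_features=1)
y = (X - X.mean(axis=0)) @ W               # the extracted slow feature

[1] applies the quadratic (expanded) version of this procedure to natural image sequences and recovers complex-cell-like units.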
