Deep Learning Reading List

Source: Internet
Author: User
Tags: nets
Reading List

List of reading lists and survey papers:

Books

- Deep Learning, Yoshua Bengio, Ian Goodfellow, Aaron Courville, MIT Press, in preparation.

Review Papers

- Representation Learning: A Review and New Perspectives, Yoshua Bengio, Aaron Courville, Pascal Vincent, arXiv, 2012.
- The monograph or review paper Learning Deep Architectures for AI (Foundations & Trends in Machine Learning, 2009).
- Deep Machine Learning: A New Frontier in Artificial Intelligence Research, a survey paper by Itamar Arel, Derek C. Rose, and Thomas P. Karnowski.
- Graves, A. (2012). Supervised Sequence Labelling with Recurrent Neural Networks (Vol. 385). Springer.
- Schmidhuber, J. (2014). Deep Learning in Neural Networks: An Overview. 850+ references. http://arxiv.org/abs/1404.7828; PDF & LaTeX source & complete public BibTeX file at http://www.idsia.ch/~juergen/deep-learning-overview.html.
- LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521, no. 7553 (2015): 436-444.

Reinforcement Learning

- Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. "Playing Atari with Deep Reinforcement Learning." arXiv preprint arXiv:1312.5602 (2013).
- Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu. "Recurrent Models of Visual Attention." arXiv e-print, 2014.

Computer Vision

- ImageNet Classification with Deep Convolutional Neural Networks, Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, NIPS 2012.
- Going Deeper with Convolutions, Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, 19 September 2014.
- Learning Hierarchical Features for Scene Labeling, Clement Farabet, Camille Couprie, Laurent Najman and Yann LeCun, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013.
- Learning Convolutional Feature Hierarchies for Visual Recognition, Koray Kavukcuoglu, Pierre Sermanet, Y-Lan Boureau, Karol Gregor, Michaël Mathieu and Yann LeCun, Advances in Neural Information Processing Systems (NIPS 2010), 23.
- Graves, Alex, et al. "A novel connectionist system for unconstrained handwriting recognition." Pattern Analysis and Machine Intelligence, IEEE Transactions on 31.5 (2009): 855-868.
- Cireşan, D. C., Meier, U., Gambardella, L. M., & Schmidhuber, J. (2010). Deep, big, simple neural nets for handwritten digit recognition. Neural Computation, 22(12), 3207-3220.
- Ciresan, Dan, Ueli Meier, and Jürgen Schmidhuber. "Multi-column deep neural networks for image classification." Computer Vision and Pattern Recognition (CVPR), IEEE Conference on. IEEE, 2012.
- Ciresan, D., Meier, U., Masci, J., & Schmidhuber, J. (2011, July). A committee of neural networks for traffic sign classification. In Neural Networks (IJCNN), The International Joint Conference on (pp. 1918-1921). IEEE.

NLP and Speech

- Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing, Antoine Bordes, Xavier Glorot, Jason Weston and Yoshua Bengio (2012), in: Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS).
- Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. Socher, R., Huang, E. H., Pennington, J., Ng, A. Y., and Manning, C. D. (2011a). In NIPS'2011.
- Semi-supervised recursive autoencoders for predicting sentiment distributions. Socher, R., Pennington, J., Huang, E. H., Ng, A. Y., and Manning, C. D. (2011b). In EMNLP'2011.
- Mikolov Tomáš: Statistical Language Models Based on Neural Networks. PhD thesis, Brno University of Technology, 2012.
- Graves, Alex, and Jürgen Schmidhuber. "Framewise phoneme classification with bidirectional LSTM and other neural network architectures." Neural Networks 18.5 (2005): 602-610.
- Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. "Distributed representations of words and phrases and their compositionality." In Advances in Neural Information Processing Systems, pp. 3111-3119.
- K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, Y. Bengio. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. EMNLP 2014.
- Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." Advances in Neural Information Processing Systems. 2014.

Disentangling Factors and Variations with Depth

- Goodfellow, Ian, et al. "Measuring invariances in deep networks." Advances in Neural Information Processing Systems 22 (2009): 646-654.
- Bengio, Yoshua, et al. "Better mixing via deep representations." arXiv preprint arXiv:1207.4404 (2012).
- Xavier Glorot, Antoine Bordes and Yoshua Bengio, Domain adaptation for large-scale sentiment classification: a deep learning approach, in: Proceedings of the Twenty-eighth International Conference on Machine Learning (ICML'11), pages 97-110, 2011.

Transfer Learning and Domain Adaptation

- Raina, Rajat, et al. "Self-taught learning: transfer learning from unlabeled data." Proceedings of the 24th International Conference on Machine Learning. ACM, 2007.
- Xavier Glorot, Antoine Bordes and Yoshua Bengio, Domain adaptation for large-scale sentiment classification: a deep learning approach, in: Proceedings of the Twenty-eighth International Conference on Machine Learning (ICML'11), pages 97-110, 2011.
- R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu and P. Kuksa. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493-2537, 2011.
- Mesnil, Grégoire, et al. "Unsupervised and transfer learning challenge: a deep learning approach." Unsupervised and Transfer Learning Workshop, in conjunction with ICML.
- Ciresan, D. C., Meier, U., & Schmidhuber, J. (2012, June). Transfer learning for Latin and Chinese characters with deep neural networks. In Neural Networks (IJCNN), The International Joint Conference on (pp. 1-6). IEEE.
- Goodfellow, Ian, Aaron Courville, and Yoshua Bengio. "Large-scale feature learning with spike-and-slab sparse coding." ICML 2012.

Practical Tricks and Guides

- "Improving neural networks by preventing co-adaptation of feature detectors." Hinton, Geoffrey E., et al. arXiv preprint arXiv:1207.0580 (2012).
- Practical recommendations for gradient-based training of deep architectures, Yoshua Bengio, U. Montreal, arXiv report 1206.5533, Lecture Notes in Computer Science Volume 7700, Neural Networks: Tricks of the Trade, Second Edition, Editors: Grégoire Montavon, Geneviève B. Orr, Klaus-Robert Müller, 2012.
- A Practical Guide to Training Restricted Boltzmann Machines, by Geoffrey Hinton.

Sparse Coding

- Emergence of simple-cell receptive field properties by learning a sparse code for natural images, Bruno Olshausen, Nature 1996.
- Kavukcuoglu, Koray, Marc'Aurelio Ranzato, and Yann LeCun. "Fast inference in sparse coding algorithms with applications to object recognition." arXiv preprint arXiv:1010.3467 (2010).
- Goodfellow, Ian, Aaron Courville, and Yoshua Bengio. "Large-scale feature learning with spike-and-slab sparse coding." ICML 2012.
- Efficient sparse coding algorithms. Honglak Lee, Alexis Battle, Rajat Raina and Andrew Y. Ng. In NIPS 19, 2007.
- "Sparse coding with an overcomplete basis set: A strategy employed by V1?" Olshausen, Bruno A., and David J. Field. Vision Research 37.23 (1997): 3311-3326.

Foundation Theory and Motivation

- Hinton, Geoffrey E. "Deterministic Boltzmann learning performs steepest descent in weight-space." Neural Computation 1.1 (1989): 143-150.
- Bengio, Yoshua, and Samy Bengio. "Modeling high-dimensional discrete data with multi-layer neural networks." Advances in Neural Information Processing Systems 12 (2000): 400-406.
- Bengio, Yoshua, et al. "Greedy layer-wise training of deep networks." Advances in Neural Information Processing Systems 19 (2007): 153.
- Bengio, Yoshua, Martin Monperrus, and Hugo Larochelle. "Nonlocal estimation of manifold structure." Neural Computation 18.10 (2006): 2509-2528.
- Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. "Reducing the dimensionality of data with neural networks." Science 313.5786 (2006): 504-507.
- Marc'Aurelio Ranzato, Y-Lan Boureau, and Yann LeCun. "Sparse feature learning for deep belief networks." Advances in Neural Information Processing Systems 20 (2007): 1185-1192.
- Bengio, Yoshua, and Yann LeCun. "Scaling learning algorithms towards AI." Large-Scale Kernel Machines 34 (2007).
- Le Roux, Nicolas, and Yoshua Bengio. "Representational power of restricted Boltzmann machines and deep belief networks." Neural Computation 20.6 (2008): 1631-1649.
- Sutskever, Ilya, and Geoffrey Hinton. "Temporal-kernel recurrent neural networks." Neural Networks 23.2 (2010): 239-243.
- Le Roux, Nicolas, and Yoshua Bengio. "Deep belief networks are compact universal approximators." Neural Computation 22.8 (2010): 2192-2207.
- Bengio, Yoshua, and Olivier Delalleau. "On the expressive power of deep architectures." Algorithmic Learning Theory. Springer Berlin/Heidelberg, 2011.
- Montufar, Guido F., and Jason Morton. "When does a mixture of products contain a product of mixtures?" arXiv preprint arXiv:1206.0387 (2012).
- Montúfar, Guido, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. "On the number of linear regions of deep neural networks." arXiv preprint arXiv:1402.1869 (2014).

Supervised Feedforward Neural Networks

- The Manifold Tangent Classifier, Salah Rifai, Yann Dauphin, Pascal Vincent, Yoshua Bengio and Xavier Muller, in: NIPS'2011.
- "Discriminative learning of sum-product networks." Gens, Robert, and Pedro Domingos, NIPS 2012 Best Student Paper.
- Goodfellow, I., Warde-Farley, D., Mirza, M., Courville, A., and Bengio, Y. (2013). Maxout networks. Technical report, Université de Montréal.
- Hinton, Geoffrey E., et al. "Improving neural networks by preventing co-adaptation of feature detectors." arXiv preprint arXiv:1207.0580 (2012).
- Wang, Sida, and Christopher Manning. "Fast dropout training." In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 118-126.
- Glorot, Xavier, Antoine Bordes, and Yoshua Bengio. "Deep sparse rectifier neural networks." In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. JMLR W&CP Volume 15, pp. 315-323.
- ImageNet Classification with Deep Convolutional Neural Networks, Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, NIPS 2012.

Large Scale Deep Learning

- Building High-level Features Using Large Scale Unsupervised Learning, Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeffrey Dean, and Andrew Y. Ng, ICML 2012.
- Bengio, Yoshua, et al. "Neural probabilistic language models." Innovations in Machine Learning (2006): 137-186. Section 3 of this paper discusses asynchronous SGD.
- Dean, Jeffrey, et al. "Large scale distributed deep networks." Advances in Neural Information Processing Systems. 2012.

Recurrent Networks

- Training Recurrent Neural Networks, Ilya Sutskever, PhD thesis, 2012.
- Bengio, Yoshua, Patrice Simard, and Paolo Frasconi. "Learning long-term dependencies with gradient descent is difficult." Neural Networks, IEEE Transactions on 5.2 (1994): 157-166.
- Mikolov Tomáš: Statistical Language Models Based on Neural Networks. PhD thesis, Brno University of Technology, 2012.
- Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural Computation 9.8 (1997): 1735-1780.
- Hochreiter, S., Bengio, Y., Frasconi, P., & Schmidhuber, J. (2001). Gradient flow in recurrent nets: the difficulty of learning long-term dependencies.
- Schmidhuber, J. (1992). Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2), 234-242.
- Graves, A., Fernández, S., Gomez, F., & Schmidhuber, J. (2006, June). Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning (pp. 369-376). ACM.

Hyper Parameters

- "Practical Bayesian optimization of machine learning algorithms", Jasper Snoek, Hugo Larochelle, Ryan Adams, NIPS 2012.
- Random search for hyper-parameter optimization, James Bergstra and Yoshua Bengio (2012), in: Journal of Machine Learning Research, 13 (281-305).
- Algorithms for hyper-parameter optimization, James Bergstra, Rémy Bardenet, Yoshua Bengio and Balázs Kégl, in: NIPS'2011, 2011.

Optimization

- Training deep and recurrent neural networks with Hessian-free optimization, James Martens and Ilya Sutskever, Neural Networks: Tricks of the Trade, 2012.
- Schaul, Tom, Sixin Zhang, and Yann LeCun. "No more pesky learning rates." arXiv preprint arXiv:1206.1106 (2012).
- Le Roux, Nicolas, Pierre-Antoine Manzagol, and Yoshua Bengio. "Topmoumoute online natural gradient algorithm." Neural Information Processing Systems (NIPS).
- Bordes, Antoine, Léon Bottou, and Patrick Gallinari. "SGD-QN: careful quasi-Newton stochastic gradient descent." The Journal of Machine Learning Research 10 (2009): 1737-1754.
- Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS'10). Society for Artificial Intelligence and Statistics.
- Glorot, Xavier, Antoine Bordes, and Yoshua Bengio. "Deep sparse rectifier neural networks." Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. JMLR W&CP Volume 15.
- "Deep learning via Hessian-free optimization." Martens, James. Proceedings of the 27th International Conference on Machine Learning (ICML). Vol. 951.
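Among the hyper-parameter papers above, Bergstra and Bengio's JMLR article argues that independently sampling configurations at random often beats grid search. As a minimal sketch of that idea (not code from any cited paper; the `toy_loss` objective, the dict-of-samplers search-space format, and all names here are hypothetical illustrations):

```python
import math
import random

def random_search(objective, space, n_trials=60, seed=0):
    """Sample n_trials random configurations and return the best one.

    space maps each hyper-parameter name to a callable that draws a
    sampled value given a random.Random instance.
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        # Each trial draws every hyper-parameter independently.
        params = {name: sample(rng) for name, sample in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

def toy_loss(p):
    # Stand-in for a validation loss; minimized near lr=1e-3, hidden=128.
    return (math.log10(p["lr"]) + 3) ** 2 + ((p["hidden"] - 128) / 512) ** 2

space = {
    # Log-uniform sampling for the learning rate, as the paper recommends
    # for scale-sensitive hyper-parameters.
    "lr": lambda rng: 10 ** rng.uniform(-5, -1),
    "hidden": lambda rng: rng.choice([64, 128, 256, 512]),
}

best, score = random_search(toy_loss, space, n_trials=100)
```

The key design point is that each hyper-parameter is drawn independently per trial, so important dimensions are explored at many distinct values instead of the few fixed levels a grid would allow.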
