As the saying goes, the faintest ink is better than the best memory: I have always taken handwritten study notes, but I have never kept a blog. I am now honored to join the Zhejiang University Student AI Association, determined to learn AI-related technology from its excellent teachers and senior members, and to contribute to the association's operation and development. Since September, driven by research needs plus strong personal interest, I have been steadily studying machine learning and deep learning. I am now also responsible for the association's Deep Learning Paper Archive, a resource meant to make study materials easy for members to access. Handwritten notes are inconvenient to share, so I have started writing a blog; this is my first attempt, so please bear with its rough edges. I hope this blog can serve as a paper-reading reference for people interested in deep learning and spare them some detours. It will be updated irregularly as my own studies progress. In today's era of prolific deep learning research, papers appear quickly and unevenly; if you find errors, please contact me, and if you have better papers to recommend, please let me know as well. It would be greatly appreciated.
To get started, the papers in this blog were initially collected from other people's CSDN blogs, Cnblogs pages, GitHub repositories, and personal homepages. The current content comes mainly from the blog of our association's president, Gallant, to whom I express my thanks; links to the related references are given at the end of this post. As I work through the papers myself, I will try to write up (and organize) reading notes for every good paper I read, and I welcome corrections if there are errors.
Fundamentals of Deep Learning
Hecht-Nielsen R. Theory of the backpropagation neural network[J]. Neural Networks, 1988, 1(Supplement-1): 445-448. (BP neural network) [PDF]
Hinton G E, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006, 18(7): 1527-1554. (DBN, the beginning of deep learning) [PDF]
Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786): 504-507. (dimensionality reduction with autoencoders) [PDF]
Ng A. Sparse autoencoder[J]. CS294A Lecture Notes, 2011, 72(2011): 1-19. (sparse autoencoder) [PDF]
Vincent P, Larochelle H, Lajoie I, et al. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[J]. Journal of Machine Learning Research, 2010, 11(Dec): 3371-3408. (stacked denoising autoencoder, SAE) [PDF]

The Deep Learning Boom: From AlexNet to Capsules
Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]. Advances in Neural Information Processing Systems, 2012. (AlexNet) [PDF]
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014. (VGGNet) [PDF]
Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015. (GoogLeNet) [PDF]
Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the Inception architecture for computer vision[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 2818-2826. (Inception-v3) [PDF]
He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[J]. arXiv preprint arXiv:1512.03385, 2015. (ResNet) [PDF]
Chollet F. Xception: Deep learning with depthwise separable convolutions[J]. arXiv preprint arXiv:1610.02357, 2016. (Xception) [PDF]
Huang G, Liu Z, Weinberger K Q, et al. Densely connected convolutional networks[J]. arXiv preprint arXiv:1608.06993, 2016. (DenseNet) [PDF]
Hu J, Shen L, Sun G. Squeeze-and-excitation networks[J]. arXiv preprint arXiv:1709.01507, 2017. (SENet) [PDF]
Zhang X, Zhou X, Lin M, et al. ShuffleNet: An extremely efficient convolutional neural network for mobile devices[J]. arXiv preprint arXiv:1707.01083, 2017. (ShuffleNet) [PDF]
Sabour S, Frosst N, Hinton G E. Dynamic routing between capsules[C]. Advances in Neural Information Processing Systems, 2017: 3859-3869. (Capsules) [PDF]

Useful Tricks in Deep Learning
Srivastava N, Hinton G E, Krizhevsky A, et al. Dropout: A simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958. (dropout) [PDF]
Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[J]. arXiv preprint arXiv:1502.03167, 2015. (batch normalization) [PDF]
Lin M, Chen Q, Yan S. Network in network[J]. arXiv preprint arXiv:1312.4400, 2013. (global average pooling) [PDF]

Recurrent Neural Networks (RNN)
Mikolov T, Karafiát M, Burget L, et al. Recurrent neural network based language model[C]. Interspeech, 2010. (a classic paper combining RNNs with language modeling) [PDF]
Hochreiter S, Schmidhuber J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780. (the mathematical principles of LSTM) [PDF]
Chung J, Gulcehre C, Cho K H, et al. Empirical evaluation of gated recurrent neural networks on sequence modeling[J]. arXiv preprint arXiv:1412.3555, 2014. (the GRU network) [PDF]

Generative Adversarial Networks (GAN)
Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]. Advances in Neural Information Processing Systems, 2014: 2672-2680. (GAN) [PDF]
Mirza M, Osindero S. Conditional generative adversarial nets[J]. arXiv preprint arXiv:1411.1784, 2014. (CGAN) [PDF]
Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks[J]. arXiv preprint arXiv:1511.06434, 2015. (DCGAN) [PDF]
Denton E L, Chintala S, Fergus R. Deep generative image models using a Laplacian pyramid of adversarial networks[C]. Advances in Neural Information Processing Systems, 2015: 1486-1494. (LAPGAN) [PDF]
Chen X, Duan Y, Houthooft R, et al. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets[C]. Advances in Neural Information Processing Systems, 2016: 2172-2180. (InfoGAN) [PDF]
Arjovsky M, Chintala S, Bottou L. Wasserstein GAN[J]. arXiv preprint arXiv:1701.07875, 2017. (WGAN) [PDF]
Zhu J Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[J]. arXiv preprint arXiv:1703.10593, 2017. (CycleGAN) [PDF]
Yi Z, Zhang H, Tan P, et al. DualGAN: Unsupervised dual learning for image-to-image translation[J]. arXiv preprint arXiv:1704.02510, 2017. (DualGAN) [PDF]
Isola P, Zhu J Y, Zhou T, et al. Image-to-image translation with conditional adversarial networks[J]. arXiv preprint arXiv:1611.07004, 2016. (pix2pix) [PDF]

Transfer Learning
Fei-Fei L, Fergus R, Perona P. One-shot learning of object categories[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(4): 594-611. (one-shot learning) [PDF]
Larochelle H, Erhan D, Bengio Y. Zero-data learning of new tasks[C]. AAAI, 2008: 646-651. (zero-shot learning) [PDF]

Object Detection
Szegedy C, Toshev A, Erhan D. Deep neural networks for object detection[C]. Advances in Neural Information Processing Systems, 2013: 2553-2561. (early deep learning object detection) [PDF]
Girshick R, Donahue J, Darrell T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014. (R-CNN) [PDF]
He K, Zhang X, Ren S, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[C]. European Conference on Computer Vision. Springer International Publishing, 2014: 346-361. (SPPNet) [PDF]
Girshick R. Fast R-CNN[C]. Proceedings of the IEEE International Conference on Computer Vision, 2015: 1440-1448. (Fast R-CNN) [PDF]
Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[C]. Advances in Neural Information Processing Systems, 2015: 91-99. (Faster R-CNN) [PDF]
Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real-time object detection[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 779-788. (YOLO) [PDF]
Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector[C]. European Conference on Computer Vision. Springer International Publishing, 2016: 21-37. (SSD) [PDF]
Dai J, Li Y, He K, et al. R-FCN: Object detection via region-based fully convolutional networks[C]. Advances in Neural Information Processing Systems, 2016: 379-387. (R-FCN) [PDF]

Semantic Segmentation
Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 3431-3440. (the most classic, FCN) [PDF]
Chen L C, Papandreou G, Kokkinos I, et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. arXiv preprint arXiv:1606.00915, 2016. (DeepLab) [PDF]
Zhao H, Shi J, Qi X, et al. Pyramid scene parsing network[J]. arXiv preprint arXiv:1612.01105, 2016. (PSPNet) [PDF]
He K, Gkioxari G, Dollár P, et al. Mask R-CNN[J]. arXiv preprint arXiv:1703.06870, 2017. (Mask R-CNN) [PDF]
Hu R, Dollár P, He K, et al. Learning to segment every thing[J]. arXiv preprint arXiv:1711.10370, 2017. (an enhanced version of Mask R-CNN) [PDF]

Image Compression
Toderici G, O'Malley S M, Hwang S J, et al. Variable rate image compression with recurrent neural networks[C]. ICLR, 2016. (a classic paper on image compression, with an RNN model) [PDF]
Toderici G, Vincent D, Johnston N, et al. Full resolution image compression with recurrent neural networks[J]. arXiv preprint arXiv:1608.05148, 2016. (the proposed RNN network surpasses JPEG for the first time on the Kodak dataset) [PDF]
Baig M H, Koltun V, Torresani L. Learning to inpaint for image compression[C]. Advances in Neural Information Processing Systems, 2017. [PDF]
Jiang F, Tao W, Liu S, et al. An end-to-end compression framework based on convolutional neural networks[J]. (CNNs applied to image compression) [PDF]

Keypoint / Pose Detection
Wei S E, Ramakrishna V, Kanade T, et al. Convolutional pose machines[C]. CVPR, 2016. (a classic keypoint detection paper, runner-up in the 2016 MPII pose estimation benchmark; it is the model I used the first time I entered a Tianchi competition, the FashionAI clothing keypoint challenge) [PDF]
Newell A, Yang K, Deng J. Stacked hourglass networks for human pose estimation[C]. ECCV, 2016. (very well known; multi-scale features and fast, top-ranked on the 2016 MPII pose estimation benchmark, and also used by many teams in the FashionAI Tianchi competition) [PDF]
Wang W, Xu Y, Shen J, et al. Attentive fashion grammar network for fashion landmark detection and clothing category classification[C]. CVPR, 2018. (the latest work in the FashionAI area: it proposes two kinds of positional-relation grammars, a bidirectional convolutional RNN for message passing, and two attention mechanisms based on different ideas; very inventive and well worth reading) [PDF]
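The training tricks listed in the "Useful Tricks" section above are easy to state in code. As a refresher on the core idea of the dropout paper, here is a minimal NumPy sketch of inverted dropout; the function and parameter names are my own invention, not the paper's, and this is only an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def dropout_forward(x, p_drop=0.5, train=True, rng=None):
    """Inverted dropout: during training, zero each unit with probability
    p_drop and rescale survivors by 1/(1 - p_drop), so that the expected
    activation is unchanged and test-time inference needs no rescaling."""
    if not train or p_drop == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng()
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask

# At test time the input passes through untouched.
activations = np.ones((2, 4))
test_out = dropout_forward(activations, train=False)
```

The rescaling is the key design choice: because survivors are scaled up at training time, the same network can be used unmodified at test time.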
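The pose papers above (Convolutional Pose Machines, Stacked Hourglass) both predict one score heatmap per keypoint and read coordinates off the peaks. As a hedged illustration of that final decoding step, here is a minimal NumPy sketch; the names are mine, not any paper's API.

```python
import numpy as np

def decode_heatmaps(heatmaps):
    """Given per-keypoint score maps of shape (K, H, W), return the
    (x, y) pixel coordinates of each map's highest-scoring location."""
    k, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(k, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)

# Tiny demo: two 5x5 heatmaps with known peaks.
hm = np.zeros((2, 5, 5))
hm[0, 1, 3] = 1.0  # keypoint 0 peaks at row 1, col 3 -> (x=3, y=1)
hm[1, 4, 0] = 1.0  # keypoint 1 peaks at row 4, col 0 -> (x=0, y=4)
coords = decode_heatmaps(hm)
```

Real systems refine this argmax with sub-pixel offsets, but the peak-picking above is the common core.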
Reference Links
http://blog.csdn.net/qq_21190081/article/details/69564634
http://github.com/michuanhaohao/paper
http://github.com/redditsota/state-of-the-art-result-for-machine-learning-problems
http://github.com/songrotek/Deep-learning-papers-reading-roadmap
http://github.com/kjw0612/awesome-deep-vision
-------------------------------------------
Youzhi Gu, Master student
Foresight Control Center
College of Control Science & Engineering
Zhejiang University
Email: shaoniangu@163.com, 1147071472@qq.com