This article introduces an introductory blog series by Hadrien Jean, a PhD candidate at the University of Paris, which aims to help beginners and advanced beginners master the linear algebra concepts underlying deep learning and machine learning. Mastering these skills will improve your ability to understand and apply a variety of data science algorithms.
For beginners, the theoretical treatment in Deep Learning (Ian Goodfellow, Yoshua Bengio, Aaron Courville) may be too terse. Building on the linear algebra of the book's second chapter, the author introduces the linear algebra foundations of machine learning; readers can consult the basic introduction to each section in the original book or its Chinese translation, or refer directly to the derivations on the blog. In addition to detailed derivations of the concepts, the author adds several examples and provides implementation code in Python/NumPy.
Blog Address: https://hadrienj.github.io/posts/Deep-Learning-Book-Series-Introduction/
GitHub Address: https://github.com/hadrienj/deepLearningBook-Notes
"Deep Learning" Chinese version: https://github.com/exacity/deeplearningbook-chinese
"Deep Learning" chapter II catalogue.
Blog directory.
Purely symbolic derivations of formulas can be too abstract, so in the blog the author generally presents concrete cases first and then gives the symbolic expression.
For example, colored arrays of numbers are used to explain the basic definitions:
The differences between scalars, vectors, matrices, and tensors.
Symbolic representation:
The blog then gives Python/NumPy sample code:
Building arrays with NumPy.
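In the same spirit as the blog's snippet, here is a minimal sketch of constructing each kind of object with NumPy (the variable names are illustrative, not taken from the blog):

```python
import numpy as np

s = np.array(6)                      # scalar: a 0-D array
v = np.array([1, 2, 3])              # vector: a 1-D array
A = np.array([[1, 2], [3, 4]])       # matrix: a 2-D array
T = np.arange(8).reshape(2, 2, 2)    # tensor: a 3-D (or higher) array

for name, arr in [("scalar", s), ("vector", v), ("matrix", A), ("tensor", T)]:
    print(name, "ndim:", arr.ndim, "shape:", arr.shape)
```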
For some operations and their relationships, the author provides intuitive, easy-to-understand diagrams:
The unit circle and the ellipse obtained by transforming it with matrix A; the vectors shown are the two eigenvectors of A.
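The idea behind this figure can be reproduced numerically: map points on the unit circle through a matrix and compare the result with the eigendecomposition. A sketch, assuming a symmetric example matrix A (not necessarily the one used in the blog):

```python
import numpy as np

# An assumed symmetric matrix; its eigenvectors give the ellipse's axes.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# Points on the unit circle, one column per point.
theta = np.linspace(0, 2 * np.pi, 200)
circle = np.vstack([np.cos(theta), np.sin(theta)])

ellipse = A @ circle                      # the circle mapped by A
radii = np.linalg.norm(ellipse, axis=0)   # distance of each image point

eigvals, eigvecs = np.linalg.eig(A)
print(sorted(eigvals))                    # for symmetric A, these match...
print(radii.min(), radii.max())           # ...the ellipse's semi-axis lengths
```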
For more complex objects, the author also provides function visualizations and interactive interfaces. For example, in the quadratic-form transformation problem of eigendecomposition, the blog visualizes the positive definite, negative definite, and indefinite cases of the quadratic form:
The interactive interface for the positive definite quadratic form:
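These three cases can also be checked numerically by evaluating the quadratic form f(x) = xᵀAx; the example matrices below are assumptions chosen for clarity, not the blog's exact choices:

```python
import numpy as np

def quadratic_form(A, x):
    # Evaluate f(x) = x^T A x.
    return x.T @ A @ x

pos_def = np.array([[2.0, 0.0], [0.0, 1.0]])    # f(x) > 0 for all x != 0
neg_def = -pos_def                               # f(x) < 0 for all x != 0
indef = np.array([[1.0, 0.0], [0.0, -1.0]])      # f(x) takes both signs

for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    for name, A in [("positive definite", pos_def),
                    ("negative definite", neg_def),
                    ("indefinite", indef)]:
        print(name, x, quadratic_form(A, x))
```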
The last section, on PCA (principal component analysis), is a comprehensive application of the concepts introduced earlier, which readers can work through as an independent exercise; a minimal NumPy sketch follows the captions below.
PCA as a coordinate system transformation problem.
The eigenvectors of the covariance matrix.
Rotating the data to obtain the maximum variance along one axis.
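Putting the three captioned steps together, here is a minimal PCA sketch in NumPy, assuming 2-D data (the synthetic data and variable names are illustrative):

```python
import numpy as np

# Synthetic, correlated 2-D data (an assumption for the demo).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[2.0, 1.2],
                                          [0.0, 0.8]])

X_centered = X - X.mean(axis=0)          # center the data
C = np.cov(X_centered, rowvar=False)     # covariance matrix

eigvals, eigvecs = np.linalg.eigh(C)     # eigh: C is symmetric
order = np.argsort(eigvals)[::-1]        # sort by decreasing variance
eigvecs = eigvecs[:, order]

X_rotated = X_centered @ eigvecs         # rotate into the eigenbasis
print(X_rotated.var(axis=0))             # max variance now on the first axis
```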
I wish you all happy studying!
Article reprinted from: Machine Heart
Resources | Learn the basics of linear algebra in deep learning with Python and NumPy