1 Vectorization
In logistic regression, for example, z = w transpose times x plus b involves an inner product that you could implement with a for loop.
But in Python you can call NumPy's method instead: the single line z = np.dot(w, x) + b does it with vectorization, and you will find this is very fast.
Ng ran an experiment computing the inner product of two one-dimensional vectors of length one million: the vectorized version took about 1.5 milliseconds, while the for-loop version took more than 400 milliseconds.
So always remember to use vectorization and avoid explicit for loops whenever you can; your code will be much faster.
Both CPUs and GPUs have parallel instructions, sometimes called SIMD (single instruction, multiple data).
If you use built-in functions such as NumPy's np.dot, Python can take advantage of this parallelism and compute much faster.
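A minimal timing sketch of Ng's comparison (exact timings vary by machine, but the gap between the two versions is consistently large):

```python
import time
import numpy as np

# Two random 1-million-element vectors, as in Ng's experiment.
n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Vectorized inner product via NumPy's SIMD-backed routine.
tic = time.time()
c_vec = np.dot(a, b)
vec_ms = 1000 * (time.time() - tic)

# Explicit for-loop version of the same inner product.
tic = time.time()
c_loop = 0.0
for i in range(n):
    c_loop += a[i] * b[i]
loop_ms = 1000 * (time.time() - tic)

print(f"vectorized: {vec_ms:.2f} ms, for loop: {loop_ms:.2f} ms")
```

Both versions compute the same number; only the speed differs.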
2 More Vectorization Examples
Avoid for loops by making good use of the built-in functions in Python's NumPy library.
For example, the product of a matrix A and a vector v can be computed with np.dot(A, v). The elementwise exponential of a vector v is np.exp(v), and there are also np.log, np.abs, np.maximum(v, 0), and so on.
For operations such as v ** 2 or 1 / v, NumPy's elementwise operators handle them directly, again with no loop.
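The built-ins listed above can be tried out in a few lines (the small matrix and vector here are just illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])  # a small 2x2 matrix
v = np.array([1.0, -2.0])               # a length-2 vector

u = np.dot(A, v)        # matrix-vector product, shape (2,)
e = np.exp(v)           # elementwise exponential
r = np.maximum(v, 0)    # elementwise max with 0 (ReLU-style)
s = v ** 2              # elementwise square
inv = 1 / v             # elementwise reciprocal

print(u, e, r, s, inv)
```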
3 Vectorizing Logistic Regression
The derivative calculations for logistic regression should also be vectorized, with no for loop at all. The vectorized process is given in the figure.
The vectorized form of the computation of Z is np.dot(w.T, X) + b, where b is a real number; when summing a vector and a real number, Python automatically broadcasts the real number into a vector of the same dimension.
Here w is an n * 1 column vector, w.T is a 1 * n row vector, and X is an n * m matrix, so their product is a 1 * m vector; adding b (broadcast to 1 * m) gives a 1 * m vector Z. Finally, the prediction A is obtained by applying sigmoid to Z.
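A minimal sketch of this forward pass (the random data and the sizes n and m here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

n, m = 3, 4                       # n features, m examples (toy sizes)
rng = np.random.default_rng(0)
w = rng.standard_normal((n, 1))   # weights: n x 1 column vector
b = 0.5                           # bias: a real number
X = rng.standard_normal((n, m))   # data: n x m, one example per column

Z = np.dot(w.T, X) + b            # (1,n) @ (n,m) -> (1,m); b is broadcast
A = sigmoid(Z)                    # predictions, shape (1, m)
```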
The gradients over all m examples can also be computed with vectorization, all at once. On the left of the figure is the for-loop implementation, and on the right the vectorized implementation.
Here dZ is the derivative of the cost function with respect to the variable z, previously derived to equal the prediction minus the actual value, A - Y.
dw is the derivative of the cost function with respect to w, and db is the derivative of the cost function with respect to b; if you do not remember these logistic regression derivations, review the earlier lectures.
Although you should vectorize as much as possible, a for loop over the gradient-descent iterations themselves is unavoidable.
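The vectorized gradient computation described above can be sketched as follows (the random data and shapes are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

n, m = 3, 5                       # toy sizes: n features, m examples
rng = np.random.default_rng(1)
w = np.zeros((n, 1))              # weights, n x 1
b = 0.0                           # bias
X = rng.standard_normal((n, m))   # inputs, one example per column
Y = rng.integers(0, 2, size=(1, m)).astype(float)  # labels in {0, 1}

# One vectorized gradient computation over all m examples at once:
A = sigmoid(np.dot(w.T, X) + b)   # predictions, shape (1, m)
dZ = A - Y                        # shape (1, m)
dw = np.dot(X, dZ.T) / m          # shape (n, 1)
db = np.sum(dZ) / m               # scalar
```

A training loop would repeat this block and update w -= alpha * dw and b -= alpha * db; that outer loop over iterations is the one for loop you cannot remove.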
4 Broadcasting in Python
When you add a number to a vector, Python automatically expands the number to a vector of the same size and then adds elementwise.
When you add (or subtract, multiply, divide) an m*n matrix and a 1*n row vector, Python automatically copies the 1*n vector vertically into an m*n matrix and then operates elementwise.
When you add an m*1 column vector to an m*n matrix, Python automatically copies the m*1 vector horizontally into m*n and adds elementwise.
These are the main broadcasting rules used when implementing neural networks; for more detail, search for "broadcasting" in the NumPy documentation.
Understanding these NumPy behaviors helps you use matrix operations to make your programs more efficient; Ng also gives a percentage-calculation example in this section.
A.sum(axis = 0) sums vertically (down each column); axis = 1 sums horizontally (across each row).
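These broadcasting rules and the axis=0 sum can be demonstrated in a few lines; the nutrient matrix below is an illustrative stand-in for the numbers in Ng's percentage example:

```python
import numpy as np

# Illustrative 3x4 matrix: rows = nutrients, columns = foods.
A = np.array([[56.0,   0.0,  4.4, 68.0],
              [ 1.2, 104.0, 52.0,  8.0],
              [ 1.8, 135.0, 99.0,  0.9]])

cal = A.sum(axis=0)                        # vertical sum per column, shape (4,)
percentage = 100 * A / cal.reshape(1, 4)   # (3,4) / (1,4): row vector broadcast down

# The other broadcasting cases from above:
M = np.ones((2, 3))
v = np.array([[1.0, 2.0, 3.0]])  # 1x3 row vector, copied down the rows
u = np.array([[10.0], [20.0]])   # 2x1 column vector, copied across the columns
print(M + v)
print(M + u)
```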
5 A Note on Python/NumPy Vectors
NumPy and broadcasting allow us to perform many operations in a single line of code.
But if you are not familiar with all the subtleties of broadcasting, it can introduce very subtle errors and very strange bugs.
For example, you might expect adding a row vector and a column vector to raise an error, but it does not, and the result is not a simple elementwise addition.
These strange effects have their own internal logic, but if you are unfamiliar with it you may write strange bugs that are difficult to debug.
Ng suggests that when implementing a neural network you do not use variables of shape (n,); use shape (n, 1) instead.
For example, if the shape of a is (5,), then np.dot(a, a.T) gives a real number; a and its transpose both still have shape (5,).
If the shape of a is (5, 1), then np.dot(a, a.T) gives a 5*5 matrix: a has shape (5, 1) and a.T has shape (1, 5).
An array with a.shape == (5,) is a rank-1 array, neither a row vector nor a column vector. Many of the hard-to-debug bugs students run into come from rank-1 arrays.
In addition, if your code does many things and you do not remember or are unsure of a's shape, use assert(a.shape == (5, 1)) to check the dimensions of your matrices.
If you do get a shape of (5,), you can reshape it into (5, 1) or (1, 5); reshape is very fast, O(1) complexity, so feel free to use it without worry.
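A short sketch of the rank-1 pitfall and the assert/reshape fixes described above:

```python
import numpy as np

rng = np.random.default_rng(2)

a = rng.standard_normal(5)       # rank-1 array, shape (5,)
assert a.shape == (5,)
inner = np.dot(a, a.T)           # a.T is the same (5,) array; result is a scalar

b = rng.standard_normal((5, 1))  # proper column vector
assert b.shape == (5, 1) and b.T.shape == (1, 5)
outer = np.dot(b, b.T)           # (5,1) @ (1,5) -> a 5x5 matrix
assert outer.shape == (5, 5)

# Reshape a rank-1 array into an explicit column vector; O(1), no data copy.
c = a.reshape(5, 1)
assert c.shape == (5, 1)
```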
Andrew Ng, Deep Learning course notes 3: Python and Vectorization (Week 2)