Stanford CS231n Assignment Code (Chinese): Assignment 1, Q4

Source: Internet
Author: User
CS231n Assignment 1, Q4: Two-Layer Neural Network
Written by: Guo Chengkun
Proofreading: Maoli
Review: Leng Xiaoyang
1 Task

In this exercise, we will implement a fully connected neural network classifier and then test it on the CIFAR-10 dataset.

2 Knowledge Points

2.1 Neurons

To talk about neural networks, we have to start with neurons. The neuron is the basic unit of a neural network, and its structure is shown in the following diagram:

Despite its name, a neuron is essentially just a function. Like any function it has an input and an output: the input of a neuron is a vector whose components are the values $x_i$ (left side of the diagram), and the output is a single value (right side of the diagram). In between, the neuron does only two things. First, it applies an affine transformation to the input vector (a linear map plus a translation), which written as a formula is

$$\sum_i w_i x_i + b \tag{1}$$

Second, it applies a nonlinear transformation to the result of the affine transformation, that is,

$$f\left(\sum_i w_i x_i + b\right) \tag{2}$$

where $f$ is called the activation function. The weights $w_i$ and the bias $b$ are attributes specific to each neuron; in fact, different neurons differ mainly in three things: the weights, the bias term, and the activation function. In general, the work of a neuron is to multiply each input $x_i$ by its corresponding weight, add the results together with the bias $b$, and pass the final value through an activation function; the result is the neuron's output.
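As a minimal sketch of these two steps, assuming NumPy and using made-up sample numbers (the helper name neuron_forward is an illustration, not part of the assignment code; the sigmoid activation is defined just below):

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation, introduced in the next paragraph.
    return 1.0 / (1.0 + np.exp(-z))

def neuron_forward(x, w, b, f):
    """Output of a single neuron: f(sum_i w_i * x_i + b)."""
    affine = np.dot(w, x) + b   # affine transformation, Eq. (1)
    return f(affine)            # nonlinear transformation, Eq. (2)

x = np.array([0.5, -1.0, 2.0])   # sample input vector
w = np.array([0.1, 0.4, -0.3])   # sample weights
print(neuron_forward(x, w, b=0.2, f=sigmoid))
```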

There are many kinds of activation functions. One of them is the sigmoid:

$$\sigma(x) = \frac{1}{1 + e^{-x}} \tag{3}$$
This function is interesting in that it maps any real number in $\mathbb{R}$ to the interval $(0, 1)$, which makes a probabilistic interpretation possible. However, it has two problems when training a neural network. First, when the value of the affine transformation is very large or very small, the gradient of $f$ with respect to $x$ is close to 0, which hurts the gradient computation in the backpropagation algorithm discussed later. Second, its output is not zero-centered, which also makes the gradient updates unstable. The graph of the sigmoid function looks roughly like this:
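A small sketch, assuming NumPy, of the saturation problem described above: the sigmoid's gradient is $\sigma(x)(1-\sigma(x))$, which shrinks toward 0 for inputs far from zero.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Derivative of the sigmoid: sigma(x) * (1 - sigma(x)).
    s = sigmoid(x)
    return s * (1.0 - s)

# For inputs far from 0 the gradient is nearly 0 (the function saturates),
# which is the first problem mentioned above.
for v in (-10.0, 0.0, 10.0):
    print(v, sigmoid(v), sigmoid_grad(v))
```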

If we now append the constant 1 to the input vector to obtain $\vec{x}$, and fold the bias into the weight vector to obtain $\vec{w}$, then the affine transformation becomes a single dot product $\vec{w} \cdot \vec{x}$.
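As a small sketch of this bias trick, assuming NumPy (the sample numbers are invented for illustration and are not part of the assignment):

```python
import numpy as np

x = np.array([0.5, -1.0, 2.0])   # original input
w = np.array([0.1, 0.4, -0.3])   # original weights
b = 0.2                          # bias

x_aug = np.append(x, 1.0)        # input with the constant 1 appended
w_aug = np.append(w, b)          # weights with the bias folded in

# The affine transformation is now a single dot product.
assert np.isclose(np.dot(w, x) + b, np.dot(w_aug, x_aug))
```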
