Neural Network Lecture Video
What are the neurons?
Each neuron stores a number (its activation) and outputs a function value computed from its inputs
How are they connected?
a1, a2, a3, a4, ..., an represent the activation values of the first layer
w1, w2, ..., wn represent the weight values
Calculate the weighted sum w1·a1 + w2·a2 + ... + wn·an. Positive weights are marked green and negative weights red; the dimmer the color, the closer the weight is to 0.
This assigns positive values to the weights in the region we care about and 0 to all other weights, so the weighted sum only accumulates pixel values from that region of interest. To check whether a line is present, assign negative weights to the pixels surrounding the line: the weighted sum is then largest when the middle pixels are bright and the surrounding pixels are dark.
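The idea above can be sketched numerically. This is a minimal, hypothetical 3×3 line detector (not from the video): positive weights on the middle row, negative weights above and below, so the weighted sum peaks when the middle is bright and the surroundings are dark.

```python
import numpy as np

# Hypothetical 3x3 line-detector weights: positive on the middle row,
# negative on the rows above and below it.
weights = np.array([[-1.0, -1.0, -1.0],
                    [ 1.0,  1.0,  1.0],
                    [-1.0, -1.0, -1.0]])

# A patch with a bright horizontal line on a dark background...
line_patch = np.array([[0.0, 0.0, 0.0],
                       [1.0, 1.0, 1.0],
                       [0.0, 0.0, 0.0]])

# ...versus a uniformly bright patch.
flat_patch = np.ones((3, 3))

print(np.sum(weights * line_patch))  # 3.0  — matches the weight pattern
print(np.sum(weights * flat_patch))  # -3.0 — surrounding brightness cancels it
```

The bright-middle/dark-surround patch scores the highest possible weighted sum for this weight pattern, which is exactly the "confirm whether there is a line" trick described above.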
But this weighted sum can be any number, so a function is needed to compress it into the range 0–1. The sigmoid function has the property that for very negative x its value is close to 0, and for very positive x its value is close to 1.
Sometimes you don't want the neuron to light up merely when the weighted sum is greater than 0, but only when it exceeds some threshold; a bias value can be added to make this adjustment. (The weights tell you what pixel pattern the second-layer neuron picks up on; the bias tells you how large the weighted sum must be before the neuron's excitation is meaningful.)
Expressed in a more concise formula: a(1) = σ(W·a(0) + b), where W is the weight matrix, a(0) the vector of first-layer activations, and b the bias vector
This also makes programming easier, using NumPy's out-of-the-box optimized matrix functions
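A minimal sketch of that matrix form in NumPy (the layer sizes here are illustrative, not from the video): one matrix multiply plus a bias computes a whole layer of weighted sums at once, then sigmoid squashes them into (0, 1).

```python
import numpy as np

def sigmoid(x):
    # Squash any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Toy sizes: 4 input neurons, 3 neurons in the next layer
rng = np.random.default_rng(0)
a0 = rng.random(4)                # activations of the first layer
W = rng.standard_normal((3, 4))   # weight matrix, one row per next-layer neuron
b = rng.standard_normal(3)        # one bias per next-layer neuron

# The whole next layer in one line: a1 = sigmoid(W @ a0 + b)
a1 = sigmoid(W @ a0 + b)
print(a1.shape)  # (3,)
print(a1)        # every entry lies strictly between 0 and 1
```

Each row of W holds the weights of one next-layer neuron, so `W @ a0` computes all the weighted sums simultaneously, which is exactly why the matrix notation maps so cleanly onto optimized library code.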
A question is left open here. (Answered later.)
The sigmoid function and the ReLU function for processing neuron activations
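The contrast between the two activation functions can be seen on a few sample inputs (a sketch, using standard definitions of sigmoid and ReLU): sigmoid smoothly squashes everything into (0, 1), while ReLU simply clips negatives to 0 and passes positives through unchanged.

```python
import numpy as np

def sigmoid(x):
    # Smooth squash into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # max(0, x): zero for negative inputs, identity for positive ones
    return np.maximum(0.0, x)

xs = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(sigmoid(xs))  # values near 0 for negative x, 0.5 at 0, near 1 for positive x
print(relu(xs))     # [0. 0. 0. 1. 5.]
```

ReLU is cheaper to compute and avoids sigmoid's flat tails, which is a common reason modern networks favor it.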