Neural Networks
We will use the following diagram to denote a single neuron:
This "neuron" is a computational unit that takes as input x_1, x_2, x_3 (and a +1 intercept term), and outputs h_{W,b}(x) = f(W^T x) = f(sum_{i=1}^3 W_i x_i + b), where f : R -> R is called the activation function. In these notes, we will choose f to be the sigmoid function:

f(z) = 1 / (1 + exp(-z))
Thus, our single neuron corresponds exactly to the input-output mapping defined by logistic regression.
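As a quick numerical sketch of this mapping (the weights, bias, and input below are made-up values, not from the notes), using NumPy:

```python
import numpy as np

def sigmoid(z):
    """The activation function f(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# Made-up weights, bias, and input for one neuron with 3 inputs.
w = np.array([0.2, -0.5, 0.1])
b = 0.3                       # weight on the +1 intercept term
x = np.array([1.0, 2.0, 3.0])

h = sigmoid(w @ x + b)        # h_{W,b}(x) = f(sum_i W_i x_i + b)
assert 0.0 < h < 1.0          # a sigmoid neuron's output lies in (0, 1)
```

This is exactly the hypothesis that logistic regression would compute for the same weights.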
Although these notes will use the sigmoid function, it is worth noting that another common choice for f is the hyperbolic tangent, or tanh, function:

f(z) = tanh(z) = (e^z - e^{-z}) / (e^z + e^{-z})
Here are plots of the sigmoid and tanh functions:
Finally, one identity that will be useful later: if f(z) = 1 / (1 + exp(-z)) is the sigmoid function, then its derivative is given by f'(z) = f(z)(1 - f(z)).
Either the sigmoid or the tanh function can serve as the nonlinearity.
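Both activation functions and the derivative identity above are easy to check numerically; a minimal sketch using NumPy:

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid f(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    """Derivative via the identity f'(z) = f(z) * (1 - f(z))."""
    s = sigmoid(z)
    return s * (1.0 - s)

# Check the identity against a central-difference numerical derivative.
z = np.linspace(-5.0, 5.0, 101)
h = 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
assert np.allclose(sigmoid_prime(z), numeric, atol=1e-8)

# tanh is a rescaled, shifted sigmoid: tanh(z) = 2*sigmoid(2z) - 1.
assert np.allclose(np.tanh(z), 2 * sigmoid(2 * z) - 1)
```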
Neural Network Model
A neural network is put together by hooking together many of our simple "neurons," so that the output of a neuron can be the input of another. For example, here is a small neural network:
In this figure, we have used circles to also denote the inputs to the network. The circles labeled "+1" are called bias units, and correspond to the intercept term. The leftmost layer of the network is called the input layer, and the rightmost layer the output layer (which, in this example, has only one node). The middle layer of nodes is called the hidden layer, because its values are not observed in the training set. We also say that our example neural network has 3 input units (not counting the bias unit), 3 hidden units, and 1 output unit.
Our neural network has parameters (W, b) = (W^(1), b^(1), W^(2), b^(2)), where we write W_ij^(l) to denote the parameter (or weight) associated with the connection between unit j in layer l and unit i in layer l+1. (Note the order of the indices.) Also, b_i^(l) is the bias associated with unit i in layer l+1.
We will write a_i^(l) to denote the activation (meaning output value) of unit i in layer l. For l = 1, we also use a_i^(1) = x_i to denote the i-th input. Given a fixed setting of the parameters W, b, our neural network defines a hypothesis h_{W,b}(x) that outputs a real number. Specifically, the computation that this neural network represents is given by:

a_1^(2) = f(W_11^(1) x_1 + W_12^(1) x_2 + W_13^(1) x_3 + b_1^(1))
a_2^(2) = f(W_21^(1) x_1 + W_22^(1) x_2 + W_23^(1) x_3 + b_2^(1))
a_3^(2) = f(W_31^(1) x_1 + W_32^(1) x_2 + W_33^(1) x_3 + b_3^(1))
h_{W,b}(x) = a_1^(3) = f(W_11^(2) a_1^(2) + W_12^(2) a_2^(2) + W_13^(2) a_3^(2) + b_1^(2))
In other words, each layer computes a linear combination of its inputs followed by a nonlinearity.
In the sequel, we also let z_i^(l) denote the total weighted sum of inputs to unit i in layer l, including the bias term (e.g., z_i^(2) = sum_{j=1}^3 W_ij^(1) x_j + b_i^(1)), so that a_i^(l) = f(z_i^(l)).
Note that this easily lends itself to a more compact notation. Specifically, if we extend the activation function f(·) to apply to vectors in an element-wise fashion (i.e., f([z_1, z_2, z_3]) = [f(z_1), f(z_2), f(z_3)]), then we can write the equations above more compactly:

z^(2) = W^(1) x + b^(1)
a^(2) = f(z^(2))
z^(3) = W^(2) a^(2) + b^(2)
h_{W,b}(x) = a^(3) = f(z^(3))
We call this step forward propagation. More generally, writing a^(1) = x, the activations of layer l+1 can be computed from those of layer l as z^(l+1) = W^(l) a^(l) + b^(l) and a^(l+1) = f(z^(l+1)).
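The compact equations above can be sketched directly in code. The layer sizes match the example network (3 inputs, 3 hidden units, 1 output); the parameter values are illustrative random numbers, not from the notes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative parameters for a 3-3-1 network.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.01, size=(3, 3))  # W^(1): layer 1 -> layer 2
b1 = np.zeros(3)                         # b^(1)
W2 = rng.normal(0.0, 0.01, size=(1, 3))  # W^(2): layer 2 -> layer 3
b2 = np.zeros(1)                         # b^(2)

def forward(x):
    """Forward propagation: z^(l+1) = W^(l) a^(l) + b^(l), a^(l+1) = f(z^(l+1))."""
    z2 = W1 @ x + b1
    a2 = sigmoid(z2)
    z3 = W2 @ a2 + b2
    a3 = sigmoid(z3)          # h_{W,b}(x)
    return a3

x = np.array([0.5, -0.2, 0.1])
h = forward(x)
assert h.shape == (1,) and 0.0 < h[0] < 1.0
```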
Backpropagation Algorithm
For a single training example (x, y), we define the cost function with respect to that single example to be:

J(W, b; x, y) = (1/2) ||h_{W,b}(x) - y||^2
This is a (one-half) squared-error cost function. Given a training set of m examples, we then define the overall cost function to be:

J(W, b) = (1/m) sum_{i=1}^m J(W, b; x^(i), y^(i)) + (lambda/2) sum_l sum_i sum_j (W_ji^(l))^2

The second term is a weight decay (regularization) term with parameter lambda.
J(W, b; x, y) is the squared-error cost with respect to a single example; J(W, b) is the overall cost function, which includes the weight decay term.
Our goal is to minimize J(W, b) as a function of W and b. To train our neural network, we will initialize each parameter W_ij^(l) and each b_i^(l) to a small random value near zero (say, according to a Normal(0, epsilon^2) distribution for some small epsilon, say 0.01), and then apply an optimization algorithm such as batch gradient descent. Finally, note that it is important to initialize the parameters randomly, rather than to all 0's. If all the parameters start off at identical values, then all the hidden layer units will end up learning the same function of the input (more formally, W_ij^(1) will be the same for all values of i, so that a_1^(2) = a_2^(2) = a_3^(2) for any input x). The random initialization serves the purpose of symmetry breaking.
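The symmetry problem can be seen in a small sketch (the input vector and layer sizes are illustrative): with identical weights every hidden unit produces the identical activation, while small random weights break the tie:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -0.2, 0.1])
b1 = np.zeros(3)

# All-zero initialization: every hidden unit computes the same activation.
W1 = np.zeros((3, 3))
a2 = sigmoid(W1 @ x + b1)
assert np.all(a2 == a2[0])            # all hidden activations identical

# Small random initialization (epsilon = 0.01) breaks the symmetry.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.01, size=(3, 3))
a2 = sigmoid(W1 @ x + b1)
assert not np.all(a2 == a2[0])        # hidden activations now differ
```

Because gradient descent preserves this symmetry (identical units receive identical gradients), the units would remain identical throughout training without the random start.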
One iteration of gradient descent updates the parameters W, b as follows:

W_ij^(l) := W_ij^(l) - alpha (d/dW_ij^(l)) J(W, b)
b_i^(l) := b_i^(l) - alpha (d/db_i^(l)) J(W, b)

where alpha is the learning rate.
The two update rules above differ slightly because weight decay is applied to W but not to b.
The intuition behind the backpropagation algorithm is as follows. Given a training example (x, y), we will first run a "forward pass" to compute all the activations throughout the network, including the output value of the hypothesis h_{W,b}(x). Then, for each node i in layer l, we would like to compute an "error term" δ_i^(l) that measures how much that node was "responsible" for any errors in our output.
For an output node, we can directly measure the difference between the network's activation and the true target value, and use that to define δ_i^(n_l) (where layer n_l is the output layer). For hidden units, we will compute δ_i^(l) based on a weighted average of the error terms of the nodes that use a_i^(l) as an input. In detail, here is the backpropagation algorithm:
1. Perform a feedforward pass, computing the activations for layers L_2, L_3, and so on up to the output layer L_{n_l}.
2. For each output unit i in layer n_l (the output layer), set

δ_i^(n_l) = -(y_i - a_i^(n_l)) · f'(z_i^(n_l))
3. For l = n_l - 1, n_l - 2, ..., 2:
For each node i in layer l, set

δ_i^(l) = (sum_{j=1}^{s_{l+1}} W_ji^(l) δ_j^(l+1)) f'(z_i^(l))
4. Compute the desired partial derivatives, which are given by:

(d/dW_ij^(l)) J(W, b; x, y) = a_j^(l) δ_i^(l+1)
(d/db_i^(l)) J(W, b; x, y) = δ_i^(l+1)
Finally, we can also re-write the algorithm in matrix-vector notation. We will use "•" to denote the element-wise product operator (denoted ".*" in MATLAB or Octave, and also called the Hadamard product), so that if a = b • c, then a_i = b_i c_i. Similar to how we extended the definition of f(·) to apply element-wise to vectors, we also do the same for f'(·) (so that f'([z_1, z_2, z_3]) = [f'(z_1), f'(z_2), f'(z_3)]).
The algorithm can then be written:
1. Perform a feedforward pass, computing the activations for layers L_2, L_3, and so on up to the output layer L_{n_l}, using the equations defining the forward propagation steps.
2. For the output layer (layer n_l), set

δ^(n_l) = -(y - a^(n_l)) • f'(z^(n_l))
3. For l = n_l - 1, n_l - 2, ..., 2, set

δ^(l) = ((W^(l))^T δ^(l+1)) • f'(z^(l))
4. Compute the desired partial derivatives:

∇_{W^(l)} J(W, b; x, y) = δ^(l+1) (a^(l))^T
∇_{b^(l)} J(W, b; x, y) = δ^(l+1)
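A minimal sketch of these four steps for a single example, with one gradient verified against a finite-difference approximation (the 3-3-1 network, inputs, and target below are made-up values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
sizes = [3, 3, 1]                        # layer sizes s_1, s_2, s_3
W = [rng.normal(0.0, 0.01, (sizes[l + 1], sizes[l])) for l in range(2)]
b = [np.zeros(sizes[l + 1]) for l in range(2)]

def forward(x):
    """Step 1: feedforward pass, storing all activations."""
    a = [x]
    for l in range(2):
        a.append(sigmoid(W[l] @ a[-1] + b[l]))
    return a

def backprop(x, y):
    """Steps 2-4: gradients of J(W,b;x,y) = 0.5*||h - y||^2 for one example."""
    a = forward(x)
    # Output layer: delta^(n_l) = -(y - a^(n_l)) . f'(z^(n_l)),
    # using f'(z) = a * (1 - a) for the sigmoid.
    delta = -(y - a[2]) * a[2] * (1 - a[2])
    gW, gb = [None, None], [None, None]
    gW[1] = np.outer(delta, a[1]); gb[1] = delta
    # Hidden layer: delta^(l) = ((W^(l))^T delta^(l+1)) . f'(z^(l)).
    delta = (W[1].T @ delta) * a[1] * (1 - a[1])
    gW[0] = np.outer(delta, a[0]); gb[0] = delta
    return gW, gb

# Finite-difference check on one weight.
x = np.array([0.5, -0.2, 0.1]); y = np.array([1.0])
gW, gb = backprop(x, y)
h = 1e-6
W[0][1, 2] += h; Jp = 0.5 * np.sum((forward(x)[2] - y) ** 2)
W[0][1, 2] -= 2 * h; Jm = 0.5 * np.sum((forward(x)[2] - y) ** 2)
W[0][1, 2] += h
assert abs((Jp - Jm) / (2 * h) - gW[0][1, 2]) < 1e-8
```

Such a numerical gradient check is a standard sanity test for a backpropagation implementation.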
Implementation note: In steps 2 and 3 above, we need to compute f'(z_i^(l)) for each value of i. Assuming f is the sigmoid activation function, we would already have stored away a_i^(l) from the forward pass through the network. Thus, using the expression that we worked out earlier for f'(z), we can compute this as f'(z_i^(l)) = a_i^(l) (1 - a_i^(l)).
Finally, we are ready to describe the full gradient descent algorithm. In the pseudo-code below, ΔW^(l) is a matrix (of the same dimension as W^(l)), and Δb^(l) is a vector (of the same dimension as b^(l)). Note that in this notation, "ΔW^(l)" is a matrix; in particular, it isn't "Δ times W^(l)". We implement one iteration of batch gradient descent as follows:
1. Set ΔW^(l) := 0, Δb^(l) := 0 (matrix/vector of zeros) for all l.
2. For i = 1 to m:
   a. Use backpropagation to compute ∇_{W^(l)} J(W, b; x^(i), y^(i)) and ∇_{b^(l)} J(W, b; x^(i), y^(i)).
   b. Set ΔW^(l) := ΔW^(l) + ∇_{W^(l)} J(W, b; x^(i), y^(i)).
   c. Set Δb^(l) := Δb^(l) + ∇_{b^(l)} J(W, b; x^(i), y^(i)).
3. Update the parameters:

W^(l) := W^(l) - alpha [ (1/m) ΔW^(l) + lambda W^(l) ]
b^(l) := b^(l) - alpha [ (1/m) Δb^(l) ]
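Putting the pieces together, one iteration of batch gradient descent can be sketched as follows. The tiny data set, learning rate alpha, and weight decay lambda are illustrative choices, not values from the notes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
sizes = [3, 3, 1]                                # s_1, s_2, s_3
W = [rng.normal(0.0, 0.01, (sizes[l + 1], sizes[l])) for l in range(2)]
b = [np.zeros(sizes[l + 1]) for l in range(2)]
alpha, lam = 0.5, 1e-4                           # learning rate, weight decay

# Tiny illustrative training set of m = 4 examples.
X = rng.normal(size=(4, 3))
Y = np.array([[0.0], [1.0], [1.0], [1.0]])

def forward(x):
    a = [x]
    for l in range(2):
        a.append(sigmoid(W[l] @ a[-1] + b[l]))
    return a

def cost():
    """J(W, b): average squared error plus the weight decay term."""
    fit = sum(0.5 * np.sum((forward(x)[2] - y) ** 2) for x, y in zip(X, Y)) / len(X)
    decay = 0.5 * lam * sum(np.sum(Wl ** 2) for Wl in W)
    return fit + decay

def batch_step():
    """One iteration of batch gradient descent (steps 1-3 above)."""
    dW = [np.zeros_like(Wl) for Wl in W]         # Delta W^(l) := 0
    db = [np.zeros_like(bl) for bl in b]         # Delta b^(l) := 0
    for x, y in zip(X, Y):                       # for i = 1 to m
        a = forward(x)
        delta = -(y - a[2]) * a[2] * (1 - a[2])  # output-layer error term
        dW[1] += np.outer(delta, a[1]); db[1] += delta
        delta = (W[1].T @ delta) * a[1] * (1 - a[1])
        dW[0] += np.outer(delta, a[0]); db[0] += delta
    m = len(X)
    for l in range(2):                           # parameter update
        W[l] -= alpha * (dW[l] / m + lam * W[l])
        b[l] -= alpha * (db[l] / m)

before = cost()
for _ in range(50):
    batch_step()
assert cost() < before                           # the cost decreases
```

Repeating such iterations drives J(W, b) down; note that weight decay appears in the W update but not the b update, matching step 3 above.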
Sparse Autoencoder