In an activation layer, an activation function (in effect, a function transformation) is applied to the input data element by element. A blob is taken from bottom and, after the operation, a blob is output to top. The operation does not change the size of the data; that is, the input and output have the same dimensions.
Input: n*c*h*w
Output: n*c*h*w
Commonly used activation functions include Sigmoid, TanH, ReLU, and so on.
1. Sigmoid
Each input element is transformed with the sigmoid function. This layer is simple to configure and has no additional parameters.
f(x) = 1 / (1 + exp(-x))
Layer Type: Sigmoid
Example:
layer {
  name: "encode1neuron"
  bottom: "encode1"
  top: "encode1neuron"
  type: "Sigmoid"
}
2. ReLU / Rectified-Linear and Leaky-ReLU
ReLU is currently the most widely used activation function, mainly because it converges faster while delivering comparable accuracy.
The standard ReLU function is max(x, 0): when x > 0, the output is x; when x <= 0, the output is 0.
f(x) = max(x, 0)
Layer Type: ReLU
Optional Parameters:
negative_slope: default is 0. Setting this value modifies the standard ReLU function: negative inputs are no longer set to 0, but are instead multiplied by negative_slope (Leaky ReLU; see the second example below).
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "pool1"
}
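To obtain a Leaky ReLU, add a relu_param block that sets negative_slope. A minimal sketch, assuming the same blob names as above and an illustrative slope of 0.1:
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "pool1"
  relu_param {
    negative_slope: 0.1  # f(x) = x for x > 0, 0.1 * x for x <= 0
  }
}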
The ReLU layer supports in-place computation, which means the top and bottom blobs can be the same, reducing memory consumption.
3. TanH / Hyperbolic Tangent
The hyperbolic tangent function is used to transform the data.
f(x) = tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x))
Layer Type: TanH
layer {
  name: "layer"
  bottom: "in"
  top: "out"
  type: "TanH"
}
4. Absolute Value
Computes the absolute value of each input element.
f(x) = abs(x)
Layer Type: AbsVal
layer {
  name: "layer"
  bottom: "in"
  top: "out"
  type: "AbsVal"
}
5. Power
Raises each input element to a power.
f(x) = (shift + scale * x) ^ power
Layer Type: Power
Optional Parameters:
power: default is 1
scale: default is 1
shift: default is 0
layer {
  name: "layer"
  bottom: "in"
  top: "out"
  type: "Power"
  power_param {
    power: 2
    scale: 1
    shift: 0
  }
}
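With the parameters above, the layer computes f(x) = (0 + 1*x)^2 = x^2. As a further sketch (the blob names and values here are illustrative only), the same layer can perform a plain linear scale-and-shift by leaving power at 1:
layer {
  name: "layer"
  bottom: "in"
  top: "out"
  type: "Power"
  power_param {
    power: 1
    scale: 0.5  # f(x) = (0.1 + 0.5*x)^1 = 0.5*x + 0.1
    shift: 0.1
  }
}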
6. BNLL
BNLL is the abbreviation of binomial normal log likelihood.
f(x) = log(1 + exp(x))
Layer Type: BNLL
layer {
  name: "layer"
  bottom: "in"
  top: "out"
  type: "BNLL"
}