First, the conclusion: when the sigmoid is used as the activation function, the cross-entropy cost converges faster and behaves better for global optimization than the quadratic cost. When softmax is used as the activation function with the log-likelihood as the loss function, the slow-convergence drawback is likewise avoided.
For the convergence of the loss function, we expect that the larger the error, the faster the convergence (learning) should be.

First, quadratic cost + sigmoid

(i) Definition
Definition of the quadratic (squared-error) cost function:
C = \frac{(y - a)^2}{2}
where y is the desired output and a is the actual output.
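For a quick sense of scale (the values here are chosen purely for illustration), a desired output y = 1 and an actual output a = 0.8 give

C = \frac{(1 - 0.8)^2}{2} = 0.02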
(ii) Convergence characteristics
Unfortunately, a neural unit that uses the quadratic cost as its loss function does not have this property (the reference gives a very intuitive example). The analysis is as follows:
For a neural unit, the relationship between input x and corresponding output a satisfies
z = wx + b
a = \sigma(z)
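As a minimal sketch, this single neural unit can be written out in Python as follows (the helper names sigmoid, forward, and quadratic_cost are my own choices for this illustration, not part of the original text):

```python
import numpy as np

def sigmoid(z):
    # Logistic sigmoid: squashes z into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, b, x):
    # Single neural unit: weighted input z followed by the sigmoid activation.
    z = w * x + b
    a = sigmoid(z)
    return z, a

def quadratic_cost(y, a):
    # Quadratic (squared-error) cost C = (y - a)^2 / 2 for one training example.
    return 0.5 * (y - a) ** 2
```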
According to the chain rule, the corresponding partial derivatives are:
\frac{\partial C}{\partial w} = (\sigma(z) - y)\,\sigma'(z)\,x

\frac{\partial C}{\partial b} = (\sigma(z) - y)\,\sigma'(z)
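These two expressions are easy to check numerically. The sketch below reuses the hypothetical helpers from the previous snippet and compares the chain-rule gradient in w with a finite-difference estimate (the concrete values of w, b, x, y are arbitrary examples):

```python
def gradients(w, b, x, y):
    # Chain rule for the quadratic cost with a sigmoid unit:
    #   dC/dw = (sigma(z) - y) * sigma'(z) * x
    #   dC/db = (sigma(z) - y) * sigma'(z)
    z, a = forward(w, b, x)
    sigma_prime = a * (1.0 - a)  # sigma'(z) = sigma(z) * (1 - sigma(z))
    dC_dw = (a - y) * sigma_prime * x
    dC_db = (a - y) * sigma_prime
    return dC_dw, dC_db

def numeric_dC_dw(w, b, x, y, eps=1e-6):
    # Central finite difference in w, as a sanity check on the analytic gradient.
    c_plus = quadratic_cost(y, forward(w + eps, b, x)[1])
    c_minus = quadratic_cost(y, forward(w - eps, b, x)[1])
    return (c_plus - c_minus) / (2.0 * eps)

w, b, x, y = 0.6, 0.9, 1.0, 0.0
print(gradients(w, b, x, y)[0])   # analytic dC/dw
print(numeric_dC_dw(w, b, x, y))  # finite-difference estimate; should agree closely
```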
If the activation function is the sigmoid, then from its shape and the identity \sigma'(z) = \sigma(z)(1 - \sigma(z)) it follows that \sigma'(z) tends to 0 when \sigma(z) approaches 0 or 1, and \sigma'(z) is largest when \sigma(z) is close to 0.5.
For example, take y = 0: when \sigma(z) = 1, the error \sigma(z) - y between the actual and desired outputs is at its maximum, yet at this point \sigma'(z) is close to 0, so the partial derivatives \partial C / \partial w and \partial C / \partial b are also close to 0 and the neural unit learns very slowly, which is exactly the opposite of the behaviour we want.
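This slowdown is easy to see numerically. The small experiment below reuses the hypothetical forward and gradients helpers from the earlier sketches; the weight and bias values are only illustrative:

```python
# Case 1: y = 0 but the unit is saturated near a = 1.
# The error sigma(z) - y is close to its maximum, yet sigma'(z) is almost 0,
# so both partial derivatives are tiny and learning stalls.
z, a = forward(w=5.0, b=5.0, x=1.0)
print(a, gradients(w=5.0, b=5.0, x=1.0, y=0.0))

# Case 2: y = 0 and the output sits near a = 0.5.
# The error is smaller, but sigma'(z) is at its largest, so the gradient is much bigger.
z, a = forward(w=0.0, b=0.0, x=1.0)
print(a, gradients(w=0.0, b=0.0, x=1.0, y=0.0))
```

In this sketch the saturated unit has roughly twice the error of the moderately wrong one but a gradient several orders of magnitude smaller, which is the learning slowdown described above.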