1. Visualizing Higher-Layer Features of a Deep Network
This paper presents two visualization methods, along with one existing technique for comparison.
1. Maximizing activation
Once the deep network is trained, all of its parameters are fixed. The input, initialized to random values, is then updated by repeated gradient ascent so as to maximize the activation of a chosen neuron. The input at convergence is the one that maximizes that neuron's activation, and it reveals the feature the neuron has learned.
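The procedure can be sketched for a single neuron. This is a minimal illustration, not the paper's implementation: the weights `w`, the bias `b`, the step size, and the unit-norm constraint are all assumptions, and for a monotone nonlinearity such as tanh, maximizing the pre-activation `w @ x` is equivalent to maximizing the activation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed (trained) parameters of one neuron, a(x) = tanh(w @ x + b).
# Since tanh is monotone, maximizing the pre-activation w @ x is equivalent.
w = rng.normal(size=8)
b = 0.1

x = rng.normal(size=8)
x /= np.linalg.norm(x)          # constrain ||x|| so the problem is bounded
for _ in range(100):
    x = x + 0.5 * w             # gradient of w @ x with respect to x is w
    x /= np.linalg.norm(x)      # project back onto the unit sphere

# Under ||x|| = 1 the maximizer is w / ||w||: the neuron's "preferred input".
print(np.allclose(x, w / np.linalg.norm(w)))
```

For a real multi-layer network the gradient of the activation with respect to the input would be obtained by backpropagation rather than in closed form, but the ascent-and-project loop is the same.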
2. Sampling from a unit of a deep belief network
Clamp the activation of a chosen hidden unit to 1, then run the network's generative model to draw samples consistent with that constraint; the distribution of inputs that the unit responds to is estimated from these samples.
3. Linear combination of previous layers' filters
This is a pre-existing technique: a filter in an upper layer is visualized as a linear combination of the filters in the layer below, weighted by the connections between the two layers.
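The combination reduces to a matrix product. The shapes and weights below are made up for illustration; the point is only that an upper unit's visualization lives in input (pixel) space once the lower filters are mixed by the upper unit's incoming weights.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical weights: W1 maps 16-pixel inputs to 8 first-layer filters,
# w2 holds one second-layer unit's weights over those 8 filters.
W1 = rng.normal(size=(8, 16))   # each row is a first-layer filter "image"
w2 = rng.normal(size=8)         # second-layer unit's incoming weights

# Visualize the second-layer unit as a weighted sum of first-layer filters.
upper_filter_image = w2 @ W1    # shape (16,): an image in input space
print(upper_filter_image.shape)
```

A limitation noted by this line of work is that the linear combination ignores the nonlinearities between layers, which is part of what motivates the two methods above.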
Conclusion
1. Different network architectures and models learn different filters.
2. Filters learned by a good model are usually easier to interpret, but this is not always the case: some of a model's features may look poor even though the model itself performs well.
3. Features in deeper layers are higher-level and can be viewed as combinations of lower-layer features.
Copyright notice: this is the blogger's original article; please do not reproduce it without the blogger's permission.