Caffe Layers in Detail

1. Basic layer definitions and parameters

To define a network in Caffe, you first need to understand its basic layer interfaces. The following introduces the vision layers and their parameters.

Vision Layers

Vision layers are declared in the header ./include/caffe/vision_layers.hpp. Their inputs and outputs are generally images: they take the 2D geometry of an image into account and process the input according to that structure. In particular, most vision layers produce their output by operating on local regions of the input, whereas other layer types ignore the spatial structure and treat the input as one large one-dimensional vector.
Convolution:

Layer type: Convolution
CPU implementation: ./src/caffe/layers/convolution_layer.cpp
CUDA GPU implementation: ./src/caffe/layers/convolution_layer.cu
Parameters (ConvolutionParameter convolution_param)
Required:
num_output (c_o): the number of filters (convolution kernels)
kernel_size (or kernel_h and kernel_w): specifies the height and width of each filter
Strongly recommended:
weight_filler [default type: 'constant', value: 0]
Optional:
bias_term [default true]: specifies whether to learn and apply a set of additive biases to the filter outputs
pad (or pad_h and pad_w) [default 0]: specifies the number of pixels to (implicitly) add to each side of the input, i.e., how much the input image is expanded at its edges
stride (or stride_h and stride_w) [default 1]: specifies the interval at which the filters are applied to the input
group (g) [default 1]: if g > 1, the connectivity of each filter is restricted to a subset of the input. Specifically, the input and output channels are separated into g groups, and the i-th group of output channels is connected only to the i-th group of input channels.

Each filter produces one feature map.
Input size: n * c_i * h_i * w_i (batch size × channels × height × width)
Output size: n * c_o * h_o * w_o, where h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1 and w_o likewise.
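As a quick sanity check, the output-size formula above can be computed directly. The concrete dimensions below (227×227 input, 11×11 kernels, stride 4) are an illustrative assumption, matching AlexNet's first convolution layer:

```python
def conv_output_dim(input_dim, kernel, pad=0, stride=1):
    # h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1, with the
    # division rounded down (Caffe's convolution uses floor).
    return (input_dim + 2 * pad - kernel) // stride + 1

# 227x227 input, 11x11 kernels, stride 4, no padding -> 55x55 feature maps
print(conv_output_dim(227, kernel=11, pad=0, stride=4))  # 55

# "Same" padding: 5x5 kernel with pad 2 and stride 1 preserves the size
print(conv_output_dim(32, kernel=5, pad=2, stride=1))  # 32
```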

Pooling:
The pooling layer compresses the spatial dimensions of a feature map by reducing each neighborhood to a single value. The currently supported types are MAX, AVE (average), and STOCHASTIC.
Parameters are:
kernel_size: the height and width of each filter
pool [default MAX]: the pooling method (MAX, AVE, or STOCHASTIC)
pad [default 0]: the number of pixels added to each side of the input image
stride [default 1]: the interval at which the filters are applied to the input
Input size: n * c * h_i * w_i
Output size: n * c * h_o * w_o, where h_o and w_o are computed in the same way as for convolution.
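The pooling output size follows the same formula, with one caveat taken from Caffe's implementation rather than from the text above: Caffe's pooling layer rounds the division up (ceil) instead of down, so the result can differ from convolution by one when the stride does not divide evenly.

```python
import math

def pool_output_dim(input_dim, kernel, pad=0, stride=1):
    # Same formula as convolution, but Caffe's pooling layer uses
    # ceiling division instead of floor.
    return int(math.ceil((input_dim + 2 * pad - kernel) / stride)) + 1

# 3x3 max pooling with stride 2 on a 55x55 input -> 27x27
print(pool_output_dim(55, kernel=3, stride=2))  # 27

# When the stride does not divide evenly, ceil adds one extra output:
# an 8-wide input with kernel 3, stride 2 gives 4 here (floor would give 3)
print(pool_output_dim(8, kernel=3, stride=2))  # 4
```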

Local Response Normalization (LRN):
Layer type: LRN
CPU implementation: ./src/caffe/layers/lrn_layer.cpp
CUDA GPU implementation: ./src/caffe/layers/lrn_layer.cu
Parameters (LRNParameter lrn_param)
Optional:
local_size [default 5]: the number of channels to sum over (for cross-channel LRN) or the side length of the square region to sum over (for within-channel LRN)
alpha [default 1]: the scaling parameter (see formula below)
beta [default 5]: the exponent (see formula below)
norm_region [default ACROSS_CHANNELS]: whether to sum over adjacent channels (ACROSS_CHANNELS) or nearby spatial locations (WITHIN_CHANNEL)
The local response normalization layer performs a kind of "lateral inhibition" by normalizing over local input regions. In ACROSS_CHANNELS mode, the local regions extend across nearby channels but have no spatial extent (i.e., they have shape local_size x 1 x 1). In WITHIN_CHANNEL mode, the local regions extend spatially but are confined to separate channels (i.e., they have shape 1 x local_size x local_size). Each input value is divided by (1 + (\alpha/n) \sum_i x_i^2)^{\beta}, where n is the size of each local region, and the sum is taken over the region centered at that value (with zero padding added where necessary).
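The formula above can be sketched in NumPy for ACROSS_CHANNELS mode. This is a minimal illustration of the math, not Caffe's actual implementation; the function name and loop structure are my own:

```python
import numpy as np

def lrn_across_channels(x, local_size=5, alpha=1.0, beta=5.0):
    """Divide each value of x (shape: channels x height x width) by
    (1 + (alpha / n) * sum_i x_i^2) ** beta, where the sum runs over a
    window of local_size channels centered on each channel, with
    implicit zero padding at the channel boundaries."""
    channels = x.shape[0]
    half = local_size // 2
    out = np.empty_like(x, dtype=float)
    for c in range(channels):
        # Clip the channel window at the boundaries (zero padding means
        # out-of-range channels simply contribute nothing to the sum).
        lo, hi = max(0, c - half), min(channels, c + half + 1)
        sq_sum = (x[lo:hi] ** 2).sum(axis=0)
        out[c] = x[c] / (1.0 + (alpha / local_size) * sq_sum) ** beta
    return out
```

Note that the normalizer divides by n = local_size even at the boundaries, where fewer than n channels actually contribute, mirroring the zero-padding convention in the formula.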
