(original) The Apply function of torch

Source: Internet
Author: User
Tags: lua

Please credit the source when reposting:

http://www.cnblogs.com/darkknightzh/p/6221633.html

The apply function in torch visits every module of a model; internally it performs a depth-first (preorder) traversal.

The implementation is short (see torch/install/share/lua/5.1/nn/Module.lua):

    -- Run a callback (called with the module as an argument) in preorder over this
    -- module and its children.
    function Module:apply(callback)
        callback(self)

        if self.modules then
            for _, module in ipairs(self.modules) do
                module:apply(callback)
            end
        end
    end

As can be seen, apply first invokes the callback on the current module, then calls itself recursively on each child in self.modules; the recursion stops at modules that have no children.
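The recursion is easy to reproduce outside of torch. The sketch below is plain Lua with no torch dependency (the `Module` constructor and the layer names are made-up stand-ins, not the real nn classes); it only mimics the preorder shape of Module:apply:

```lua
-- A stand-in "module": a table with a name; containers additionally keep
-- their children in a `modules` array, just like nn containers do.
local function Module(name, children)
   return { name = name, modules = children }
end

-- Same shape as nn's Module:apply - call the callback on self first
-- (preorder), then recurse into self.modules when it exists.
local function apply(module, callback)
   callback(module)
   if module.modules then
      for _, child in ipairs(module.modules) do
         apply(child, callback)
      end
   end
end

-- A tiny nested "model": a sequential container with one nested branch.
local model = Module("Sequential", {
   Module("Conv"),
   Module("Concat", { Module("BranchA"), Module("BranchB") }),
   Module("Linear"),
})

local order = {}
apply(model, function(m) order[#order + 1] = m.name end)
print(table.concat(order, " "))
-- Sequential Conv Concat BranchA BranchB Linear
```

Each container is visited before any of its children, which is exactly the numbering seen in the real output further down.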

The test code is as follows:

    require 'dpnn'

    function createModel()
        local net = nn.Sequential()

        net:add(nn.SpatialConvolutionMM(3, 192, 7, 7, 2, 2, 3, 3))
        net:add(nn.SpatialBatchNormalization(192))
        net:add(nn.ReLU())
        net:add(nn.SpatialMaxPooling(3, 3, 2, 2, 1, 1))

        net:add(nn.Inception{
            inputSize    = 192,
            kernelSize   = {3, 5},
            kernelStride = {1, 1},
            outputSize   = {128, 32},
            reduceSize   = {96, 16, 32, 64},
            pool         = nn.SpatialMaxPooling(3, 3, 1, 1, 1, 1),
            batchNorm    = true
        })
        net:add(nn.Inception{
            inputSize    = 256,
            kernelSize   = {3, 5},
            kernelStride = {1, 1},
            outputSize   = {128, 64},
            reduceSize   = {96, 32, 64, 64},
            pool         = nn.SpatialLPPooling(256, 2, 3, 3, 1, 1),
            batchNorm    = false
        })

        net:add(nn.SpatialAveragePooling(7, 7))
        net:add(nn.View(320))
        net:add(nn.Linear(320, 128))
        net:add(nn.Normalize(2))

        return net
    end

    torch.setdefaulttensortype('torch.FloatTensor')

    local model = createModel()
    -- print(model)

    tt = 0
    model:apply(function(module)
        tt = tt + 1
        print(tt, module)
    end)

The output is:

    (abridged; the nested container printouts are shortened to "...")

     1  nn.Sequential { ... }                     -- the whole model
     2  nn.SpatialConvolutionMM(3 -> 192, 7x7, 2,2, 3,3)
     3  nn.SpatialBatchNormalization
     4  nn.ReLU
     5  nn.SpatialMaxPooling(3x3, 2,2, 1,1)
     6  nn.Inception @ nn.DepthConcat { ... }     -- first Inception module
     7  nn.DepthConcat { ... }
     8  nn.Sequential { ... }                     -- branch 1 of the first Inception
     9-14   its six layers: conv, batch norm, ReLU, conv, batch norm, ReLU
    15  nn.Sequential { ... }                     -- branch 2
    16-21   its six layers: conv, batch norm, ReLU, conv, batch norm, ReLU
    22  nn.Sequential { ... }                     -- branch 3
    23-26   its four layers: max pooling, conv, batch norm, ReLU
    27  nn.Sequential { ... }                     -- branch 4
    28-30   its three layers: conv, batch norm, ReLU
    31  nn.Inception @ nn.DepthConcat { ... }     -- second Inception module
    32  nn.DepthConcat { ... }
    33  nn.Sequential { ... }                     -- branch 1 of the second Inception
    34-37   its four layers: conv, ReLU, conv, ReLU
    38  nn.Sequential { ... }                     -- branch 2
    39-42   its four layers: conv, ReLU, conv, ReLU
    43  nn.Sequential { ... }                     -- branch 3
    44  nn.Sequential { ... }                     -- the LP-pooling sub-container of branch 3
    45-48   nn.Square, nn.SpatialAveragePooling(3x3), nn.MulConstant, nn.Sqrt
    49-50   the rest of branch 3: conv, ReLU
    51  nn.Sequential { ... }                     -- branch 4
    52-53   its two layers: conv, ReLU
    54  nn.SpatialAveragePooling(7x7)
    55  nn.View(320)
    56  nn.Linear(320 -> 128)
    57  nn.Normalize(2)

As the results show, the 1st call made by apply receives the entire model, i.e. the topmost container.

Calls 2-5 output:

2 nn.SpatialConvolutionMM(3 -> 192, 7x7, 2,2, 3,3)

3 nn.SpatialBatchNormalization

4 nn.ReLU

5 nn.SpatialMaxPooling(3x3, 2,2, 1,1)

These are the layers before the first Inception module.

The 6th call is the nn.Inception @ nn.DepthConcat wrapper, and the 7th is the nn.DepthConcat inside it. This is the first Inception layer.

The 8th call is the first nn.Sequential branch of the Inception, and calls 9-14 are the layers inside that branch. At this point the traversal has reached the bottom of the first branch.

The 15th call is the second nn.Sequential branch of the Inception, and calls 16-21 are its layers; this is the bottom of the second branch.

The 22nd call is the third nn.Sequential branch, and calls 23-26 are its layers; this is the bottom of the third branch.

The 27th call is the fourth nn.Sequential branch, and calls 28-30 are its layers; this is the bottom of the fourth branch.

At this point the first Inception layer has been fully traversed, depth first.

The 31st call is the second nn.Inception @ nn.DepthConcat wrapper, and the 32nd is its nn.DepthConcat. This is the second Inception layer (note that, to make the first and second Inception layers easy to tell apart, their structures are deliberately not identical).

The 33rd call is the first nn.Sequential branch of this Inception, and calls 34-37 are its layers; the bottom of the first branch.

The 38th call is the second nn.Sequential branch, and calls 39-42 are its layers; the bottom of the second branch.

The 43rd call is the third nn.Sequential branch of the Inception.

The 44th call is the first sub-module of that third branch, itself an nn.Sequential (the LP-pooling block), and calls 45-48 traverse its layers in turn; having reached the bottom, that sub-container is finished.

Calls 49-50 are the last two layers of the third nn.Sequential branch.

The 51st call is the fourth nn.Sequential branch of the Inception, and calls 52-53 are its layers; the bottom of the fourth branch.

At this point the second Inception layer has been fully traversed, depth first.

Calls 54-57 are the remaining top-level layers of the model.

As shown above, apply traverses the model depth first.
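This parent-before-child numbering is what makes apply convenient for whole-model tweaks (nn builds helpers such as training()/evaluate() on top of it). The plain-Lua sketch below, again using toy tables rather than real nn modules, checks the property mechanically: every container receives a smaller visit index than each of its descendants:

```lua
-- Toy containers in the same shape nn uses: children live in `modules`.
local branch = { name = "Sequential", modules = {
   { name = "SpatialConvolution" },
   { name = "ReLU" },
} }
local model = { name = "DepthConcat", modules = { branch, { name = "Linear" } } }

-- Preorder apply, as in nn/Module.lua: parent first, then children.
local function apply(module, callback)
   callback(module)
   if module.modules then
      for _, child in ipairs(module.modules) do apply(child, callback) end
   end
end

-- Number every module in visit order.
local index, n = {}, 0
apply(model, function(m) n = n + 1; index[m] = n end)

-- A container is always numbered before its children.
assert(index[model] == 1)
assert(index[branch] == 2)
assert(index[branch.modules[1]] == 3)  -- the convolution
assert(index[branch.modules[2]] == 4)  -- the ReLU
assert(index[model.modules[2]] == 5)   -- the trailing Linear
print(n)  -- 5
```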

