"Pytorch" The four-play _ through Lenet pytorch Neural Network _
# author: hellcat
# time: 2018/2/11

import torch as t
import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(x.size()[0], -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

if __name__ == "__main__":
    net = LeNet()

    # ######## train the network #########
    from torch import optim

    # initialize the loss function & optimizer
    loss_fn = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

    for epoch in range(2):
        running_loss = 0.0
        # step counts the training iterations; trainloader yields batches of data and labels
        for step, data in enumerate(trainloader, 0):
            inputs, labels = data
            inputs, labels = t.autograd.Variable(inputs), t.autograd.Variable(labels)
            # zero the gradients
            optimizer.zero_grad()
            # forward
            outputs = net(inputs)
            # backward
            loss = loss_fn(outputs, labels)
            loss.backward()
            # update
            optimizer.step()
            running_loss += loss.data[0]
            if step % 2000 == 1999:
                print("[{0:d}, {1:5d}] loss: {2:.3f}".format(
                    epoch + 1, step + 1, running_loss / 2000))
                running_loss = 0.
    print("Finished Training")
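Before training, it can help to sanity-check the architecture with a dummy forward pass. A minimal sketch, assuming a CIFAR-10-sized input (the dummy tensor is an illustration, not part of the original):

import torch as t
from torch.autograd import Variable

net = LeNet()
dummy = Variable(t.randn(1, 3, 32, 32))  # one fake 3x32x32 CIFAR-10 image (assumed input size)
out = net(dummy)
print(out.size())  # torch.Size([1, 10]): one raw score per class

This also explains the 16 * 5 * 5 in fc1: two 5x5 convolutions and two 2x2 poolings shrink the 32x32 input to 16 feature maps of 5x5 each.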
This is an example of using LeNet to classify CIFAR-10. The data-processing part is not the focus and is not listed here (a hedged sketch of the omitted loader appears after the list below); the main goal is an intuitive picture of how Torch is used for classification. The training code breaks down as:
Initialize the network
Initialize the loss function & optimizer
Enter the step loop:
Zero the gradients
Forward propagation
Compute the loss for this step
Backward propagation
Update the parameters
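Since the data-processing code is omitted above, here is a hedged sketch of how trainloader might be built with torchvision; the normalization constants, batch size, and ./data path are assumptions, not taken from the original:

import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

# map PIL images to normalized tensors (per-channel mean/std of 0.5 is an assumption)
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
trainset = torchvision.datasets.CIFAR10(root="./data", train=True,
                                        download=True, transform=transform)
trainloader = DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)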
Since a PyTorch network is a class, the subsequent processing is not difficult (persistence aside). The prediction step is worth mentioning: we can call net(Variable(test_data)) directly, and the output is a Variable of raw per-class scores (CrossEntropyLoss applies the softmax internally during training). To get the predicted classes we just call:
_, predict = t.max(test_out, 1)
This works because torch.max combines the functions of max and argmax, returning both the maximum values and their indices:
>>> a = torch.randn(4, 4)
>>> a

 0.0692  0.3142  1.2513 -0.5428
 0.9288  0.8552 -0.2073  0.6409
 1.0695 -0.0101 -2.4507 -1.2230
 0.7426 -0.7666  0.4862 -0.6628
[torch.FloatTensor of size 4x4]

>>> torch.max(a, 1)
(
 1.2513
 0.9288
 1.0695
 0.7426
[torch.FloatTensor of size 4]
,
 2
 0
 0
 0
[torch.LongTensor of size 4]
)
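Putting the pieces together, here is a hedged sketch of test-set evaluation; testloader is assumed to be built the same way as trainloader above and is not in the original:

correct = 0
total = 0
for data in testloader:
    images, labels = data
    test_out = net(t.autograd.Variable(images))
    # max over dimension 1 (the class dimension) gives (values, indices)
    _, predict = t.max(test_out.data, 1)
    total += labels.size(0)
    correct += (predict == labels).sum()
print("Accuracy on the test set: {0:.2f}%".format(100.0 * correct / total))

If actual probabilities are wanted, F.softmax can be applied to the raw scores first; the argmax result is unchanged either way.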
No other advanced Torch features are used; the purpose of this article is simply to give a feel for the basic use of Torch neural networks.
"Pytorch" The four-play _ through Lenet pytorch Neural Network _