MSELoss is the mean squared error loss function. The formula is as follows:

loss_i = (x_i - y_i)^2

Here loss, x, and y all have the same dimensions; they can be vectors or matrices, and i is the element subscript.
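As a quick sanity check of the elementwise formula, the squared errors can be computed by hand with plain tensor operations (values here are arbitrary examples):

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([1.5, 2.0, 5.0])

# Elementwise squared error: loss_i = (x_i - y_i)^2
loss = (x - y) ** 2
print(loss)  # tensor([0.2500, 0.0000, 4.0000])
```

Each output element depends only on the corresponding pair (x_i, y_i), which is why loss, x, and y share the same shape.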
Many loss functions take two Boolean parameters, size_average and reduce. A loss function generally operates on a whole batch of data at once, so the unreduced loss is a vector with dimension (batch_size,).
The general format is as follows:
loss_fn = torch.nn.MSELoss(reduce=True, size_average=True)
Note the following two input parameters:
A. reduce=False: return the loss as a vector (one value per element).
B. reduce=True: return the loss as a scalar.
C. size_average=True: the scalar is loss.mean().
D. size_average=False: the scalar is loss.sum().
Note that size_average only takes effect when reduce=True.
By default, both parameters are true.
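In recent PyTorch versions, reduce and size_average have been deprecated and merged into a single reduction parameter taking 'none', 'mean', or 'sum'. A sketch of the correspondence:

```python
import torch

input = torch.randn(3, 4)
target = torch.randn(3, 4)

# reduce=False                   -> reduction='none': per-element losses, same shape as input
loss_none = torch.nn.MSELoss(reduction='none')(input, target)

# reduce=True, size_average=True -> reduction='mean': scalar mean (the default)
loss_mean = torch.nn.MSELoss(reduction='mean')(input, target)

# reduce=True, size_average=False -> reduction='sum': scalar sum
loss_sum = torch.nn.MSELoss(reduction='sum')(input, target)

print(loss_none.shape)  # torch.Size([3, 4])
print(loss_mean)        # equals loss_none.mean()
print(loss_sum)         # equals loss_none.sum()
```

The 'none' variant is useful when you want to weight or mask individual elements before reducing yourself.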
The following is a Python example:
# -*- coding: utf-8 -*-

import torch
import torch.optim as optim

loss_fn = torch.nn.MSELoss(reduce=False, size_average=False)
#loss_fn = torch.nn.MSELoss(reduce=True, size_average=True)
#loss_fn = torch.nn.MSELoss()
input = torch.autograd.Variable(torch.randn(3, 4))
target = torch.autograd.Variable(torch.randn(3, 4))
loss = loss_fn(input, target)
print(input); print(target); print(loss)
print(input.size(), target.size(), loss.size())
Run the example yourself to see the output.
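The example above imports torch.optim but never uses it; in practice MSELoss usually drives a gradient step. A minimal sketch, using a hypothetical toy linear model and random data:

```python
import torch
import torch.optim as optim

# Toy model and optimizer (illustrative names, not from the original example)
model = torch.nn.Linear(4, 1)
opt = optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()  # default: scalar mean over all elements

x = torch.randn(8, 4)
y = torch.randn(8, 1)

opt.zero_grad()
loss = loss_fn(model(x), y)  # scalar, so backward() needs no arguments
loss.backward()
opt.step()
print(loss.item())
```

The scalar reduction matters here: backward() on a non-scalar loss would require an explicit gradient argument.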