1. KeyError: class 'torch.cuda.ByteTensor'
Solution
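A minimal sketch of the usual workaround, assuming the ByteTensor holds image-like data (the tensor names here are illustrative, and the snippet is written against the current tensor API):

```python
import torch
import torch.nn.functional as F

# conv2d is implemented for floating-point tensors only, so cast a
# ByteTensor (e.g. raw uint8 image data) to float before convolving.
img = torch.randint(0, 256, (1, 1, 8, 8), dtype=torch.uint8)  # ByteTensor
weight = torch.randn(4, 1, 3, 3)                              # float kernel

out = F.conv2d(img.float(), weight)   # the .float() cast avoids the error
print(out.shape)                      # torch.Size([1, 4, 6, 6])
```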
There is little about this error online; the only fix found is in the forum thread "ByteTensor not working with F.conv2d?": most operations in PyTorch are implemented only for FloatTensor and DoubleTensor, so cast the input to one of those types.

2. RuntimeError: CUDNN_STATUS_BAD_PARAM
Solution
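As a sketch, the usual fix is to supply the missing batch dimension so the input is 4-D; the shapes below are illustrative:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)

img = torch.randn(3, 32, 32)   # a single image: (C, H, W) -- no batch dim
batched = img.unsqueeze(0)     # (1, 3, 32, 32): the (N, C, H, W) layout

out = conv(batched)
print(out.shape)               # torch.Size([1, 8, 30, 30])
```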
The input size is incorrect; a convolution layer expects input of shape (N, C, H, W).

3. TypeError: max() got an unexpected keyword argument 'keepdim'
The cause is unclear; likely the installed PyTorch version predates the keepdim argument.
Solution
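A sketch of the working call, simply dropping the keepdim keyword:

```python
import torch

x = torch.randn(4, 5)

# Older PyTorch versions reject the keepdim keyword; calling without it
# works everywhere and returns (values, indices) along the given dim.
values, indices = torch.max(x, 1)
print(values.shape)   # torch.Size([4])
```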
Call torch.max(input, dim) instead of torch.max(input, dim, keepdim).

4. RuntimeError: getCudnnDataType() not supported for B
The error occurs when Module.forward() is called and the conv2d operation is evaluated.
Solution
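A minimal sketch, assuming the offending input is a ByteTensor that needs casting before the forward pass (names are illustrative; in PyTorch 0.4+ tensors and Variables are merged, so no explicit Variable wrapper is needed):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 4, 3), nn.ReLU())

# Byte input fails inside forward(); convert to an accepted dtype first.
raw = torch.randint(0, 256, (1, 1, 8, 8), dtype=torch.uint8)

out = net(raw.float())   # float works; double works via net.double()
print(out.dtype)         # torch.float32
# half precision is also accepted, but for cuDNN only on the GPU:
# out16 = net.half().cuda()(raw.half().cuda())
```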
The input to the network must be a float, double, or half tensor, and must be wrapped in a Variable.

5. CUDA out of memory
An out-of-memory error appears after some time of training, which means the memory footprint grows as training proceeds.
Reason
The loss or the network output is being accumulated across iterations, so the computation graph keeps growing.
Solution
If you need to accumulate the loss during the training loop, accumulate loss.data[0] (in current PyTorch, loss.item()) so that only a plain Python number, not the graph-carrying tensor, is kept.
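A sketch of the pattern, assuming a simple model and training loop (in PyTorch 0.4+, loss.item() plays the role of the older loss.data[0]):

```python
import torch

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

running = 0.0
for _ in range(3):
    x, y = torch.randn(8, 10), torch.randn(8, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Accumulating `loss` itself would keep every iteration's graph alive;
    # .item() detaches to a plain float, so no graph is retained.
    running += loss.item()

print(running / 3)
```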