1. Shared memory declarations
If shared memory is declared statically, e.g.
    __shared__ float myshared[256];
the size is fixed at compile time, so you do not need to pass a shared-memory size when launching the kernel.
If instead you declare it dynamically,
    extern __shared__ float myshared[];
then you must pass the shared-memory size (in bytes) as the third argument of the kernel launch configuration, <<<grid, block, sharedBytes>>>.
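As a minimal sketch of the two styles (kernel names and sizes here are illustrative, not from the original notes):

```cuda
#include <cuda_runtime.h>

// Static shared memory: size fixed at compile time.
__global__ void staticKernel(float *out) {
    __shared__ float buf[256];          // no launch parameter needed
    buf[threadIdx.x] = (float)threadIdx.x;
    __syncthreads();
    out[threadIdx.x] = buf[threadIdx.x];
}

// Dynamic shared memory: size supplied at launch time.
__global__ void dynamicKernel(float *out) {
    extern __shared__ float buf[];      // size comes from the launch config
    buf[threadIdx.x] = (float)threadIdx.x;
    __syncthreads();
    out[threadIdx.x] = buf[threadIdx.x];
}

int main() {
    float *d_out;
    cudaMalloc(&d_out, 256 * sizeof(float));
    staticKernel<<<1, 256>>>(d_out);                       // no third argument
    dynamicKernel<<<1, 256, 256 * sizeof(float)>>>(d_out); // bytes of shared memory
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```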
2. Forgetting to allocate device memory
If a kernel is launched on a device pointer that was never allocated with cudaMalloc, and you do not check CUDA error codes, the code still compiles and appears to run "successfully" — but the kernel never actually launches. You only notice the wrong results afterwards, with no indication of where the failure occurred.
So whenever a kernel produces unexpected output, first check that every device pointer it uses was actually allocated, and check the return codes of the CUDA API calls and of the kernel launch itself.
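A common pattern for catching such silent failures (a sketch; the macro name CUDA_CHECK is my own) is to wrap every CUDA runtime call in an error-checking macro and to query cudaGetLastError() after each kernel launch:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Abort with a readable message if a CUDA runtime call fails.
#define CUDA_CHECK(call)                                              \
    do {                                                              \
        cudaError_t err = (call);                                     \
        if (err != cudaSuccess) {                                     \
            fprintf(stderr, "CUDA error %s at %s:%d\n",               \
                    cudaGetErrorString(err), __FILE__, __LINE__);     \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

__global__ void fillOnes(float *p) { p[threadIdx.x] = 1.0f; }

int main() {
    float *d_p = nullptr;
    CUDA_CHECK(cudaMalloc(&d_p, 32 * sizeof(float)));
    fillOnes<<<1, 32>>>(d_p);
    CUDA_CHECK(cudaGetLastError());       // catches launch failures
    CUDA_CHECK(cudaDeviceSynchronize());  // catches errors during execution
    CUDA_CHECK(cudaFree(d_p));
    return 0;
}
```

Without the checks, a failed cudaMalloc or launch would pass silently and only show up as garbage output.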
3. Unrolling the last warp of a reduction
        __syncthreads();
    }
    // The 32 threads of a warp execute in lockstep (on pre-Volta hardware),
    // so these last six steps need no __syncthreads(); sdata must be
    // declared volatile so the partial sums are not cached in registers.
    if (tid < 32) {
        if (blockSize >= 64) sdata[tid] += sdata[tid + 32];
        if (blockSize >= 32) sdata[tid] += sdata[tid + 16];
        if (blockSize >= 16) sdata[tid] += sdata[tid + 8];
        if (blockSize >=  8) sdata[tid] += sdata[tid + 4];
        if (blockSize >=  4) sdata[tid] += sdata[tid + 2];
        if (blockSize >=  2) sdata[tid] += sdata[tid + 1];
    }
    if (tid == 0) c[blockIdx.x] = sdata[0];
Unrolling the last warp this way removes the loop overhead and the extra __syncthreads() calls for the final iterations.
These are stupid mistakes I often make myself when writing CUDA.
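For context, the snippet in item 3 is the tail of a block-level sum reduction. A self-contained sketch of the whole kernel (the name reduceSum and the exact indexing are illustrative, not from the original notes):

```cuda
#include <cuda_runtime.h>

template <unsigned int blockSize>
__global__ void reduceSum(const float *g_in, float *c, unsigned int n) {
    extern __shared__ float sdata[];
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x * blockSize + tid;

    // Load one element per thread (0 for out-of-range threads).
    sdata[tid] = (i < n) ? g_in[i] : 0.0f;
    __syncthreads();

    // Tree reduction in shared memory down to 64 partial sums.
    for (unsigned int s = blockSize / 2; s > 32; s >>= 1) {
        if (tid < s) sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    // Unrolled last warp: lockstep execution (pre-Volta) makes
    // __syncthreads() unnecessary; volatile prevents register caching.
    if (tid < 32) {
        volatile float *v = sdata;
        if (blockSize >= 64) v[tid] += v[tid + 32];
        if (blockSize >= 32) v[tid] += v[tid + 16];
        if (blockSize >= 16) v[tid] += v[tid + 8];
        if (blockSize >=  8) v[tid] += v[tid + 4];
        if (blockSize >=  4) v[tid] += v[tid + 2];
        if (blockSize >=  2) v[tid] += v[tid + 1];
    }
    if (tid == 0) c[blockIdx.x] = sdata[0];
}
```

Note that on Volta and later GPUs, warp threads are no longer guaranteed to execute in lockstep, so __syncwarp() calls (or warp-shuffle intrinsics) should be used between the unrolled steps instead of relying on volatile alone.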