Caffe defines CUDA_KERNEL_LOOP as a macro, in common.hpp:
#define CUDA_KERNEL_LOOP(i, n) \
  for (int i = blockIdx.x * blockDim.x + threadIdx.x; \
       i < (n); \
       i += blockDim.x * gridDim.x)
First, let's look at how Caffe chooses the dimensions of its threads and thread blocks. Also in common.hpp we find:

CAFFE_CUDA_NUM_THREADS
CAFFE_GET_BLOCKS(const int N)

The launch configuration is obviously one-dimensional.
Looking again at the body of CUDA_KERNEL_LOOP:

for (int i = blockIdx.x * blockDim.x + threadIdx.x;
     i < (n);
     i += blockDim.x * gridDim.x)
blockDim.x * gridDim.x is the total number of threads in the grid, and n is the total number of elements the kernel function must process. Sometimes n is greater than blockDim.x * gridDim.x, so one thread per element is not enough. With the loop above, each thread serially (via the for loop) processes several elements, striding by the total thread count. This "grid-stride loop" is in fact a frequently used trick, and it is worth learning.
Let's take a look at the implementation of one kernel function that uses it, mul_kernel:
template <typename Dtype>
__global__ void mul_kernel(const int n, const Dtype* a,
    const Dtype* b, Dtype* y) {
  CUDA_KERNEL_LOOP(index, n) {
    y[index] = a[index] * b[index];
  }
}
This is simply the element-wise product of two vectors (not a dot product — there is no reduction). Because the vector length may be greater than the total number of threads in the kernel's grid, some threads process several elements serially.
Caffe source code analysis: math_functions.cu