Our algorithm consists of multiple kernels. If we simply used two global-memory buffers to pass intermediate data between them, a lot of time would be wasted. For a single kernel, computation can hide global-memory latency only when the arithmetic workload is large; if a kernel merely reads a piece of data, performs one operation, and writes it back, most of its time is spent waiting on memory transfers.
In that case, texture memory, which is cached, can be used for the input. The output, of course, must still be written to global memory.
// Declare texture references and pitched linear memory
texture<float, 2, cudaReadModeElementType> interSrcTex;
texture<float, 2, cudaReadModeElementType> interDstTex;
float *d_interSrcData, *d_interDstData;
// Init
cudaChannelFormatDesc floatTexDesc = cudaCreateChannelDesc<float>();
cudaMallocPitch((void **)&d_interSrcData, &pitch, PPL * sizeof(float), LPF);
cudaMallocPitch((void **)&d_interDstData, &pitch, PPL * sizeof(float), LPF);
cudaBindTexture2D(0, interSrcTex, d_interSrcData, floatTexDesc, PPL, LPF, pitch);
cudaBindTexture2D(0, interDstTex, d_interDstData, floatTexDesc, PPL, LPF, pitch);
// Note: a texture reference cannot be passed as a kernel argument; it must live
// at file scope, so each kernel names its input texture directly via tex2D().
__global__ void func1(float *outData)   // reads interSrcTex
{
    ...
}
__global__ void func2(float *outData)   // reads interDstTex
{
    ...
}
// Call the kernels like this, ping-ponging between the two buffers
int main()
{
    func1<<<grid, block>>>(d_interDstData);  // reads interSrcTex, writes d_interDstData
    func2<<<grid, block>>>(d_interSrcData);  // reads interDstTex, writes d_interSrcData
}
In this way, at least the intermediate data is read through the texture cache, which is faster than uncached global-memory reads.
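On newer CUDA versions (5.0 and later), texture objects (`cudaTextureObject_t`) supersede texture references and, unlike them, can be passed as kernel parameters, which matches the original intent of choosing a kernel's input texture at launch time. Below is a minimal sketch of this approach; the kernel body, the helper name `makeTexObj`, and the doubling operation are placeholders, not part of the algorithm above:

```cuda
#include <cuda_runtime.h>

// The texture object is an ordinary kernel argument, so one kernel can be
// launched twice with the source and destination roles swapped.
__global__ void process(cudaTextureObject_t inTex, float *outData,
                        size_t outPitch, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        float v = tex2D<float>(inTex, x, y);               // cached read
        float *row = (float *)((char *)outData + y * outPitch);
        row[x] = 2.0f * v;                                 // placeholder operation
    }
}

// Wrap an existing pitched linear buffer in a texture object.
cudaTextureObject_t makeTexObj(float *devPtr, size_t pitch, int width, int height)
{
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypePitch2D;
    resDesc.res.pitch2D.devPtr = devPtr;
    resDesc.res.pitch2D.desc = cudaCreateChannelDesc<float>();
    resDesc.res.pitch2D.width = width;
    resDesc.res.pitch2D.height = height;
    resDesc.res.pitch2D.pitchInBytes = pitch;

    cudaTextureDesc texDesc = {};
    texDesc.readMode = cudaReadModeElementType;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, NULL);
    return tex;
}
```

With this pattern, func1 and func2 collapse into one kernel launched twice, e.g. first with the texture over d_interSrcData writing d_interDstData, then the other way around.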