Improving Reduction Summation on the GPU Using CUDA


We can never be satisfied with a program that merely runs correctly. The reduction summation program described in the previous blog post needs to be optimized.

1. Make the best use of the hardware, and don't forget the CPU!

During the second part of the reduction summation, the amount of data to be calculated has already been greatly reduced by the time the second kernel function runs; at that point it equals the number of blocks in the first launch, one partial sum per block. Given the difference in architecture between the CPU and the GPU, CPUs are designed for running a small number of potentially quite complex tasks, while GPUs are designed for running a large number of potentially quite simple tasks. When you have a small amount of data, don't forget the CPU, which can provide a much faster computing rate than the GPU in that regime.

We can delete the second kernel function, copy each block's partial sum back to the CPU, and add them up on the CPU:

cudaMemcpy(a, dev_a, blocksPerGrid * sizeof(int), cudaMemcpyDeviceToHost);  // copy one partial sum per block
int c = 0;
for (int i = 0; i < blocksPerGrid; i++) {
    c += a[i];  // finish the reduction on the CPU
}
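For context, here is a minimal sketch of the kind of first-stage kernel this assumes: each block reduces its slice of the input into one partial sum, which the snippet above then adds up on the CPU. The kernel and its names (sumKernel, dev_a, THREADS_PER_BLOCK) are illustrative assumptions, not code from the original post; it must be launched with THREADS_PER_BLOCK threads per block (a power of two).

#define THREADS_PER_BLOCK 256

// Illustrative first-stage kernel (an assumption, not the original post's code):
// each block writes one partial sum into dev_a[blockIdx.x].
__global__ void sumKernel(const int *in, int *dev_a, int n) {
    __shared__ int cache[THREADS_PER_BLOCK];
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    // Grid-stride loop: each thread accumulates its share of the input.
    int sum = 0;
    for (int i = tid; i < n; i += blockDim.x * gridDim.x)
        sum += in[i];
    cache[threadIdx.x] = sum;
    __syncthreads();

    // Tree reduction in shared memory (requires blockDim.x to be a power of two).
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            cache[threadIdx.x] += cache[threadIdx.x + s];
        __syncthreads();
    }

    if (threadIdx.x == 0)
        dev_a[blockIdx.x] = cache[0];  // one partial sum per block
}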

2. The appropriate number of threads per block: more is not always better.

As we all know, if there are too few threads, the GPU can't hide memory latency behind useful work. Therefore we'd better not choose too few threads.

However, when choosing the number of threads, it makes a difference whether we have synchronization points in the kernel. For the number of threads per block, more is not always better.

The time to execute a given block is undefined. A block cannot be retired from the SM until it has completed its entire execution. Sometimes all the other warps sit idle, waiting for a single warp to complete, which leaves the SM idle as well.

It follows that the larger the thread block, the more potential there is to wait for a slow warp to catch up. As a general rule, aim for 100% utilization across all levels of the hardware. We had better aim for either 192 or 256 threads per block, or you can look up the utilization tables and select the smallest number of threads that gives the highest device utilization.
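As an aside (this API is not mentioned in the original post), the CUDA runtime's occupancy calculator can also suggest a block size for a specific kernel; sumKernel here is the illustrative kernel sketched above:

int minGridSize = 0, blockSize = 0;
// Ask the runtime which block size maximizes occupancy for this kernel.
// The trailing 0, 0 mean no dynamic shared memory and no block-size cap.
cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, sumKernel, 0, 0);
printf("suggested threads per block: %d\n", blockSize);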

3. Not too many branches

The hardware can only fetch a single instruction stream per warp, so when branches appear, the threads that don't meet the condition stall, and device utilization drops. However, the actual scheduler, in terms of instruction execution, is half-warp based, not warp based. Therefore we can arrange the divergence to fall on a half-warp (16-thread) boundary, and then the device can execute both sides of the branch condition:

if ((thread_idx % 32) < 16) {
    // do something
} else {
    // do something else
}

However, this only works when the data is laid out contiguously in memory. Sometimes we can pad the end of the array with zeros, as the previous blog post mentioned, up to a standard length that is an integral multiple of 32. That helps keep the number of divergent branches to a minimum.
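To make the padding idea concrete, here is a small sketch (my own illustration, not code from the original post): round the length up to a multiple of 32 and zero-fill the tail. For a summation the zeros contribute nothing, so the kernel no longer needs a bounds-check branch:

int padded = (n + 31) & ~31;                // round n up to a multiple of 32

int *d_in;
cudaMalloc(&d_in, padded * sizeof(int));
cudaMemset(d_in, 0, padded * sizeof(int));  // zero the whole buffer, tail included
cudaMemcpy(d_in, h_in, n * sizeof(int), cudaMemcpyHostToDevice);
// Kernels may now assume a length that is a multiple of 32; the zero
// padding adds nothing to the sum, so the per-thread bounds check goes away.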
