OpenCL: copying an array from host memory to device memory


I wanted to optimize the code from the previous post, but the improvement turned out to be small. Still, the technique involved is worth writing down.

The original idea was to move the computation of the spatial (domain) term from the previous post onto the CPU. Since this term is identical for every work-item, reading it from a precomputed table can further reduce kernel execution time.

My plan was to upload this data to device memory before the kernel runs, so that each work-item performs a read instead of a computation. Whether a memory read is actually faster than recomputing the value is open to question, but for a complicated expression I expect the lookup to win, and this small table is cheaper to compute on the CPU than on the GPU. Data size also matters, of course: the host-to-device copy itself takes time.

1. C++ code
...
int ksize = 11;
float sigma_d = 3.0;
float *dkl = new float[ksize * ksize];
for (int i = -ksize / 2; i <= ksize / 2; i++)
{
    for (int j = -ksize / 2; j <= ksize / 2; j++)
    {
        dkl[(i + ksize / 2) * ksize + (j + ksize / 2)] = -(i * i + j * j) / (2 * sigma_d * sigma_d);
    }
}
cl_mem d_dkl;
// Create a read-only device buffer and copy the precomputed table into it.
d_dkl = clCreateBuffer(context, CL_MEM_READ_ONLY, ksize * ksize * sizeof(float), NULL, NULL);
clEnqueueWriteBuffer(commandQueue, d_dkl, CL_TRUE, 0, ksize * ksize * sizeof(float), dkl, 0, NULL, NULL);
...
errNum |= clSetKernelArg(kernel, 2, sizeof(cl_mem), &d_dkl);
errNum |= clSetKernelArg(kernel, 3, sizeof(int), &ksize);  // ksize is a plain int, not a cl_mem
...
delete[] dkl;
...

The key points here are the usage of the clCreateBuffer and clEnqueueWriteBuffer functions.

2. Kernel code
const sampler_t sampler = CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;

kernel void bilateralBlur(read_only image2d_t src, write_only image2d_t dst,
                          __constant float* dkl, int ksize)
{
    int x = (int)get_global_id(0);
    int y = (int)get_global_id(1);
    if (x >= get_image_width(src) || y >= get_image_height(src))
        return;

    float sigma_r = 0.1f;
    float alpha = 0.2f;
    float4 fij = read_imagef(src, sampler, (int2)(x, y));
    float4 fkl;
    float4 rkl;
    float4 wkl;
    int index = 0;
    float4 numerator = (float4)(0.0f, 0.0f, 0.0f, 0.0f);
    float4 denominator = (float4)(1.0f, 1.0f, 1.0f, 1.0f);
    for (int K = -ksize / 2; K <= ksize / 2; K++)
    {
        for (int L = -ksize / 2; L <= ksize / 2; L++)
        {
            fkl = read_imagef(src, sampler, (int2)(x + K, y + L));
            rkl.x = -(fij.x - fkl.x) * (fij.x - fkl.x) / (2 * sigma_r * sigma_r);
            rkl.y = -(fij.y - fkl.y) * (fij.y - fkl.y) / (2 * sigma_r * sigma_r);
            rkl.z = -(fij.z - fkl.z) * (fij.z - fkl.z) / (2 * sigma_r * sigma_r);
            // dkl already holds the negative spatial exponent, so it is added
            // directly; negating it again would flip the sign of the weight.
            wkl.x = exp(dkl[index] + rkl.x);
            wkl.y = exp(dkl[index] + rkl.y);
            wkl.z = exp(dkl[index] + rkl.z);
            index++;
            numerator.x += fkl.x * wkl.x;
            numerator.y += fkl.y * wkl.y;
            numerator.z += fkl.z * wkl.z;
            denominator.x += wkl.x;
            denominator.y += wkl.y;
            denominator.z += wkl.z;
        }
    }
    float4 gij = (float4)(0.0f, 0.0f, 0.0f, 1.0f);
    if (denominator.x > 0 && denominator.y > 0 && denominator.z > 0)
    {
        gij.x = numerator.x / denominator.x;
        gij.y = numerator.y / denominator.y;
        gij.z = numerator.z / denominator.z;
        gij.x = fij.x * alpha + gij.x * (1.0f - alpha);
        gij.y = fij.y * alpha + gij.y * (1.0f - alpha);
        gij.z = fij.z * alpha + gij.z * (1.0f - alpha);
    }
    write_imagef(dst, (int2)(x, y), gij);
}

Compared with the code of the previous post, the main change is that dkl is read from the precomputed buffer instead of being calculated in every work-item, and ksize is now passed in as a kernel argument.

3. Results

Compared with 3.42 ms in the previous article, the kernel time improved to a few tenths of a millisecond. However, once the CPU precomputation and the extra host-to-device copy are taken into account, the overall gain is smaller, and may even be slightly worse.

Of course, the computation here is simple; for more complex per-item expressions, this optimization is still worth considering.

 

Next, I will look at memory-access optimization to increase the work granularity.

 

 

Code: http://download.csdn.net/download/qq_33892166/9771206

 
