Apple's Metal Project


Basic buffers

The setVertexBytes:length:atIndex: method cannot be used when the data passed to the vertex shader exceeds 4096 bytes; for smaller data it should be preferred, because it avoids managing a buffer and performs well.
For larger data, the argument should be an MTLBuffer, whose memory can be accessed by the GPU.
_vertexBuffer.contents returns a pointer through which the CPU can access that same memory; in other words, the memory is shared by the CPU and the GPU.
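Both paths might look like the following sketch; renderEncoder, vertices, vertexData, and dataSize are illustrative names, not from the sample:

```objectivec
// Small, transient data (<= 4096 bytes): let Metal manage the copy.
[renderEncoder setVertexBytes:vertices
                       length:sizeof(vertices)
                      atIndex:0];

// Larger or persistent data: allocate an MTLBuffer once...
_vertexBuffer = [_device newBufferWithLength:dataSize
                                     options:MTLResourceStorageModeShared];
// ...fill it through the CPU-visible pointer...
memcpy(_vertexBuffer.contents, vertexData, dataSize);
// ...and bind it to the same argument table index.
[renderEncoder setVertexBuffer:_vertexBuffer offset:0 atIndex:0];
```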

Basic texturing

MTLPixelFormatBGRA8Unorm is the pixel format: 8-bit blue, green, red, and alpha channels, each an unsigned value normalized to [0, 1].
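Creating and filling a texture in this format might look like the following sketch; the 512x512 size and the pixelBytes source pointer are illustrative assumptions:

```objectivec
// Describe a 2D texture in BGRA8Unorm format (sizes are illustrative).
MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                       width:512
                                                      height:512
                                                   mipmapped:NO];
id<MTLTexture> texture = [_device newTextureWithDescriptor:desc];

// Upload pixel data; bytesPerRow = 4 bytes per BGRA8 pixel * width.
MTLRegion region = MTLRegionMake2D(0, 0, 512, 512);
[texture replaceRegion:region
           mipmapLevel:0
             withBytes:pixelBytes
           bytesPerRow:4 * 512];
```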

Texture coordinates are normalized to [0, 1] in each dimension; in Metal, (0, 0) is the top-left corner of the texture and (1, 1) the bottom-right.

Reading a texel from a texture is also known as sampling.
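A minimal fragment shader that samples a texel might look like this in the Metal shading language; the struct layout and texture index are illustrative:

```metal
#include <metal_stdlib>
using namespace metal;

// Interpolated values arriving from the rasterizer.
struct RasterizerData {
    float4 position [[position]];
    float2 texCoord;
};

fragment float4 samplingShader(RasterizerData in [[stage_in]],
                               texture2d<half> colorTexture [[texture(0)]])
{
    // A sampler with linear filtering for magnification and minification.
    constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);

    // Sample the texel at the interpolated texture coordinate.
    const half4 colorSample = colorTexture.sample(textureSampler, in.texCoord);
    return float4(colorSample);
}
```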

Hello Compute

Data-parallel computations using the GPU.

In the history of GPU development, the parallel-processing architecture has remained unchanged, while the programmability of the processing cores has grown steadily stronger. This allowed the GPU to move from a fixed-function pipeline to a programmable pipeline, and also made general-purpose GPU programming (GPGPU) feasible.

An MTLComputePipelineState object can be created directly from a single kernel function.

    // Load the compute kernel function from the library
    id<MTLFunction> kernelFunction = [defaultLibrary newFunctionWithName:@"grayscaleKernel"];

    // Create a compute pipeline state from the kernel function
    NSError *error = nil;
    _computePipelineState = [_device newComputePipelineStateWithFunction:kernelFunction
                                                                   error:&error];
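A grayscale kernel of the kind the pipeline above expects might look like the following sketch; the texture indices are illustrative, and the Rec. 601 luminance weights are a common choice, not the only one:

```metal
#include <metal_stdlib>
using namespace metal;

kernel void grayscaleKernel(texture2d<half, access::read>  inTexture  [[texture(0)]],
                            texture2d<half, access::write> outTexture [[texture(1)]],
                            uint2 gid [[thread_position_in_grid]])
{
    // Threadgroups may extend past the image edge; skip those threads.
    if (gid.x >= outTexture.get_width() || gid.y >= outTexture.get_height())
        return;

    // Weight the color channels by their perceived luminance.
    half4 inColor = inTexture.read(gid);
    half  gray    = dot(inColor.rgb, half3(0.299h, 0.587h, 0.114h));
    outTexture.write(half4(gray, gray, gray, 1.0h), gid);
}
```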

Parallel processing of image blocks

    // Set the compute kernel's threadgroup size to 16x16
    _threadgroupSize = MTLSizeMake(16, 16, 1);

    // Calculate the number of rows and columns of threadgroups given the
    // size of the input image. Round up so we cover the entire image
    // (or more) and process every pixel.
    _threadgroupCount.width  = (_inputTexture.width  + _threadgroupSize.width  - 1) / _threadgroupSize.width;
    _threadgroupCount.height = (_inputTexture.height + _threadgroupSize.height - 1) / _threadgroupSize.height;

    // Since we're only dealing with a 2D data set, set depth to 1
    _threadgroupCount.depth = 1;

    [computeEncoder dispatchThreadgroups:_threadgroupCount
                   threadsPerThreadgroup:_threadgroupSize];
CPU and GPU Synchronization

The CPU and GPU are two asynchronous processors that share memory, so they need to work in parallel while avoiding reading and writing the same data at the same time.

If, within each frame, the CPU and GPU never work at the same time, simultaneous reads and writes are avoided, but performance suffers: each processor sits idle while the other works.

If instead the CPU and GPU read and write the same data at the same time, a data race results.

Multiple buffers can be used to keep both processors busy while avoiding simultaneous reads and writes: in any given frame, the CPU and GPU work on different buffers.
When the GPU finishes executing a command buffer, that buffer's completed handler is called.

    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
        dispatch_semaphore_signal(block_sema);
    }];
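The handler above pairs with a wait at the start of each frame. A sketch of the waiting side, with illustrative names (_inFlightSemaphore, _currentBuffer, updateState) and a triple-buffering count of 3:

```objectivec
static const NSUInteger kMaxBuffersInFlight = 3;

// In the initializer:
//     _inFlightSemaphore = dispatch_semaphore_create(kMaxBuffersInFlight);

- (void)drawInMTKView:(MTKView *)view {
    // Block until the GPU has finished with the oldest in-flight buffer.
    dispatch_semaphore_wait(_inFlightSemaphore, DISPATCH_TIME_FOREVER);

    // Rotate to the next buffer; the CPU writes it while the GPU
    // still reads the previous ones.
    _currentBuffer = (_currentBuffer + 1) % kMaxBuffersInFlight;
    [self updateState]; // writes into _vertexBuffers[_currentBuffer]

    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
    __block dispatch_semaphore_t block_sema = _inFlightSemaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
        // The GPU is done with this frame's buffer; release one slot.
        dispatch_semaphore_signal(block_sema);
    }];

    // ... encode rendering commands using _vertexBuffers[_currentBuffer] ...
    [commandBuffer commit];
}
```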
LOD with Function Specialization

Level of detail (LOD)

The more realistic the detail, the more resources rendering consumes, so a tradeoff must be made between performance and richness of detail.

    if (highLOD) {
        // Render high-quality model
    } else if (mediumLOD) {
        // Render medium-quality model
    } else if (lowLOD) {
        // Render low-quality model
    }

However, code written this way performs poorly on the GPU. The number of threads the GPU can run in parallel depends on the number of registers allocated per function, and the compiler must allocate the maximum number of registers a function might ever use, even for branches that are never executed. Branch statements therefore significantly increase register pressure and significantly reduce GPU parallelism.
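Function specialization avoids this cost: the branch condition becomes a function constant, and the compiler builds one branch-free variant of the function per LOD level. A sketch of the host side, with an illustrative function name and constant index:

```objectivec
// In the shader, the condition is declared as a function constant:
//     constant bool isHighLOD [[function_constant(0)]];
// At pipeline-creation time, bake in a concrete value for it:
MTLFunctionConstantValues *constantValues = [MTLFunctionConstantValues new];
bool isHighLOD = true;
[constantValues setConstantValue:&isHighLOD
                            type:MTLDataTypeBool
                         atIndex:0];

NSError *error = nil;
id<MTLFunction> specializedFunction =
    [defaultLibrary newFunctionWithName:@"lodVertexShader"
                         constantValues:constantValues
                                  error:&error];
// Build one pipeline state per LOD level from its specialized function.
// Dead branches are eliminated at compile time, so no registers are
// reserved for code that can never run.
```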
