Introduction to CUDA C Programming - Programming Interface (3.2): The CUDA C Runtime


The CUDA C runtime is implemented in the cudart library, which is linked to the application either statically, via cudart.lib or libcudart.a, or dynamically, via cudart.dll or libcudart.so. If the application links dynamically, the CUDA runtime library (cudart.dll or libcudart.so) must be shipped in the application's installation package.
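With nvcc, the linking mode can be selected via its --cudart option (static linking is nvcc's default); the file name app.cu here is just a placeholder:

```shell
# Link the CUDA runtime statically (nvcc's default):
nvcc app.cu -o app --cudart static

# Link dynamically instead; libcudart.so must then be shipped
# alongside the application:
nvcc app.cu -o app --cudart shared
```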

All CUDA runtime function names are prefixed with cuda.

As mentioned in the heterogeneous programming section, the CUDA programming model assumes a system composed of a host and a device, each with its own separate memory. The device memory section gives an overview of the runtime functions used to manage device memory.

The shared memory section describes how to use the shared memory introduced in the thread hierarchy section to maximize performance.

The page-locked host memory section introduces page-locked (pinned) host memory, which is required to overlap kernel execution with data transfers between the host and the device.
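As a sketch of why page-locked memory matters, the snippet below allocates pinned host memory with cudaHostAlloc so that cudaMemcpyAsync can overlap with kernel execution in a stream; the kernel name myKernel and its empty body are placeholders of our own:

```cuda
#include <cuda_runtime.h>

// Placeholder kernel, assumed for illustration only.
__global__ void myKernel(float *data) { }

int main(void) {
    const size_t n = 1 << 20;
    float *h_pinned = NULL, *d_buf = NULL;
    cudaStream_t stream;

    // Page-locked (pinned) host allocation; ordinary malloc'd memory
    // would force cudaMemcpyAsync to fall back to synchronous behavior.
    cudaHostAlloc(&h_pinned, n * sizeof(float), cudaHostAllocDefault);
    cudaMalloc(&d_buf, n * sizeof(float));
    cudaStreamCreate(&stream);

    // The copy and the kernel are issued to the same stream, so they
    // run in order, but both can overlap with work in other streams.
    cudaMemcpyAsync(d_buf, h_pinned, n * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    myKernel<<<(unsigned)((n + 255) / 256), 256, 0, stream>>>(d_buf);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(d_buf);
    cudaFreeHost(h_pinned);
    return 0;
}
```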

The asynchronous concurrent execution section describes the concepts and APIs used to enable asynchronous concurrent execution at various levels in the system.

The multi-device system section shows how the programming model extends to a system with multiple devices attached to the same host.
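A minimal sketch of working with multiple devices: enumerate them with cudaGetDeviceCount and direct subsequent runtime calls at one of them with cudaSetDevice.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) found\n", count);

    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);  // runtime calls below now target device `dev`

        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("device %d: %s\n", dev, prop.name);

        // This allocation lands on device `dev`.
        float *d_buf = NULL;
        cudaMalloc(&d_buf, 1024);
        cudaFree(d_buf);
    }
    return 0;
}
```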

The error checking section describes how to properly check errors generated by runtime calls.
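Every runtime function returns a cudaError_t, and kernel launches report errors only through a later cudaGetLastError call. A common checking pattern (the macro name CUDA_CHECK is our own choice, not part of the API) looks like this:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Wrap each runtime call so a failure reports where it happened.
#define CUDA_CHECK(call)                                              \
    do {                                                              \
        cudaError_t err = (call);                                     \
        if (err != cudaSuccess) {                                     \
            fprintf(stderr, "CUDA error %s at %s:%d\n",               \
                    cudaGetErrorString(err), __FILE__, __LINE__);     \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

int main(void) {
    float *d_buf = NULL;
    CUDA_CHECK(cudaMalloc(&d_buf, 256 * sizeof(float)));

    // Kernel launches return no error code themselves; after a launch
    // such as  myKernel<<<1, 256>>>(d_buf);  query the error explicitly:
    // CUDA_CHECK(cudaGetLastError());

    CUDA_CHECK(cudaFree(d_buf));
    return 0;
}
```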

The call stack section describes the runtime functions used to manage the CUDA C call stack.

The texture and surface memory section presents the texture and surface memory spaces, which provide another way to access device memory and also expose a subset of the GPU's texturing hardware.

The graphics interoperability section introduces the runtime functions that provide interoperability with the two main graphics APIs, OpenGL and Direct3D.

3.2.1 Initialization

There is no explicit initialization function for the runtime; it initializes the first time a runtime function is called (more specifically, any function other than those in the device and version management sections of the reference manual). This must be kept in mind when timing runtime calls and when interpreting the error code returned by the first call into the runtime.

During initialization, the runtime creates a CUDA context for each device in the system (the context section describes CUDA contexts).
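Because initialization is lazy, a common idiom is to issue a cheap runtime call up front, such as cudaFree(0), so that context creation does not distort the timing of later calls; a minimal sketch:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    // Frees nothing, but forces runtime initialization and context
    // creation, so the cost is paid here rather than in a timed region.
    cudaFree(0);

    // Subsequent calls run against an already-created context, so
    // timing them measures only their own cost.
    float *d_ptr = NULL;
    cudaError_t err = cudaMalloc(&d_ptr, 1024);
    if (err == cudaSuccess) {
        cudaFree(d_ptr);
    } else {
        printf("cudaMalloc failed: %s\n", cudaGetErrorString(err));
    }
    return 0;
}
```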

To be continued...
