DirectX 9.0 Study Notes

Chapter 1: Creating and Managing Direct3D Devices

1.1 What Is a Direct3D Device

Every graphics API has an entity that maintains the overall state of its drawing functions. For example, Windows GDI uses the device context (DC), Java uses Graphics objects, and Direct3D uses IDirect3DDevice9 (the 9 in the name reflects the DirectX version). A Direct3D device manages everything from texture memory allocation to transformation matrices to blend state. It comes in three basic types:

  • D3DDEVTYPE_HAL: rendering is done by the hardware abstraction layer; which features are available depends on the hardware.
  • D3DDEVTYPE_REF: the reference device, which emulates every possible DirectX feature in software.
  • D3DDEVTYPE_SW: uses a third-party software renderer.

1.2 How to Use Direct3D Devices

  • Step 1: Create a Direct3D object (with the Direct3DCreate9 function).
  • Step 2: Query the hardware capabilities (two functions are available: GetDeviceCaps and EnumAdapterModes).
  • Step 3: Create a Direct3D device (the CreateDevice function and the D3DPRESENT_PARAMETERS structure).
  • Step 4: Handle a lost device (TestCooperativeLevel queries the device state; Reset recovers it).
  • Step 5: Destroy the device (Release).
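
The following is a minimal sketch of these five steps, assuming a windowed application that already has a window handle hWnd (window-creation code omitted); the capability check that chooses hardware or software vertex processing is a common pattern rather than a requirement.

    #include <windows.h>
    #include <d3d9.h>

    IDirect3D9*           g_pD3D    = NULL;
    IDirect3DDevice9*     g_pDevice = NULL;
    D3DPRESENT_PARAMETERS g_d3dpp;   // kept around so Reset() can reuse it later

    bool InitDirect3D(HWND hWnd)
    {
        // Step 1: create the Direct3D object.
        g_pD3D = Direct3DCreate9(D3D_SDK_VERSION);
        if (g_pD3D == NULL)
            return false;

        // Step 2: query the capabilities of the default adapter's HAL device.
        D3DCAPS9 caps;
        g_pD3D->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);
        DWORD vp = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT)
                 ? D3DCREATE_HARDWARE_VERTEXPROCESSING
                 : D3DCREATE_SOFTWARE_VERTEXPROCESSING;

        // Step 3: describe the swap chain and create the device.
        ZeroMemory(&g_d3dpp, sizeof(g_d3dpp));
        g_d3dpp.Windowed               = TRUE;
        g_d3dpp.SwapEffect             = D3DSWAPEFFECT_DISCARD;
        g_d3dpp.BackBufferFormat       = D3DFMT_UNKNOWN;   // use the current display format
        g_d3dpp.EnableAutoDepthStencil = TRUE;
        g_d3dpp.AutoDepthStencilFormat = D3DFMT_D16;

        if (FAILED(g_pD3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                                        vp, &g_d3dpp, &g_pDevice)))
            return false;

        return true;
    }

    // Step 4: when rendering reports a lost device, try to restore it.
    void HandleLostDevice()
    {
        HRESULT hr = g_pDevice->TestCooperativeLevel();
        if (hr == D3DERR_DEVICENOTRESET)
            g_pDevice->Reset(&g_d3dpp);   // re-create default-pool resources afterwards
        // D3DERR_DEVICELOST: the device cannot be reset yet; wait and try again later.
    }

    // Step 5: release the interfaces in reverse order of creation.
    void ShutdownDirect3D()
    {
        if (g_pDevice) { g_pDevice->Release(); g_pDevice = NULL; }
        if (g_pD3D)    { g_pD3D->Release();    g_pD3D    = NULL; }
    }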

1.3 Rendering with a Direct3D Device

  • Start rendering with BeginScene().
  • End rendering with EndScene().
  • Use Present() to tell the device to show the finished image on the screen.
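
A per-frame sketch using the device created above; the Clear call is not listed in the notes but is normally issued before BeginScene to wipe the back buffer and depth buffer.

    void RenderFrame()
    {
        // Clear the back buffer and the depth buffer.
        g_pDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                         D3DCOLOR_XRGB(0, 0, 40), 1.0f, 0);

        if (SUCCEEDED(g_pDevice->BeginScene()))
        {
            // ... issue all draw calls for this frame here ...
            g_pDevice->EndScene();
        }

        // Show the finished image on the screen.
        g_pDevice->Present(NULL, NULL, NULL, NULL);
    }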

Chapter 2: Everything Starts from the Vertex

2.1 What Is a Vertex

A vertex is a point in space and is the building block for rendering points, lines, and surfaces. A vertex format defines which attributes each vertex carries. In DirectX the layout of the vertex data structure can be changed to suit a specific rendering task; these layouts are called Flexible Vertex Formats (FVFs). Note that the vertex components must appear in the order DirectX defines for the FVF flags you combine.
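
As an illustration, a hypothetical vertex holding a position and a diffuse color could be declared as follows; the struct members appear in the order DirectX expects for these two flags (position before diffuse color).

    // A custom vertex: untransformed position plus a diffuse color.
    struct CUSTOMVERTEX
    {
        float    x, y, z;   // D3DFVF_XYZ     - position comes first
        D3DCOLOR color;     // D3DFVF_DIFFUSE - diffuse color follows the position
    };

    // The FVF code describing that layout.
    const DWORD D3DFVF_CUSTOMVERTEX = D3DFVF_XYZ | D3DFVF_DIFFUSE;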

2.2 Creating Vertices

  • Create a vertex buffer

What you actually create is a vertex buffer. The graphics device tries to place the buffer in video memory or in AGP (accelerated graphics port) memory so that the graphics card can fetch the data as quickly as possible. Use CreateVertexBuffer to create a vertex buffer.
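
A sketch that creates a buffer for three of the CUSTOMVERTEX vertices declared above; D3DUSAGE_WRITEONLY and D3DPOOL_MANAGED are common, but not mandatory, choices.

    IDirect3DVertexBuffer9* g_pVB = NULL;

    // Ask the device for a buffer big enough to hold three vertices.
    HRESULT hr = g_pDevice->CreateVertexBuffer(
        3 * sizeof(CUSTOMVERTEX),   // size in bytes
        D3DUSAGE_WRITEONLY,         // we only ever write to it
        D3DFVF_CUSTOMVERTEX,        // the FVF code of the vertices it will hold
        D3DPOOL_MANAGED,            // let Direct3D manage where it lives
        &g_pVB,
        NULL);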

  • Set and change vertex data

Because the vertex buffer may live in video memory or in device-managed memory, you cannot change the vertex data directly. To access the values you must call the Lock function, which returns a pointer to the vertex data that you can use to change the values. Once the vertex values are set, call Unlock to hand the new data back to the device.
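
Filling that buffer with a made-up triangle through Lock and Unlock might look like this.

    CUSTOMVERTEX triangle[] =
    {
        { -1.0f, -1.0f, 0.0f, D3DCOLOR_XRGB(255,   0,   0) },
        {  0.0f,  1.0f, 0.0f, D3DCOLOR_XRGB(  0, 255,   0) },
        {  1.0f, -1.0f, 0.0f, D3DCOLOR_XRGB(  0,   0, 255) },
    };

    CUSTOMVERTEX* pData = NULL;
    // Lock the whole buffer (offset 0, size 0 means "everything") and get a pointer to it.
    if (SUCCEEDED(g_pVB->Lock(0, 0, (void**)&pData, 0)))
    {
        for (int i = 0; i < 3; ++i)
            pData[i] = triangle[i];   // write the vertex values through the returned pointer
        g_pVB->Unlock();              // hand the new data back to the device
    }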

2.3 Rendering Vertices

  • Use SetStreamSource to tell the device where the vertices are stored; it binds the vertex buffer to a data stream that feeds the geometry into the drawing pipeline.
  • Call SetFVF to set the flexible vertex format, which tells the device what kind of vertices it is processing.
  • Call DrawPrimitive to make the device actually draw the vertices.
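
Putting the three calls together (between BeginScene and EndScene), assuming the buffer and FVF code from section 2.2.

    // Bind the vertex buffer to stream 0 and describe its layout.
    g_pDevice->SetStreamSource(0, g_pVB, 0, sizeof(CUSTOMVERTEX));
    g_pDevice->SetFVF(D3DFVF_CUSTOMVERTEX);

    // Draw one triangle starting at vertex 0.
    g_pDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 1);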

2.4 Performance Considerations

General principle: batching is everything. Always maximize the work done in a single call, and minimize the number of times the device has to switch to a new state.
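
As a hedged illustration of the principle, the two functions below draw the same hypothetical objects; the second assumes they have been pre-sorted by texture so the device switches state far less often.

    // Hypothetical description of one object: its texture and where its triangles live.
    struct SceneObject
    {
        IDirect3DTexture9* texture;
        UINT startVertex;
        UINT triangleCount;
    };

    // Wasteful: a texture switch before every single draw call.
    void DrawUnsorted(SceneObject* objs, int count)
    {
        for (int i = 0; i < count; ++i)
        {
            g_pDevice->SetTexture(0, objs[i].texture);
            g_pDevice->DrawPrimitive(D3DPT_TRIANGLELIST, objs[i].startVertex,
                                     objs[i].triangleCount);
        }
    }

    // Better: objs pre-sorted by texture, so SetTexture runs only when the texture changes.
    void DrawSorted(SceneObject* objs, int count)
    {
        for (int i = 0; i < count; ++i)
        {
            if (i == 0 || objs[i].texture != objs[i - 1].texture)
                g_pDevice->SetTexture(0, objs[i].texture);
            g_pDevice->DrawPrimitive(D3DPT_TRIANGLELIST, objs[i].startVertex,
                                     objs[i].triangleCount);
        }
    }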

Chapter 3: Using Transformations

3.1 What Does Transformation Mean?

In 3D graphics, the world transformation determines an object's position in the scene, the view transformation determines the camera's position, the projection transformation determines the characteristics of the camera's "lens", and the viewport maps that information to actual pixels.

  • World transformation

The most basic use of the world matrix is simply to move a predefined object around in virtual space, but it has another benefit: it makes it easy to draw multiple instances of the same object with one vertex buffer and several different transformation matrices. Although the hardware must transform the vertices several times, reusing one set of geometry saves memory and video-card bandwidth.
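
A sketch of the multiple-instances idea, reusing the triangle buffer from Chapter 2; it uses the D3DX helper library and SetTransform (covered in 3.2), and the positions are made up for illustration.

    #include <d3dx9.h>   // D3DX helper library (link with d3dx9.lib)

    // Draw three copies of the same geometry at three different world positions.
    for (int i = 0; i < 3; ++i)
    {
        D3DXMATRIX world;
        D3DXMatrixTranslation(&world, i * 3.0f, 0.0f, 0.0f);   // shift each instance along X
        g_pDevice->SetTransform(D3DTS_WORLD, &world);
        g_pDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 1);
    }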

  • View transformation

Just as the world transformation defines the positions and orientations of objects in space, the view transformation defines the position and orientation of the camera. Vertices that have been through the world transformation are in world coordinates; after the view transformation they are in eye (camera) coordinates. Once an object is in eye coordinates, the basic geometric relationship between the object and the observer is known. In other words, we use the world transformation to build a 3D world and the view transformation to move the camera through that world.
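
A view matrix is typically built with a look-at helper; a sketch using D3DXMatrixLookAtLH with example camera coordinates.

    // Camera 5 units back on the Z axis, looking at the origin, with +Y as "up".
    D3DXVECTOR3 eye(0.0f, 0.0f, -5.0f);
    D3DXVECTOR3 at (0.0f, 0.0f,  0.0f);
    D3DXVECTOR3 up (0.0f, 1.0f,  0.0f);

    D3DXMATRIX view;
    D3DXMatrixLookAtLH(&view, &eye, &at, &up);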

  • Projection Transformation

The projection matrix encodes the characteristics of the virtual camera. These properties define a view frustum, the volume of space the camera can see. The far plane and the near plane bound the visible distance: objects that are too far away or too close are excluded from the final rendering. The field-of-view (FOV) angle defines the width and height of the view. The D3DX library contains ten functions for creating projection matrices: five types, each in a left-handed and a right-handed version.
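
A sketch using D3DXMatrixPerspectiveFovLH, one of the left-handed variants; the field of view, aspect ratio, and plane distances are example values.

    D3DXMATRIX proj;
    D3DXMatrixPerspectiveFovLH(&proj,
                               D3DX_PI / 4.0f,      // vertical field-of-view angle (45 degrees)
                               800.0f / 600.0f,     // aspect ratio (width / height)
                               1.0f,                // near plane
                               100.0f);             // far plane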

3.2 Transformations and the Direct3D Device

Once you have a transformation matrix, use SetTransform to set it. Its first parameter specifies which transformation is being set: D3DTS_WORLD, D3DTS_VIEW, or D3DTS_PROJECTION. You can use the ID3DXMatrixStack helper interface from the D3DX library to implement a matrix stack and manage complex hierarchies of transformations.
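
A sketch that hands the view and projection matrices from 3.1 to the device and then uses ID3DXMatrixStack for a hypothetical parent/child hierarchy.

    // Hand the matrices built above to the fixed-function pipeline.
    g_pDevice->SetTransform(D3DTS_VIEW,       &view);
    g_pDevice->SetTransform(D3DTS_PROJECTION, &proj);

    // A matrix stack helps with hierarchies: the child inherits the parent's transform.
    ID3DXMatrixStack* pStack = NULL;
    if (SUCCEEDED(D3DXCreateMatrixStack(0, &pStack)))
    {
        pStack->LoadIdentity();
        pStack->TranslateLocal(0.0f, 0.0f, 10.0f);             // parent position
        g_pDevice->SetTransform(D3DTS_WORLD, pStack->GetTop());
        // ... draw the parent object here ...

        pStack->Push();                                        // remember the parent transform
        pStack->TranslateLocal(2.0f, 0.0f, 0.0f);              // child offset relative to parent
        g_pDevice->SetTransform(D3DTS_WORLD, pStack->GetTop());
        // ... draw the child object here ...
        pStack->Pop();                                         // back to the parent transform

        pStack->Release();
    }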

3.3 Viewports

After the three transformation stages, the device still has to determine how the data is finally mapped to individual pixels; defining a viewport is what lets it do that. Normally the viewport is defined by the window size in a windowed application, or by the screen resolution in a full-screen application, so in many cases you do not need to set it explicitly. However, DirectX also lets you specify just part of a window as the viewport, which is useful when rendering several scenes into the same window.

The D3DVIEWPORT9 structure defines the viewport: the rectangular region of the window or screen, plus the near and far Z values. Use SetViewport to set a new viewport; GetViewport is often used first to save a copy of the old one. Note that the aspect ratio of the viewport should match the aspect ratio used for the projection matrix; if they do not match, objects will look stretched or squashed.
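
A sketch that saves the current viewport and then restricts rendering to the top-left quarter of a hypothetical 800x600 window.

    D3DVIEWPORT9 oldViewport;
    g_pDevice->GetViewport(&oldViewport);    // keep a copy so it can be restored later

    D3DVIEWPORT9 quarter;
    quarter.X      = 0;
    quarter.Y      = 0;
    quarter.Width  = 400;     // top-left quarter of an 800x600 window
    quarter.Height = 300;
    quarter.MinZ   = 0.0f;    // depth range written to the Z buffer
    quarter.MaxZ   = 1.0f;
    g_pDevice->SetViewport(&quarter);

    // ... render the sub-scene here, then restore the original viewport ...
    g_pDevice->SetViewport(&oldViewport);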

 
