In 3D interactive graphics applications, the mouse is often used to select an object on screen. This feature is implemented with a mouse-picking algorithm. This article describes how to implement mouse picking of scene elements with Direct3D (D3D). To keep the discussion simple, it assumes the reader is familiar with the D3D coordinate-transformation pipeline and basic graphics concepts; if anything here is difficult to follow, please consult the relevant references. **1. What is picking and what is it good for?**
Picking means that when we click an element on the screen, the application returns an identifier for that element along with some related information. Anyone with graphics-programming experience knows that once we have this information we effectively have control over the element: we can delete it, edit it, or process it however the application requires; what you actually do with it is your own business ^_^. **2. Steps and implementation of the picking operation**
The idea of the algorithm is simple: take the screen coordinates of the mouse click and, using the projection matrix and the view matrix, convert them into a ray shot into the scene through the viewpoint and the click point. If the ray intersects a triangle of a scene model (this article only deals with triangle primitives), we retrieve that triangle's information. The method described here obtains not only the index of the triangle but also the barycentric coordinates of the intersection point. Mathematically, once we have the ray's direction vector and its origin, we have everything needed to test whether the ray intersects a given triangle in space. This article mainly discusses how to obtain these quantities, then describes the ray-triangle intersection test and its common D3D implementation. Following the order of processing, the picking operation divides into the following steps: **2.1. Transform the click point into a ray (dir) through the viewpoint and the clicked screen point**
Before going into detail, we briefly recall the general D3D transformation pipeline for convenience: model (local) coordinates are transformed by the world matrix into world space, by the view matrix into view (camera) space, by the projection matrix into projection space, and finally mapped by the viewport to screen coordinates.
Picking runs this pipeline backwards: we apply a series of inverse transformations to recover, in world coordinates, the quantities we care about. **2.1.1 Determine the screen coordinates of the mouse click**
This step is straightforward: Windows provides APIs for it. Use GetCursorPos to get the cursor position in screen coordinates, then use ScreenToClient to convert it to the client coordinate system (in pixels, with the origin at the upper-left corner of the client area). Call the result screenPt (a POINT). **2.1.2 Obtain the representation of dir in view space**
In the view coordinate system, dir is a ray starting at the view-space origin, so we only need one more point on the ray to determine it. Take that point to be the intersection of the ray with the near clipping plane of the perspective frustum. For the most common perspective projection, the projection transform maps the view frustum into a box (please allow me to call it a "half cube" ^_^): the x and y sides have length 2, from -1 to 1, while the z side has length 1, from 0 to 1.
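To make the clip-volume claim concrete, here is a hedged, self-contained sketch (no D3D headers; the matrix entries are written out by hand following the layout of D3DXMatrixPerspectiveFovLH, and all names are illustrative) that pushes frustum corner points through a perspective projection:

```cpp
#include <cassert>
#include <cmath>

// Plain 3-vector; D3D uses the row-vector convention v' = v * M.
struct Vec3 { float x, y, z; };

// Apply a perspective matrix laid out like D3DXMatrixPerspectiveFovLH
// (fovY = vertical field of view, zn/zf = near/far plane distances),
// then perform the homogeneous divide to get normalized device coordinates.
Vec3 ProjectToNDC(const Vec3& p, float fovY, float aspect, float zn, float zf)
{
    float yScale = 1.0f / std::tan(fovY / 2.0f);  // cot(fovY/2)
    float xScale = yScale / aspect;
    float q = zf / (zf - zn);
    // (x, y, z, 1) * M with M = { xScale,0,0,0; 0,yScale,0,0; 0,0,q,1; 0,0,-q*zn,0 }
    float cx = p.x * xScale;
    float cy = p.y * yScale;
    float cz = p.z * q - q * zn;
    float cw = p.z;                                // w receives view-space z
    return { cx / cw, cy / cw, cz / cw };
}
```

For example, with fovY = 90 degrees, aspect = 1, zn = 1, zf = 10, the near-plane top-right corner (1, 1, 1) maps to (1, 1, 0) and the far-plane corner (10, 10, 10) maps to (1, 1, 1): x and y span [-1, 1] while z spans [0, 1], exactly the box described above.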
The projection coordinate system places its origin at the center of the near clipping plane. Looking down the z axis, the box corresponds to the visible client area of the application window, so a point on the near (front) clipping plane relates to screen coordinates by a simple proportional mapping.
Assume the client area of the window is screenWidth wide and screenHeight high. By proportion, the click point screenPt maps to the point projPt in projection space as follows (note that screen y grows downward while projection-space y grows upward, hence the sign flip):

projPt.x = (2 * screenPt.x / screenWidth - 1); (Formula 1)
projPt.y = -(2 * screenPt.y / screenHeight - 1); (Formula 2)
projPt.z = 0; (any value would do without affecting the final result; taking 0 keeps the arithmetic simple and means the point lies on the near clipping plane)

Having obtained projPt, we must convert it from projection space back to view space. By the definition of the perspective projection, the homogeneous form of the point (projPt.x, projPt.y, projPt.z) is (projPt.x * projPt.w, projPt.y * projPt.w, projPt.z * projPt.w, projPt.w). We can obtain the projection matrix projMatrix with GetTransform(D3DTS_PROJECTION, &projMatrix). The transformation between view space and projection space is

(projPt.x * projPt.w, projPt.y * projPt.w, projPt.z * projPt.w, projPt.w) = (viewPt.x, viewPt.y, viewPt.z, 1) * projMatrix;

For the standard perspective matrix (with q = zf / (zf - zn) and near-plane distance zn), multiplying out the right-hand side gives

(viewPt.x * projMatrix._11, viewPt.y * projMatrix._22, viewPt.z * q - q * zn, viewPt.z)

so, componentwise,

projPt.x * projPt.w = viewPt.x * projMatrix._11
projPt.y * projPt.w = viewPt.y * projMatrix._22
projPt.z * projPt.w = viewPt.z * q - q * zn (note projPt.z = 0)
projPt.w = viewPt.z

Solving:

viewPt.x = projPt.x * zn / projMatrix._11;
viewPt.y = projPt.y * zn / projMatrix._22;
viewPt.z = zn;

We now have the intersection of the ray with the near clipping plane in view coordinates. With the ray origin at (0, 0, 0), the ray direction in view space is (viewPt.x - 0, viewPt.y - 0, viewPt.z - 0). Dividing all three components by the near-plane distance zn, the direction vector can be written

dirView = (projPt.x / projMatrix._11, projPt.y / projMatrix._22, 1)

Substituting Formula 1 and Formula 2:

dirView.x = (2 * screenPt.x / screenWidth - 1) / projMatrix._11;
dirView.y = -(2 * screenPt.y / screenHeight - 1) / projMatrix._22;
dirView.z = 1;

screenWidth and screenHeight can be read from the surface description (D3DSURFACE_DESC) of the back buffer, which the program creates at initialization. **2.1.3 Convert dir to world space and obtain the coordinates of the viewpoint in the world coordinate system**
Because the final intersection test is performed in world space, we must also convert the vector dirView from view space into the world-space vector dirWorld. Since dirView = dirWorld * viewMatrix, where viewMatrix is the view matrix (obtainable with GetTransform(D3DTS_VIEW, &viewMatrix)), we have dirWorld = dirView * inverse_viewMatrix, where inverse_viewMatrix is the inverse of viewMatrix. The viewpoint sits at the origin originView = (0, 0, 0) of the view coordinate system, so its world coordinates can likewise be recovered through inverse_viewMatrix; in fact, it is easy to see that its world-space representation is simply

originWorld = (inverse_viewMatrix._41, inverse_viewMatrix._42, inverse_viewMatrix._43, 1);

At this point the conditions for testing whether the ray intersects a triangle are fully in hand. **2.2 Intersect the ray with all triangle primitives in the scene to obtain the triangle index and barycentric coordinates**
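The ray-construction steps of sections 2.1.1 through 2.1.3 can be sketched outside of D3D with plain structs (all names here are illustrative; in real code projM11/projM22 come from the projection matrix and invView from D3DXMatrixInverse applied to the view matrix):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };   // row-vector convention, as in D3D

// View-space pick-ray direction from a client-area point (section 2.1.2).
Vec3 ViewSpaceDir(float px, float py, float width, float height,
                  float projM11, float projM22)
{
    return { (2.0f * px / width  - 1.0f) / projM11,
            -(2.0f * py / height - 1.0f) / projM22,  // screen y grows downward
             1.0f };
}

// Direction transformed by the inverse view matrix (w = 0, so only the
// upper 3x3 rotation block participates) -- section 2.1.3.
Vec3 TransformDir(const Vec3& v, const Mat4& M)
{
    return { v.x * M.m[0][0] + v.y * M.m[1][0] + v.z * M.m[2][0],
             v.x * M.m[0][1] + v.y * M.m[1][1] + v.z * M.m[2][1],
             v.x * M.m[0][2] + v.y * M.m[1][2] + v.z * M.m[2][2] };
}

// World-space ray origin: row 4 (_41, _42, _43) of the inverse view matrix.
Vec3 RayOrigin(const Mat4& invView)
{
    return { invView.m[3][0], invView.m[3][1], invView.m[3][2] };
}
```

With an identity rotation and the camera translated to (1, 2, 3), a click at the exact center of an 800x600 client area yields the view-space direction (0, 0, 1), an unchanged world-space direction, and the world-space origin (1, 2, 3).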
There are two ways to implement this step. The first is very simple: use the extension function D3DXIntersect provided by D3D and let it do all the work with ease (see section 2.2.1). The second is to implement the ray-triangle intersection test ourselves using spatial analytic geometry (see section 2.2.2). Generally speaking, the first method is sufficient in applications, but to go deeper we must understand the underlying mathematics of intersection testing so that we can freely extend it to different needs. Both implementations are described below: **2.2.1 Intersection with the D3D extension function**
This method is simple and easy to use, and applications should prefer it; after all, it is considerably more efficient than a hand-written version. There is really not much to discuss beyond the D3DXIntersect function itself. The D3D SDK declares it as follows:

HRESULT D3DXIntersect(
    LPD3DXBASEMESH pMesh,
    CONST D3DXVECTOR3 *pRayPos,
    CONST D3DXVECTOR3 *pRayDir,
    BOOL *pHit,
    DWORD *pFaceIndex,
    FLOAT *pU,
    FLOAT *pV,
    FLOAT *pDist,
    LPD3DXBUFFER *ppAllHits,
    DWORD *pCountOfHits
);

- pMesh points to an ID3DXBaseMesh object describing the set of triangles to test for intersection; the simplest way to obtain one is from a .x file (see the DirectX 9 SDK for details).
- pRayPos points to the ray origin.
- pRayDir points to the ray direction vector computed earlier.
- pHit receives TRUE if any primitive is intersected, FALSE otherwise.
- pFaceIndex receives the index of the intersected face.
- pU receives the U component of the barycentric coordinates.
- pV receives the V component of the barycentric coordinates.
- pDist receives the distance from the ray origin to the intersection point.

Note: pFaceIndex, pU, pV, and pDist all describe the nearest hit (the one with the smallest *pDist).

- ppAllHits returns all intersection results when multiple triangles are hit.
- pCountOfHits returns the total number of triangles intersected by the ray.

Supplement: the concept of barycentric coordinates. The parameters pU and pV use barycentric coordinates, described below. A triangle has three vertices; in Cartesian coordinates let them be V1(x1, y1, z1), V2(x2, y2, z2), and V3(x3, y3, z3). Any point of the triangle can then be expressed as P = V1 + u * (V2 - V1) + v * (V3 - V1), so, given the three vertex coordinates, any point of the triangle can be identified by the pair (u, v).
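As a small, API-free sketch (plain struct, illustrative names), the formula P = V1 + u * (V2 - V1) + v * (V3 - V1) can be evaluated directly:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Point of a triangle addressed by barycentric coordinates (u, v):
// weight (1 - u - v) falls on v1, u on v2, and v on v3.
Vec3 BarycentricPoint(const Vec3& v1, const Vec3& v2, const Vec3& v3,
                      float u, float v)
{
    return { v1.x + u * (v2.x - v1.x) + v * (v3.x - v1.x),
             v1.y + u * (v2.y - v1.y) + v * (v3.y - v1.y),
             v1.z + u * (v2.z - v1.z) + v * (v3.z - v1.z) };
}
```

For instance, for the triangle (0,0,0), (3,0,0), (0,3,0), the coordinates u = v = 1/3 give the centroid (1, 1, 0).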
The parameter u controls the weight of V2 in the result and v controls the weight of V3; the remaining weight, 1 - u - v, falls on V1. This way of addressing points is called barycentric coordinates. **2.2.2 The mathematics of the ray-triangle intersection test**
D3D's extension function will, after all, sometimes fail to meet specific requirements; only by mastering the underlying method do we gain full freedom to adapt and modify the algorithm at will. Known: the ray origin originPoint, the triangle vertices V1, V2, V3, and the ray direction dir (all as three-dimensional vectors). Goal: determine whether the ray intersects the triangle and, if so, the distance t from the ray origin to the intersection point. Assume the ray and the triangle meet at intersectPoint (note that the following are vector operations: * is scalar multiplication, dot(x, y) is the dot product of x and y, cross(x, y) is the cross product; u, v, and t are scalars). Then:

intersectPoint = V1 + u * (V2 - V1) + v * (V3 - V1);
intersectPoint = originPoint + t * dir;

so

originPoint + t * dir = V1 + u * (V2 - V1) + v * (V3 - V1);

Rearranged:
[-dir, V2 - V1, V3 - V1] * (t, u, v)^T = originPoint - V1

This is a simple system of linear equations; it has a unique solution when the determinant of [-dir, V2 - V1, V3 - V1] is non-zero. By the meaning of t, u, and v, the intersection lies inside the triangle when t > 0, u >= 0, v >= 0, and u + v <= 1. Solving the system yields the values we care about: the distance t from the ray origin to the intersection point, and the barycentric coordinates (u, v) of the intersection. The details of the solution are not elaborated here; Cramer's rule is enough (see any linear algebra text). The following is the implementation from the DirectX 9 SDK sample program:

```cpp
bool IntersectTriangle( const D3DXVECTOR3& orig, const D3DXVECTOR3& dir,
                        D3DXVECTOR3& v0, D3DXVECTOR3& v1, D3DXVECTOR3& v2,
                        FLOAT* t, FLOAT* u, FLOAT* v )
{
    // Compute the vectors of two edges
    D3DXVECTOR3 edge1 = v1 - v0;
    D3DXVECTOR3 edge2 = v2 - v0;

    D3DXVECTOR3 pvec;
    D3DXVec3Cross( &pvec, &dir, &edge2 );

    // If det is zero or close to zero, the ray lies in the triangle's plane
    // or is parallel to it, and there is no intersection.
    // This det plays the role of the determinant in the system above.
    FLOAT det = D3DXVec3Dot( &edge1, &pvec );

    D3DXVECTOR3 tvec;
    if( det > 0 )
    {
        tvec = orig - v0;
    }
    else
    {
        tvec = v0 - orig;
        det = -det;
    }

    if( det < 0.0001f )
        return FALSE;

    // Compute u and test whether it is legal (inside the triangle)
    *u = D3DXVec3Dot( &tvec, &pvec );
    if( *u < 0.0f || *u > det )
        return FALSE;

    // Prepare to test the v parameter
    D3DXVECTOR3 qvec;
    D3DXVec3Cross( &qvec, &tvec, &edge1 );

    // Compute v and test whether it is legal (inside the triangle)
    *v = D3DXVec3Dot( &dir, &qvec );
    if( *v < 0.0f || *u + *v > det )
        return FALSE;

    /* Compute t and rescale t, u, and v to their final values (up to this
       point t, u, and v differ from the corresponding quantities in the
       algorithm description by a factor of det). Note: this step requires a
       division, so it is placed last to avoid unnecessary work and improve
       the algorithm's efficiency. */
    *t = D3DXVec3Dot( &edge2, &qvec );
    FLOAT fInvDet = 1.0f / det;
    *t *= fInvDet;
    *u *= fInvDet;
    *v *= fInvDet;

    return TRUE;
}
```

**2.2.3 After picking, compute the quantities we care about from the obtained barycentric coordinates**
From the barycentric coordinates (u, v) we can easily interpolate quantities such as the texture coordinates and the color at the intersection point. Taking texture coordinates as an example, let the texture coordinates of V1, V2, and V3 be T1(tu1, tv1), T2(tu2, tv2), and T3(tu3, tv3). Then

intersectPointTexture = T1 + u * (T2 - T1) + v * (T3 - T1)

**3. Conclusion and statement**
OK, that concludes our introduction to picking. This is my first article, and I do not know whether I have explained the problem clearly; I hope it helps. If you have any questions, please email me at jzhang1@mail.xidian.edu.cn or leave a message on my website: www.heavysword.com. Disclaimer: the purpose of this article is to serve D3D learners. The algorithms in it are drawn from the referenced literature; the author makes no claim to them as his own achievements, and ownership of the original algorithms belongs to their authors (see the references). The code in this article comes from the D3D SDK samples, annotated by the author; its copyright belongs to Microsoft. **4. References**
1. Microsoft DirectX 9.0 SDK, Microsoft
2. Fast, Minimum Storage Ray/Triangle Intersection, Tomas Möller, Ben Trumbore