This article is from the Baidu Space blog "Tianya crazy man": http://hi.baidu.com/crazyonline/blog/item/e27312d58eb815cd50da4b03.html
Thanks to the original author.
Implementing a BitBlt-style capture of a sub-texture in D3D with UV texture addressing
I was slow and always assumed that a D3D texture could only be drawn as a whole, and that you could not select just a part of it, position it, and stretch it the way Win32's GDI does. Of course, here I am talking about wrapping D3D for 2D drawing: the vertex format is UV texture coordinates plus screen coordinates in post-transform (rhw) form.
As for rhw in the vertex format, we all know it is the sign that the coordinates are already transformed. The UV texture addressing modes (wrap addressing and the like), however, I never really understood. When all you do is paste a whole 2D texture, setting the UV extents to 1.0 just works, and you never find out what UV actually means. Later I ran into problems such as distorted and deformed D3D textures; I managed to work around them, but still without knowing why. Recently it finally dawned on me that UV specifies a texture coordinate as a fraction of the texture size — a percentage. For example, if an image is 100 pixels wide, setting u to 0.5 addresses the position 100 × 0.5 = 50. This is a little different from the srcX, srcY, width, and height parameters that BitBlt commonly uses, but you can wrap a BitBlt-style function around a D3D texture. For example:
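To make the "percentage" idea concrete, here is a minimal standalone sketch of the pixel-to-UV conversion described above (the function name is mine, not from the original code):

```cpp
// UV coordinates are normalized: a pixel position is divided by the
// texture dimension, giving a value in the range [0, 1].
float PixelToUV(float pixelPos, int textureSize)
{
    return pixelPos / (float)textureSize;
}
```

For a 100-pixel-wide image, u = 0.5 corresponds to pixel 50, matching the example above.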
bool CD3D9Texture::BltFast(float fDstX, float fDstY, int nDstWidth, int nDstHeight,
                           float fSrcX, float fSrcY, int nSrcWidth, int nSrcHeight,
                           int nAlphaValue /* = BLT_COPY */)
{
    D3DTLVERTEX v[4];
    DWORD dwDiffuse = D3DCOLOR_ARGB(nAlphaValue, 255, 255, 255);
    memset(v, 0, sizeof(v));

    v[0].x = v[3].x = (float)fDstX;
    v[1].x = v[2].x = (float)(fDstX + nDstWidth);
    v[0].y = v[1].y = (float)fDstY;
    v[2].y = v[3].y = (float)(fDstY + nDstHeight);
    v[0].rhw = v[1].rhw = v[2].rhw = v[3].rhw =
    v[0].z   = v[1].z   = v[2].z   = v[3].z   = 0.5f;
    v[0].diffuse = v[1].diffuse = v[2].diffuse = v[3].diffuse = dwDiffuse;

    // Set the UV texture coordinates to capture part of the texture.
    // Left/top (u, v) of the source rectangle:
    v[0].tu = v[3].tu = fSrcX / (float)m_nMemoryWidth;
    v[0].tv = v[1].tv = fSrcY / (float)m_nMemoryHeight;
    // Right/bottom (u, v) of the source rectangle:
    v[2].tu = v[1].tu = (float)(fSrcX + nSrcWidth)  / (float)m_nMemoryWidth;
    v[2].tv = v[3].tv = (float)(fSrcY + nSrcHeight) / (float)m_nMemoryHeight;

    if (m_pTexture == NULL)
    {
        return false;
    }
    if (!m_pD3D9Device)
    {
        m_pD3D9Device = CD3D9Device::CreateD3D9Device();
    }
    assert(m_pD3D9Device);

    m_pD3D9Device->SetTexture(0, m_pTexture);
    // FVF flags matching the D3DTLVERTEX layout used above.
    m_pD3D9Device->SetFVF(D3DFVF_XYZRHW | D3DFVF_DIFFUSE | D3DFVF_TEX1);
    m_pD3D9Device->DrawPrimitiveUP(D3DPT_TRIANGLEFAN, 2, (LPVOID)v, sizeof(D3DTLVERTEX));
    return true;
}
This BltFast function is D3D's counterpart to GDI's StretchBlt: the functionality is essentially the same, except that mine takes an extra translucency (alpha) parameter at the end, and there are no hdcSrc or hdcDest parameters, since D3D draws through the device and the texture instead.
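To show the geometry BltFast sets up without any D3D dependency, here is a small sketch (the struct and function names are mine, not the original's) of how the four triangle-fan vertices cover the destination rectangle, mirroring the assignments in the function above:

```cpp
struct Vertex2D { float x, y; };

// Build the four vertices of a screen-space quad in triangle-fan order:
// v[0] top-left, v[1] top-right, v[2] bottom-right, v[3] bottom-left.
void BuildQuad(Vertex2D v[4], float dstX, float dstY, int dstWidth, int dstHeight)
{
    v[0].x = v[3].x = dstX;
    v[1].x = v[2].x = dstX + (float)dstWidth;
    v[0].y = v[1].y = dstY;
    v[2].y = v[3].y = dstY + (float)dstHeight;
}
```

Drawing this quad as a triangle fan (two primitives) is what lets D3D fill an arbitrary destination rectangle, just as StretchBlt does.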
The most important part is the following: after a texture is loaded into memory, its width and height may change [if the file's dimensions are not powers of two], so the UV values must be computed from the in-memory width and height. Otherwise the drawn texture will be distorted.
// Set the UV texture coordinates to capture part of the texture.
v[0].tu = v[3].tu = fSrcX / (float)m_nMemoryWidth;
v[0].tv = v[1].tv = fSrcY / (float)m_nMemoryHeight;
v[2].tu = v[1].tu = (float)(fSrcX + nSrcWidth)  / (float)m_nMemoryWidth;
v[2].tv = v[3].tv = (float)(fSrcY + nSrcHeight) / (float)m_nMemoryHeight;
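The correction can be sketched as follows. NextPow2 and ComputeSourceUV are illustrative helpers of mine; the power-of-two rounding itself is something D3D may perform when it creates the texture, which is why the in-memory size can differ from the file size:

```cpp
// Round a dimension up to the next power of two, as D3D may do when a
// texture is created from a file whose sides are not powers of two.
int NextPow2(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

// Compute normalized UVs for a source rectangle, dividing by the
// in-memory (power-of-two) size rather than the file size.
void ComputeSourceUV(float srcX, float srcY, int srcW, int srcH,
                     int memW, int memH,
                     float& u0, float& v0, float& u1, float& v1)
{
    u0 = srcX / (float)memW;
    v0 = srcY / (float)memH;
    u1 = (srcX + (float)srcW) / (float)memW;
    v1 = (srcY + (float)srcH) / (float)memH;
}
```

For example, a 100×60 image may be stored as a 128×64 texture; dividing by 100 and 60 instead of 128 and 64 would stretch the sampled region and distort the result.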
In this way, you can clip a sub-rectangle of an image and draw it anywhere. And since the destination size need not match the source size, D3D gives you scaling for free, which is refreshing.