UV atlas generation is useful in a number of scenarios, a common one being lightmap baking. In that case we need to generate lightmap UVs for every element in the scene, that is, project them onto a texture so that every element gets its own unique region of lightmap UV space (no overlap in this texture, otherwise the baked lighting information would be wrong). UV atlas generation is essentially a classic NP-hard problem; in practice it is solved with various optimization heuristics and touches on a wide range of topics. The DirectX SDK also provides an API for this operation: D3DXUVAtlasCreate.
D3DXUVAtlasCreate takes an ID3DXMesh as its data source and, after processing, returns a mesh containing the UV atlas result (a hedged call sketch is given after the list below). Note the following:
- The source mesh must contain at least two pieces of per-vertex information: a position and one UV channel. The data in that UV channel is not actually consumed by D3DXUVAtlasCreate; it is required mainly because the returned mesh uses the same vertex declaration as the input mesh, so a UV channel has to exist to hold the atlas UVs in the result. The remaining parameters are functional settings such as the target texture size and the gutter between charts; for details see http://msdn.microsoft.com/en-us/library/windows/desktop/bb205479(v=vs.85).aspx
- The result mesh contains the same number of triangles as the source mesh, but possibly a different number of vertices (the vertex count of the result mesh is greater than or equal to that of the source mesh). This is easy to understand: several triangles may share a single vertex at the same position in space, but after the atlas is generated their UVs at that vertex may differ, so the shared vertex has to be split.
- The mesh size D3DXUVAtlasCreate can handle is limited: the number of vertices and the number of triangles must not exceed 65535. This restriction matters quite a bit, especially when generating lightmap UVs for a large scene. The scene then has to be partitioned further, each part processed separately, and the per-part atlas textures merged afterwards, which requires remapping the UVs (see the sketch after this list).
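As a concrete illustration, here is a minimal sketch of calling D3DXUVAtlasCreate on an existing ID3DXMesh. The numeric settings (texture size, gutter, stretch limit) are placeholders chosen for illustration, and the exact parameter list should be double-checked against the MSDN page linked above:

```cpp
// Minimal sketch (not production code): generate a lightmap UV atlas for an
// existing ID3DXMesh with D3DXUVAtlasCreate. All numeric settings below are
// illustrative placeholders.
#include <d3dx9.h>
#include <vector>

HRESULT CreateLightmapAtlas(ID3DXMesh* pSrcMesh, ID3DXMesh** ppAtlasMesh)
{
    // D3DXUVAtlasCreate needs the face adjacency of the source mesh.
    std::vector<DWORD> adjacency(pSrcMesh->GetNumFaces() * 3);
    HRESULT hr = pSrcMesh->GenerateAdjacency(1e-6f, &adjacency[0]);
    if (FAILED(hr))
        return hr;

    LPD3DXBUFFER pFacePartitioning = NULL;   // chart index per face
    LPD3DXBUFFER pVertexRemap      = NULL;   // new-vertex -> old-vertex mapping
    FLOAT        maxStretchOut     = 0.0f;
    UINT         numChartsOut      = 0;

    hr = D3DXUVAtlasCreate(
        pSrcMesh,
        0,                    // uMaxChartNumber: 0 lets D3DX decide
        1.0f / 3.0f,          // fMaxStretch: allowed stretch, 0..1
        512, 512,             // uWidth, uHeight: target atlas texture size
        2.0f,                 // fGutter: gap between charts, in texels
        0,                    // dwTextureIndex: UV channel that receives the atlas
        &adjacency[0],
        NULL,                 // pdwFalseEdgeAdjacency
        NULL,                 // pfIMTArray (per-triangle metric, optional)
        NULL, 0.0f, NULL,     // status callback, frequency, user context
        D3DXUVATLAS_DEFAULT,
        ppAtlasMesh,
        &pFacePartitioning,
        &pVertexRemap,
        &maxStretchOut,
        &numChartsOut);

    // Only the mesh is needed here; release the auxiliary buffers.
    if (pFacePartitioning) pFacePartitioning->Release();
    if (pVertexRemap)      pVertexRemap->Release();
    return hr;
}
```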
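When a large scene has to be split into parts as described above, one simple (hypothetical) way to merge the per-part atlases is to pack them into a uniform grid inside one combined lightmap and rescale each part's UVs into its grid cell, along these lines:

```cpp
// Hypothetical helper: after each scene part has its own [0,1] atlas, pack the
// parts into a cols x rows grid of one combined lightmap and remap their UVs.
struct LightmapUV { float u, v; };

void RemapPartIntoGrid(LightmapUV* uvs, size_t count,
                       unsigned partIndex, unsigned cols, unsigned rows)
{
    const unsigned col = partIndex % cols;
    const unsigned row = partIndex / cols;
    for (size_t i = 0; i < count; ++i)
    {
        // Shrink the part's atlas into its cell of the combined texture.
        uvs[i].u = (uvs[i].u + col) / cols;
        uvs[i].v = (uvs[i].v + row) / rows;
    }
}
```

A uniform grid wastes space when the parts differ a lot in lightmap density; a real packer would size each cell according to the part's surface area.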
In practice D3DXUVAtlasCreate is quite simple to use, but the results are not very satisfying. For example, the UVs generated by D3DXUVAtlasCreate for the scene above are shown in the figure: there are many fragmented charts, and when the mesh surface is more complex the atlased UVs are not very tidy and the layout is not compact, which makes artifacts from texture filtering and interpolation across chart borders even harder to deal with.
A better chart layout would look like the following figure:
UV atlas generation, i.e. mesh parameterization, is also a widely studied problem. The classic solution is Least Squares Conformal Maps (LSCM), which formulates the parameterization as a conformal map and solves for it in the least-squares sense (a rough sketch of the energy it minimizes is given at the end of this section). Several open-source geometry libraries implement this algorithm, such as OpenNL and CGAL (both apparently from INRIA), and can be used directly. In addition, the following websites are also useful references:
http://www.inf.usi.ch/hormann/parameterization/index.html
http://the-witness.net/news/2010/03/graphics-tech-texture-parameterization/
http://www.fseraph.com/?p=408
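For reference, here is a rough sketch of the idea behind LSCM. A parameterization $(u, v)$ of a surface is conformal when it satisfies the Cauchy-Riemann equations in each triangle's local frame, and LSCM minimizes the squared deviation from that condition, summed over all triangles (with a couple of vertices pinned to exclude the trivial zero solution):

```latex
% Conformality (Cauchy-Riemann) in a triangle's local (x, y) frame:
%   du/dx = dv/dy,   du/dy = -dv/dx
% LSCM minimizes the total squared violation, weighted by triangle area A_T:
E_{\mathrm{LSCM}}
  = \sum_{T} A_T \left[
      \left(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\right)^2
    + \left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right)^2
    \right]
```

Since the gradients are constant over each triangle, this is a sparse linear least-squares problem in the vertex UVs, which is what libraries like OpenNL solve.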