Process vertices -- create your own vertex format

Problem

A vertex is used to store data sent from your XNA project to the video card. A vertex format describes the data stored in each vertex. The XNA framework comes with several default vertex formats, from the simple VertexPositionColor up to the VertexPositionNormalTexture format.

However, if your vertices need to carry additional data, such as tangent or time data, you need to define your own vertex format. This step is required if you want to access that data in the vertex shader, so you only need a custom vertex format when you are writing custom vertex and pixel shaders.

Solution

The vertex format defines which kinds of data are stored in each vertex and therefore which data the vertex shader can access. For example, when you use VertexPositionColor vertices, the VertexPositionColor structure tells the video card that the vertices contain position and color data, and where in the data stream that data can be found.

The vertex format is the link between your XNA code and the vertex shader. First, the vertex format describes how the data is laid out in the data stream; Figure 5-24 shows where the position and color data are located in that stream.

Whenever the video card needs to render triangles from your vertices, it has to reconstruct those vertices from the data stream.

To do this, the video card has to separate each vertex into its position and color data. The vertex format therefore also tells the video card where to cut the data stream, as indicated in Figure 5-24.

Finally, the vertex shader is called once for each reconstructed vertex, using its position and color data.

Figure 5-24 The vertex data as it can be found in the data stream
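
For reference, the built-in VertexPositionColor already exposes exactly this information, which you will re-create by hand in a moment. Here is a minimal sketch (assuming XNA 3.x and an initialized GraphicsDevice named device):

// The built-in format already ships the two pieces of information discussed
// above: the VertexElements describing where the position and color bytes sit,
// and the size of one vertex so the stream can be cut correctly.
VertexDeclaration decl = new VertexDeclaration(device, VertexPositionColor.VertexElements);
int stride = VertexPositionColor.SizeInBytes;   // 16 bytes: 12 for the Vector3, 4 for the Color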

Working Principle

In the first part of this tutorial, you will re-create the VertexPositionColor structure to learn the basics. In the second part, you will extend it into a new custom vertex format.

Recreate the VertexPositionColor Structure

In this section, you will create a MyVertexPositionColor structure (identical to the built-in VertexPositionColor structure) to learn the basics. The structure has to store all the necessary information and consists of three parts:

  • The data of each vertex, stored as a Vector3 for the position and a Color.
  • The size of one vertex, so the video card can cut the data stream into separate vertices.
  • The VertexElements, which inform the video card which data is contained in each vertex and where the video card should cut each vertex to obtain that data.
Step 1

First, add the structure:

public struct MyVertexPositionColor
{
    public Vector3 Position;
    public Color Color;

    public MyVertexPositionColor(Vector3 position, Color color)
    {
        Position = position;
        Color = color;
    }
}

This structure can store a Vector3 holding the position and a Color. That is enough to create vertices and send them to the video card.

Step 2

You need to know how many bytes one vertex occupies, so the video card can cut the data stream into separate vertices. Add the following line to the structure:

public static readonly int SizeInBytes = sizeof(float) * (3 + 1); 

Because this information is the same for all vertices and only needs to be read, you can declare it static readonly.

Each vertex contains a Vector3 and a Color. A Vector3 consists of three floats, and a Color occupies the size of one float. Therefore, the size of one vertex is (the number of bytes in a float) * (3 + 1).

Note: A float occupies 4 bytes, so you could also write this value as 4 * (3 + 1) = 16. Look at Figure 5-24 and verify that each vertex contains 16 bytes and that the byte index at which each new vertex starts is a multiple of 16. This is how the video card cuts the byte stream into separate vertices.

With this size known, the video card can cut the byte stream into separate vertices.
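
If you want to double-check this size from code, here is a quick sanity check (a sketch, not part of the original recipe; it assumes the struct keeps its default sequential layout):

// The size computed by hand should match the managed size of the struct:
// a Vector3 is 12 bytes and an XNA Color packs its channels into 4 bytes.
int computed = sizeof(float) * (3 + 1);   // 4 * 4 = 16
int marshaled = System.Runtime.InteropServices.Marshal.SizeOf(typeof(MyVertexPositionColor));
System.Diagnostics.Debug.Assert(computed == marshaled);   // both should be 16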

Step 3

Finally, add the following code to the structure:

public static readonly VertexElement[] VertexElements =
{
    new VertexElement(0, 0, VertexElementFormat.Vector3,
        VertexElementMethod.Default, VertexElementUsage.Position, 0),
    new VertexElement(0, sizeof(float) * 3, VertexElementFormat.Color,
        VertexElementMethod.Default, VertexElementUsage.Color, 0),
};

This information tells the video card which kinds of data are contained in one vertex and at which byte offset each piece of data can be found. The first VertexElement indicates that each vertex contains position data, and the second indicates color data. Let's discuss each argument in order.

The first argument indicates from which data stream the data should be read. Only advanced programs use multiple vertex streams, so this is usually 0.

TIP: Multiple vertex streams can be very useful. If the Position data never changes but the Color data needs to be updated frequently, you can split the data over two vertex streams; because the Position data remains the same, you only need to transfer the Color data from the CPU to the GPU.
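
To make the tip concrete, here is a hedged sketch of what such a two-stream layout could look like; the positionBuffer and colorBuffer names are hypothetical and not part of this recipe:

// The first argument of each VertexElement selects the stream, so the Position
// data can live in stream 0 and the Color data in stream 1, each backed by its
// own VertexBuffer with its own stride.
VertexElement[] twoStreamElements =
{
    new VertexElement(0, 0, VertexElementFormat.Vector3,
        VertexElementMethod.Default, VertexElementUsage.Position, 0),
    new VertexElement(1, 0, VertexElementFormat.Color,
        VertexElementMethod.Default, VertexElementUsage.Color, 0),
};

// At draw time, each stream would be fed from its own buffer (hypothetical names):
// device.Vertices[0].SetSource(positionBuffer, 0, sizeof(float) * 3);
// device.Vertices[1].SetSource(colorBuffer, 0, sizeof(float) * 1);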

The second argument indicates where the data can be found: it is the byte offset of the data within the vertex. Because the Position data comes first, it is located at offset 0. The Color data comes after the Position data, so you need to know how many bytes the Position data occupies. It takes three floats, so you specify sizeof(float) * 3 (because a float is 4 bytes, you could also write 12). Look at Figure 5-24: the Color data indeed starts at byte 12.

The third argument indicates the format in which the data is stored in the data stream; this is the underlying type of the data. It is Vector3 for the Position and Color for the Color.

The fourth argument is used only for advanced and hardware-specific extensions. An example is N-Patching, where the number of triangles in the geometry is adjusted based on the normals to improve the shading quality of the geometry.

The fifth argument is very important: it indicates to which vertex shader input the data should be linked. Look at Figure 5-24 again and pay attention to the input arguments of the vertex shader; each of them is followed by a semantic, POSITION0 or COLOR0. In this example, you link the first part of the data, containing the Position data, to the POSITION0 semantic, and the second part to the COLOR0 semantic.

The last argument lets you specify multiple versions of each semantic; it corresponds to the trailing 0 in POSITION0 and COLOR0. You will see an example of this in the second half of this tutorial.

After this third step, the video card knows where to obtain the vertex data. In this example, it will pass bytes 0 to 11 (three floats) to the POSITION0 input of the vertex shader and bytes 12 to 15 (the size of one float) to the COLOR0 input.

Complete MyVertexPositionColor Structure

Here is what you have so far:

public struct MyVertexPositionColor
{
    public Vector3 Position;
    public Color Color;

    public MyVertexPositionColor(Vector3 position, Color color)
    {
        Position = position;
        Color = color;
    }

    public static readonly VertexElement[] VertexElements =
    {
        new VertexElement(0, 0, VertexElementFormat.Vector3,
            VertexElementMethod.Default, VertexElementUsage.Position, 0),
        new VertexElement(0, sizeof(float) * 3, VertexElementFormat.Color,
            VertexElementMethod.Default, VertexElementUsage.Color, 0),
    };

    public static readonly int SizeInBytes = sizeof(float) * (3 + 1);
}
Usage of the MyVertexPositionColor Structure

With this structure, you can define your vertices and send them to the video card in a VertexBuffer, as described in tutorial 5-4. This time, however, you use your custom format:

private void InitVertices()
{
    myVertexDeclaration = new VertexDeclaration(device, MyVertexPositionColor.VertexElements);

    MyVertexPositionColor[] vertices = new MyVertexPositionColor[3];
    int i = 0;
    vertices[i++] = new MyVertexPositionColor(new Vector3(1, 1, -1), Color.Red);
    vertices[i++] = new MyVertexPositionColor(new Vector3(3, 5, -1), Color.Green);
    vertices[i++] = new MyVertexPositionColor(new Vector3(5, 1, -1), Color.Blue);

    vertBuffer = new VertexBuffer(device, MyVertexPositionColor.SizeInBytes * vertices.Length, BufferUsage.WriteOnly);
    vertBuffer.SetData<MyVertexPositionColor>(vertices, 0, vertices.Length);
}

The first line of code creates the VertexDeclaration, which will be passed to the video card.

The middle block of code creates an array holding three MyVertexPositionColor vertices, which define a triangle; each vertex stores a position and a color. To create a VertexBuffer based on this array, you also need to specify how many bytes one vertex occupies, so you pass in MyVertexPositionColor.SizeInBytes.

The following code renders the triangle from the VertexBuffer; for details, see tutorial 5-4:

basicEffect.Begin();
foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Begin();
    device.VertexDeclaration = myVertexDeclaration;
    device.Vertices[0].SetSource(vertBuffer, 0, MyVertexPositionColor.SizeInBytes);
    device.DrawPrimitives(PrimitiveType.TriangleList, 0, 1);
    pass.End();
}
basicEffect.End();

Before you draw the triangle, you pass the VertexDeclaration, built from your VertexElements, to the video card so that it knows how to correctly cut the byte stream into useful data.

Custom vertex format

In this second example, you will create a new vertex format that stores a position, texture coordinates, and an extra Vector4, allowing you to store four additional values in each vertex. Figure 5-25 shows two of these vertices in the byte stream.

Figure 5-25 A byte stream containing two MyCustomVertexFormat vertices

Here is the new structure; again you need to define the three main parts:

public struct MyCustomVertexFormat
{
    public Vector3 Position;
    public Vector2 TexCoords;
    public Vector4 Extra;

    public MyCustomVertexFormat(Vector3 Position, Vector2 TexCoords, Vector4 Extra)
    {
        this.Position = Position;
        this.TexCoords = TexCoords;
        this.Extra = Extra;
    }

    public static readonly VertexElement[] VertexElements =
    {
        new VertexElement(0, 0, VertexElementFormat.Vector3,
            VertexElementMethod.Default, VertexElementUsage.Position, 0),
        new VertexElement(0, sizeof(float) * 3, VertexElementFormat.Vector2,
            VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 0),
        new VertexElement(0, sizeof(float) * (3 + 2), VertexElementFormat.Vector4,
            VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 1),
    };

    public static readonly int SizeInBytes = sizeof(float) * (3 + 2 + 4);
}

The first part of the code lets each vertex store a Position, a texture coordinate, and a Vector4.

Next, the VertexElements link them to the vertex shader inputs. The first line links the Vector3 to POSITION0; because this is the first data item, it is found at byte 0 (the second argument).

The second line indicates that the Vector2 containing the texture coordinates is linked to TEXCOORD0. The Position occupies three floats, so the texture coordinates are found at byte sizeof(float) * 3 = 12.

The third line links the Vector4 to another TEXCOORD semantic, because these can be used to pass along extra data. Since TEXCOORD0 is already taken, you set the last argument to 1 to link it to TEXCOORD1.

The Vector4 comes after the Position and texture coordinate data, so it is found at byte sizeof(float) * (3 + 2) = 20, as shown in Figure 5-25.

Finally, one vertex occupies a total of 36 bytes (three floats for the Position, two for the texture coordinates, and four for the Vector4).

Custom format vertices

The following code creates the VertexDeclaration and a VertexBuffer containing three vertices in the custom format:

myVertexDeclaration = new VertexDeclaration(device, MyCustomVertexFormat.VertexElements);

MyCustomVertexFormat[] vertices = new MyCustomVertexFormat[3];
int i = 0;
vertices[i++] = new MyCustomVertexFormat(new Vector3(1, 1, -1), new Vector2(0, 1), new Vector4(-1.0f, 0.5f, 0.3f, 0.2f));
vertices[i++] = new MyCustomVertexFormat(new Vector3(3, 5, -1), new Vector2(0.5f, 0), new Vector4(0.8f, -0.2f, 0.5f, -0.2f));
vertices[i++] = new MyCustomVertexFormat(new Vector3(5, 1, -1), new Vector2(1, 1), new Vector4(2.0f, 0.6f, -1.0f, 0.4f));

vertBuffer = new VertexBuffer(device, MyCustomVertexFormat.SizeInBytes * vertices.Length, BufferUsage.WriteOnly);
vertBuffer.SetData<MyCustomVertexFormat>(vertices, 0, vertices.Length);

Each vertex requires one Vector3, one Vector2, and one Vector4.

Define a Vertex Shader that can use a custom Vertex format

The vertex shader can now accept data from the POSITION0, TEXCOORD0, and TEXCOORD1 semantics, which it accesses through the following code:

// Technique: CustomVertexShader
CVVertexToPixel CVVertexShader(float3 inPos: POSITION0, float2 inTexCoord: TEXCOORD0, float4 inExtra: TEXCOORD1)
{
    CVVertexToPixel Output = (CVVertexToPixel)0;

    float4 origPos = float4(inPos, 1);
    float4x4 preViewProjection = mul(xView, xProjection);
    float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);
    Output.Position = mul(origPos, preWorldViewProjection);

    Output.Extra = sin(xTime*inExtra.xyz);

    Output.TexCoord = inTexCoord;
    Output.TexCoord.x += sin(xTime)*inExtra.w;
    Output.TexCoord.y -= sin(xTime)*inExtra.w;

    return Output;
}

It is not so important how you use the extra input; what matters is that it is available inside the vertex shader. In this example, the 3D position is first mapped to 2D screen coordinates. Then the first three floats of the Vector4 are used as frequency modulators of a sine function, and the result is stored in the output structure. Finally, the texture coordinates are shifted, and the last float of the Vector4 adjusts the strength of the shift.

Below is the pixel shader, which samples the texture at the shifted texture coordinates and adds the three values of the Extra variable to the color channels:

CVPixelToFrame CVPixelShader(CVVertexToPixel PSIn) : COLOR0
{
    CVPixelToFrame Output = (CVPixelToFrame)0;

    Output.Color = tex2D(TextureSampler, PSIn.TexCoord);
    Output.Color.rgb += PSIn.Extra.rgb;

    return Output;
}

Draw from the Vertices

This code sets the effect variables and renders the triangle from your vertices:

effect.Parameters["xWorld"].SetValue(Matrix.Identity);effect.Parameters["xView"].SetValue(fpsCam.ViewMatrix); effect.Parameters["xProjection"].SetValue(fpsCam.ProjectionMatrix); effect.Parameters["xTexture"].SetValue(texture); effect.Parameters["xTime"].SetValue(time); effect.Begin(); foreach (EffectPass pass in effect.CurrentTechnique.Passes){    pass.Begin();     device.VertexDeclaration = myVertexDeclaration;     device.Vertices[0].SetSource(vertBuffer, 0, MyCustomVertexFormat.SizeInBytes);     device.DrawPrimitives(PrimitiveType.TriangleList, 0, 1);     pass.End();}effect.End(); 
Code

The XNA code was shown earlier. The following is the content of the HLSL file:

float4x4 xWorld;
float4x4 xView;
float4x4 xProjection;
float xTime;

Texture xTexture;
sampler TextureSampler = sampler_state
{
    texture = <xTexture>;
    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU = mirror;
    AddressV = mirror;
};

struct CVVertexToPixel
{
    float4 Position : POSITION;
    float2 TexCoord : TEXCOORD0;
    float3 Extra : TEXCOORD1;
};

struct CVPixelToFrame
{
    float4 Color : COLOR0;
};

// Technique: CustomVertexShader
CVVertexToPixel CVVertexShader(float3 inPos: POSITION0, float2 inTexCoord: TEXCOORD0, float4 inExtra: TEXCOORD1)
{
    CVVertexToPixel Output = (CVVertexToPixel)0;

    float4 origPos = float4(inPos, 1);
    float4x4 preViewProjection = mul(xView, xProjection);
    float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);
    Output.Position = mul(origPos, preWorldViewProjection);

    Output.Extra = sin(xTime*inExtra.xyz);

    Output.TexCoord = inTexCoord;
    Output.TexCoord.x += sin(xTime)*inExtra.w;
    Output.TexCoord.y -= sin(xTime)*inExtra.w;

    return Output;
}

CVPixelToFrame CVPixelShader(CVVertexToPixel PSIn) : COLOR0
{
    CVPixelToFrame Output = (CVPixelToFrame)0;

    Output.Color = tex2D(TextureSampler, PSIn.TexCoord);
    Output.Color.rgb += PSIn.Extra.rgb;

    return Output;
}

technique CustomVertexShader
{
    pass Pass0
    {
        VertexShader = compile vs_1_1 CVVertexShader();
        PixelShader = compile ps_2_0 CVPixelShader();
    }
}
