Shader tutorial

This section describes how to create your own shader. It will not be the most useful shader in the world, but it shows how shaders work, which tools you can use, and it prepares you for the more exciting shaders coming later.

Take a look at Figure 6-4; this is your goal for this section. It shows the apple model from the previous chapter, but this time your new shader file, SimpleShader.fx, is applied to it, using another texture, and some text is displayed above it.


Figure 6-4

SimpleShader.fx uses a normal vertex shader that just transforms the 3D data, and a simple specular per-pixel technique for the pixel shader. The pixel shader processes every single pixel and computes the ambient, diffuse, and specular components for it. The fixed-function pipeline cannot be used for this kind of calculation, because you cannot program any custom rules for its pixel processing; the only way it can show specular effects is through the vertex shader. You may ask why computing the specular component in the vertex shader is so terrible. Well, if you have a low-polygon object, like the apple here or the sphere from the earlier normal mapping example, computing the final color per vertex looks bad, because the color can only be calculated at each vertex. If you look at the wireframe of the apple (Figure 6-5), you can see that it has only a limited number of vertices (the points where the lines connect), and all the data between them is not calculated correctly, only interpolated. If a highlight like the one in Figure 6-4 falls between two vertices, it will not be visible at all. By the way, if you want to render 3D data in wireframe mode, just add the following line before rendering:

 
BaseGame.Device.RenderState.FillMode = FillMode.WireFrame;


Figure 6-5

FX Composer

To get started with your one and only SimpleShader.fx shader, you use the free FX Composer tool. You can download it from the developer section of the NVIDIA homepage (www.nvidia.com), or just Google for it.

After you install and start FX Composer, you see the screen shown in Figure 6-6. It displays several panels and can even be used by artists to test textures and shader techniques and to modify shader parameters such as color values, effect strength, and more. The most important panel for you as a developer is the one in the middle, which shows the source code of the currently selected .fx file (similar to Visual Studio). As you can see, an .fx file looks very similar to C# or C++ code. It has syntax highlighting, and many keywords look familiar from C: string is a text string, float is a floating-point number, and so on. There are additional types, however, such as float3 or float4x4. The number after the float name just indicates the dimension; float3 contains three floating-point values (x, y, and z), just like the Vector3 structure in XNA. float4x4 describes a matrix and contains 16 float values (4x4), the same format as the Matrix structure in XNA. The last variable type you have to know is texture. Textures are defined with the texture type, and you must also specify a sampler for each texture to tell the shader how to use it (for example, which filtering to use when the texture is scaled).


Figure 6-6
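To get a feel for these types, here is a minimal sketch of some HLSL declarations (the variable names are made up for this illustration and are not part of SimpleShader.fx):

string description = "Just a text string";
float alpha = 1.0f;                // a single floating-point value
float3 upDirection = {0, 1, 0};    // three floats, like XNA's Vector3
float4x4 someMatrix;               // 4x4 matrix, like XNA's Matrix
texture someTexture;               // only usable through a sampler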

In the same code you can see optional semantic values displayed after the variables (WorldViewProjection, Diffuse, Direction, and so on). They tell FX Composer how to fill in these values and what they are used for. For your XNA program these semantics do nothing special, but specifying them is very common practice anyway. It allows you to use the shader in other programs such as FX Composer or 3D Studio Max, and it is also useful when you read the shader later, because the semantic tells you exactly how a value is intended to be used. In XNA you are not restricted by them; you could, for example, put the world matrix into a variable with the ViewInverse semantic, but that would be quite confusing, wouldn't it?

The other panels are not as important to you right now, but here is a short description of each one:

  • Use the toolbar at the top to quickly load a shader and save your current file. The last button of the toolbar builds your shader and shows any compiler errors on the log panel at the bottom (like Visual Studio). Every time you build a shader, it is also saved automatically. You should use this button (or press Ctrl+F7) as often as possible to make sure your shader always compiles and is saved.

  • The Materials panel on the left shows a list of all the shaders currently loaded into FX Composer, each with a small preview sphere. Whenever you change a shader, the sphere changes accordingly. The most important button here is "Assign to Selection", which assigns the material to the object selected in the Scene panel.

  • The Textures panel shows the textures used by the current shader. If you load external shader and texture files, the textures often cannot be found because they are in a different folder than the .fx shader file; the bitmaps that could not be loaded are shown accordingly. Make sure the textures are loaded; otherwise the shader output is usually just black and useless.

  • The Properties panel on the right shows all the parameters that can be set in the shader (as long as they are not filled in automatically by FX Composer, like the world and viewInverse matrices). To make the shader work the same way in your XNA engine, you have to set all these values in the game engine too; in a 3D world they depend on the current camera position, the viewport, and the object matrices. If a parameter is not set, the default value from the source code of the .fx file is used. To make sure all parameters always contain valid values, even when the user or the engine never sets them, you should always assign useful default values; for example, set the diffuse color to white:

    float4 diffuseColor : Diffuse = {1.0f, 1.0f, 1.0f, 1.0f};
  • Finally, the Scene panel shows a simple test object, such as the standard sphere, for trying out your shader. You can also change it to a cube, a cylinder, or other objects. You can even import model files and play around with them, but most files do not work very well, and the camera in FX Composer always gets messed up when you import a scene. Just stick with the standard sphere object and do all advanced testing in your XNA engine. FX Composer 2.0 is much better at loading and handling custom 3D data and model files; it can use Collada files and even manage all the shaders used by every mesh of your model. If it is available by the time you read this, get FX Composer 2.0 and use it.
FX File Layout

If you have used OpenGL before, where you have to write vertex and fragment shaders yourself (in DirectX they are called vertex and pixel shaders), you will be glad to hear that an .fx file puts all the shader code in one place. An .fx file can contain many different vertex and pixel shader blocks to support multiple target configurations, for example pixel shader 1.1 for the GeForce 3 and pixel shader 2.0 for the NVIDIA GeForce FX or the ATI Radeon 9x series. The ability to support multiple vertex and pixel shaders in one .fx file has another useful side effect: similar shaders are kept together and can share common helper methods and the same shader parameters, which makes shader development easier. For example, if you have a normal mapping shader for shiny metal surfaces and you need another one with a more diffuse appearance for stone, you can put the different shader techniques into the same shader file and then select the appropriate technique based on the material you want to display in your engine.
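As a rough illustration of this idea, here is a minimal sketch of an .fx file with two techniques sharing the same parameters and vertex shader (all names and the trivial pixel shaders are invented for this example; they are not from SimpleShader.fx):

// Shared parameters, used by both techniques:
float4 diffuseColor : Diffuse = {0.5f, 0.5f, 0.5f, 1.0f};
float4x4 worldViewProj : WorldViewProjection;

// Shared vertex shader:
float4 VS_Shared(float3 pos : POSITION) : POSITION
{
  return mul(float4(pos, 1), worldViewProj);
} // VS_Shared(pos)

// Exaggerated brightness to fake a shiny metal look:
float4 PS_Metal() : COLOR
{
  return diffuseColor * 1.5f;
} // PS_Metal()

// Flat diffuse look for stone:
float4 PS_Stone() : COLOR
{
  return diffuseColor;
} // PS_Stone()

technique ShinyMetal
{
  pass P0
  {
    VertexShader = compile vs_2_0 VS_Shared();
    PixelShader = compile ps_2_0 PS_Metal();
  } // pass P0
} // ShinyMetal

technique DiffuseStone
{
  pass P0
  {
    VertexShader = compile vs_2_0 VS_Shared();
    PixelShader = compile ps_2_0 PS_Stone();
  } // pass P0
} // DiffuseStone

Your engine would then simply pick the ShinyMetal or DiffuseStone technique depending on the material of the mesh being rendered.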

As an example of a typical .fx file, Figure 6-7 shows the layout of SimpleShader.fx. More complex shaders contain more vertex and pixel shaders and more techniques. You don't have to write a new vertex or pixel shader for every technique; you can combine them any way you like. Some shaders also use multiple passes, which means that everything is rendered with the first pass, and then the second pass renders the exact same data again to add more effects or layers to the material. Using multiple passes is usually too slow for real-time applications, because rendering 10 passes means that your rendering time in this shader will be roughly 10 times higher than just rendering one pass. Sometimes multiple passes can still be useful, either to implement effects that would otherwise not fit into the shader instruction limits, or when the result of the first pass can be modified by a second post-screen pass to achieve a much better effect. Blur effects, for example, eat up a huge number of instructions, because to get a good blur result you have to mix many pixels to compute each blurred point.


Figure 6-7

For example, a blur over a 10x10 area requires 100 texture read instructions per pixel. That sounds bad if you have one or two million pixels on the screen and want to blur them all. In this case it is much better to blur only in the x direction in the first pass, and then let a second pass blur the intermediate result in the y direction with another series of 10 texture reads. Your shader now runs about five times faster and looks almost the same. If you additionally downsample the 1600x1200 background image to 400x300 first and blur that instead, you get an even bigger performance boost, roughly another 16-fold improvement (not so surprising now, is it?).
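The following sketch shows how such a separable two-pass blur could look (the sampler setup, tap count, and technique layout are assumptions for illustration; Chapter 8 covers the real post-screen shaders):

sampler sceneSampler;   // assumed to hold the image to be blurred
float2 texelSize = {1.0f / 400.0f, 1.0f / 300.0f};

// First pass: 10 reads along x instead of 10x10 reads at once.
float4 PS_BlurX(float2 texCoord : TEXCOORD0) : COLOR
{
  float4 sum = 0;
  for (int i = -5; i < 5; i++)
    sum += tex2D(sceneSampler, texCoord + float2(i * texelSize.x, 0));
  return sum / 10.0f;
} // PS_BlurX(texCoord)

// Second pass: blur the intermediate result along y.
float4 PS_BlurY(float2 texCoord : TEXCOORD0) : COLOR
{
  float4 sum = 0;
  for (int i = -5; i < 5; i++)
    sum += tex2D(sceneSampler, texCoord + float2(0, i * texelSize.y));
  return sum / 10.0f;
} // PS_BlurY(texCoord)

technique SeparableBlur
{
  // The engine renders pass BlurX into a render target and feeds
  // that result back in through sceneSampler for pass BlurY.
  pass BlurX { PixelShader = compile ps_2_0 PS_BlurX(); }
  pass BlurY { PixelShader = compile ps_2_0 PS_BlurY(); }
} // SeparableBlur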

Chapter 8 talks about post-screen shaders in detail; for now, get back to writing the SimpleShader.fx file. As you can see in Figure 6-7, the shader file uses quite a few shader parameters. Some of them are not that important, because you could also hard-code the material settings directly into the shader, but exposing them allows you to change the color and appearance of the material from the engine and to reuse the shader for many different materials. Other parameters, such as the matrices and the texture, are very important; if the engine does not set them, the shader cannot be used at all. In the engine, the material data, such as the material color and texture values, should be loaded when you create the shader, whereas the world matrix and the light direction should be set every frame, because this data can change from frame to frame.

Parameters

If you want to follow along with the shader creation, you may want to open FX Composer now and start a new .fx file. Select File > New to create a new file and delete all of its content, which leaves you with a completely blank .fx file.

The first thing you may want to do, so you can quickly recall what this file is about when you open it again later, is to add a line of description or a comment at the top of the file:

 
// Chapter 6: Writing a simple shader for XNA

As you saw in the SimpleShader.fx file overview, you first need several matrices to transform the 3D data in the vertex shader: the worldViewProj matrix, the world matrix, and the viewInverse matrix. The worldViewProj matrix combines the world matrix, which places the object you want to render at the correct position in the world; the view matrix, which transforms the 3D data into view space (see Figure 5-7 in the previous chapter); and the projection matrix, which puts view-space points at the correct position on the screen. This combined matrix allows you to transform each input position to its final output position with just one matrix multiplication. The world matrix is then used for calculations in the 3D world, such as computing world-space normals for the lighting calculation. The viewInverse matrix is usually only used to get the camera position, which can be extracted from its 4th row:

 
float4x4 worldViewProj : WorldViewProjection;
float4x4 world : World;
float4x4 viewInverse : ViewInverse;

Each of these matrices uses the float4x4 type (the same data format as the Matrix structure in XNA). To support applications such as FX Composer or 3D Studio Max, you should describe these values with the shader semantics shown. This is important when a modeler wants to see what a 3D model looks like with your shader applied. The cool thing is that the model then looks absolutely the same in FX Composer, in 3D Studio Max, and in your engine, which can save you a lot of time during game development, especially by reducing the testing required to get the appearance of all 3D objects right.

Now it is time to save the file. Press the Build button (or Ctrl+F7) and enter a name for your new shader. Name it SimpleShader.fx and put it into your XnaGraphicEngine Content folder, so you can use it from XNA right away. After saving, FX Composer tells you "There were no techniques" and "Compilation failed" in the Tasks panel below the source code. Fine; you will implement the techniques in a moment, but first you have to finish the remaining parameters. Because your shader uses a light to brighten up the apple (see Figure 6-4), you need a light, which can be either a point light or a directional light. Point lights are a little more complicated to use, because you have to calculate the light direction for every single vertex (and, if you like, even for every single pixel). Spot lights make the computation even more complex. Another problem with point lights is that they usually fade out over distance, and if your 3D world is big, you will need a lot of them. Directional lights are much simpler and are useful for quickly simulating the sun in an outdoor environment, like in the game you create in the following chapters.

 
float3 lightDir : Direction
<
  string Object = "DirectionalLight";
  string Space = "World";
> = {1, 0, 0};

Apart from ambient light, which just adds a general brightness to all materials, the following light types are commonly used in games:

    • Directional lights: The simplest light type, and easy to implement. You can use the light direction directly in the shader for the lighting calculations. In the real world no truly directional light exists; even the sun is just a huge point light far, far away, but directional lights are great for quickly lighting outdoor scenes.

    • Point lights: A single point light is not much harder to compute, but you have to calculate the attenuation of the light over distance, and if you need the light direction, it has to be computed in the shader as well, which costs performance (see the sketch after this list). The main problem with point lights, however, is that for any scene bigger than a single room you need more than one of them. 3D shooters usually use techniques to limit the number of point lights visible at the same time, but for outdoor games such as strategy games it is much easier to use one directional light for the scene and to add a few simple point lights only for special effects.

    • Spot lights: A spot light is like a point light that shines in only one direction and, thanks to the light cone computation, illuminates only a small area. Spot lights are a bit harder to compute, but if you can skip the expensive part of the lighting calculation for pixels outside the cone (for example, when using a complex normal mapping shader with many spot lights), they can end up a lot faster than point lights. Be aware that real conditional statements such as "if" are only available in Shader Model 3.0; earlier shader versions accept "if" statements and "for" loops too, but they are simply unrolled and flattened, so you do not get the performance benefit you would get with Shader Model 3.0.
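To illustrate the extra work a point light causes, here is a minimal sketch of a per-vertex point-light calculation (lightPos, lightRange, and the linear falloff formula are assumptions made up for this example; SimpleShader.fx itself only uses the directional lightDir):

float3 lightPos = {0, 10, 0};   // hypothetical point-light position
float lightRange = 20.0f;       // hypothetical falloff distance

// Unlike a directional light, both the direction and the
// attenuation have to be computed for every vertex:
float3 CalcPointLightDir(float3 worldPos, out float attenuation)
{
  float3 toLight = lightPos - worldPos;
  float distance = length(toLight);
  // Simple linear falloff, clamped to the 0..1 range:
  attenuation = saturate(1.0f - distance / lightRange);
  return toLight / distance;   // normalized light direction
} // CalcPointLightDir(worldPos, attenuation)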

The lightDir code shown previously looks a bit more complicated than the matrices. The first part is almost the same: float3 tells you that this is a Vector3, and the Direction semantic tells you that lightDir is used as a directional light. The angle brackets define the Object and Space variables. These are called annotations, and they describe how other programs, such as FX Composer or 3D Studio Max, should use the parameter. These programs now know how to handle this value, and they automatically connect it to a light object that may already exist in the scene. This way you can load the shader file in a 3D program and it works immediately, without manually wiring up all the lights, material settings, and textures.

Next you define the material settings. You use the same material settings as the standard DirectX material, which means shaders like this one can also be used in applications such as 3D Studio Max, or with older DirectX materials, and all the color values are applied automatically and correctly. In the engine you usually set only the ambient and diffuse colors, but sometimes you also specify a different specular color and a shininess value for the specular color calculation. You may notice that no annotations are used here; you could specify them, but the material settings work fine in both FX Composer and 3D Studio Max even without them. The engine just uses the default values in case you do not want to overwrite them, for example in a unit test later.

 
float4 ambientColor : Ambient = {0.2f, 0.2f, 0.2f, 1.0f};
float4 diffuseColor : Diffuse = {0.5f, 0.5f, 0.5f, 1.0f};
float4 specularColor : Specular = {1.0f, 1.0f, 1.0f, 1.0f};
float shininess : SpecularPower = 24.0f;

Finally, your shader needs a texture to look a little more interesting than a plain gray sphere or apple. Instead of the apple texture from the previous chapter, you use a new test texture, which becomes more interesting when you add normal mapping in the next chapter. The texture is called marble.dds (see Figure 6-8):

 
texture diffuseTexture : Diffuse
<
  string ResourceName = "marble.dds";
>;
sampler diffuseTextureSampler = sampler_state
{
  texture = <diffuseTexture>;
  MinFilter = Linear;
  MagFilter = Linear;
  MipFilter = Linear;
};


Figure 6-8

The ResourceName annotation is only used by FX Composer; it automatically loads the marble.dds file from the folder the shader is in (make sure the marble.dds file is also in the Content folder of XnaGraphicEngine). The sampler just specifies that you want to use linear filtering for the texture.
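The sampler states are also where you would change the filtering. As a sketch, a sampler with anisotropic minification could look like this (this sampler is not part of SimpleShader.fx, just an illustration of the available states):

sampler anisoTextureSampler = sampler_state
{
  texture = <diffuseTexture>;
  MinFilter = Anisotropic;   // sharper results at flat viewing angles
  MagFilter = Linear;
  MipFilter = Linear;
  MaxAnisotropy = 8;
};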

Vertex Input Format

Before you can finally write the vertex and pixel shaders, you have to specify the way vertex data is transferred from the game to the vertex shader, which is handled by the VertexInput structure. It uses the same data layout as the VertexPositionNormalTexture structure in XNA, which is used for the apple model. The position is transformed in the vertex shader by the worldViewProj matrix defined earlier, the texture coordinate is just used to tell the pixel shader the texture coordinates of every pixel you render later, and the normal is required for the lighting calculation.

Always make sure that your game code and the shader use the same vertex input format. If you don't, wrong data may end up in the texture coordinates, or vertex data may be missing, rendering a complete mess. The best practice is to define your own vertex structure in the application (see TangentVertex in the next chapter) and then define the identical vertex structure in the shader. Before your game code can call the shader, it also has to set up the vertex declaration that describes the layout of the vertex structure. You can find more details about this in the next chapter.

 
struct VertexInput
{
  float3 pos : POSITION;
  float2 texCoord : TEXCOORD0;
  float3 normal : NORMAL;
};

You must also define the data that is passed from the vertex shader to the pixel shader in a similar way. This may sound unfamiliar at first, but I promise it is the last thing you have to do before you get to the actual shader code. If you look at Figure 6-9, you can see the path the 3D geometry takes, from the content data of your application through the shaders on the graphics hardware until it ends up on the screen the way you want it. Although the whole process is more complex than using the fixed-function pipeline from the old days of DirectX, it allows you to optimize the code at every single point, and you can manipulate each vertex while it is processed in the vertex shader and change the final pixel color while it is rendered to the screen in the pixel shader.


Figure 6-9

The VertexOutput structure of your shader passes the transformed vertex position, the texture coordinate for the texture to be applied, and the normal and halfVec vectors needed to compute the specular color directly in the pixel shader. Both vectors must be passed as texture coordinates, because the data transferred from the vertex shader to the pixel shader can only be position, color, or texture coordinate data. That's no problem; you can still use the data in the same form as in the VertexInput structure. It is only important that FX Composer, your application, and any other program that uses this shader use the correct semantics (POSITION, TEXCOORD0, and NORMAL) in the VertexInput structure.

Because you define the VertexOutput structure yourself and it is only used internally by the shader, you can put anything you like into it, but you should keep it as small as possible, and you are also limited by the number of texture coordinates the pixel shader can accept (4 in pixel shader 1.1, 8 in pixel shader 2.0).

 
struct VertexOutput
{
  float4 pos : POSITION;
  float2 texCoord : TEXCOORD0;
  float3 normal : TEXCOORD1;
  float3 halfVec : TEXCOORD2;
};
Vertex Shader

The vertex shader takes the VertexInput data and transforms it to the screen position for the pixel shader, which finally renders the output pixels for every visible polygon. The first few lines of almost every vertex shader look very similar, but at the end of the vertex shader you often precompute values for use in the pixel shader. If you target pixel shader 1.1, you simply cannot do certain things in the pixel shader, such as normalizing vectors or executing complex mathematical functions such as pow (power). But even when you use pixel shader 2.0, as this shader does, you may want to precompute some values to speed up the pixel shader, because it is executed for every single visible pixel. You usually have far fewer vertices than pixels, so doing the complex calculations in the vertex shader improves the overall execution speed of the pixel shader.

 
// Vertex shader
VertexOutput VS_SpecularPerPixel(VertexInput In)
{
  VertexOutput Out = (VertexOutput)0;
  float4 pos = float4(In.pos, 1);
  Out.pos = mul(pos, worldViewProj);
  Out.texCoord = In.texCoord;
  Out.normal = mul(In.normal, world);
  // Eye pos
  float3 eyePos = viewInverse[3];
  // World pos
  float3 worldPos = mul(pos, world);
  // Eye vector
  float3 eyeVector = normalize(eyePos - worldPos);
  // Half vector
  Out.halfVec = normalize(eyeVector + lightDir);
  return Out;
} // VS_SpecularPerPixel(In)

The vertex shader takes the VertexInput structure as a parameter, which is automatically filled with the 3D application data and passed in through the shader technique defined at the end of the .fx file. The important part here is the VertexOutput structure, which is returned from the vertex shader and passed on to the pixel shader. The data is not simply handed over to the pixel shader one vertex at a time; instead, all values are interpolated between the vertices of each polygon for every rendered pixel (see Figure 6-10).


Figure 6-10

This interpolation is a good thing for positions and color values, because the output looks much better when the values are interpolated correctly. However, normalized vectors get messed up by the GPU's automatic interpolation. To fix this, you have to re-normalize the vectors in the pixel shader (see Figure 6-11). Sometimes this can be skipped, because the artifacts are barely visible, but for your per-pixel specular calculation they would be visible on every low-polygon object. If you use pixel shader 1.1, you cannot use the normalize method in the pixel shader; instead you can use a helper cube map, which contains a pre-computed normalized value for every possible input value. For more details, see the normal mapping and parallax mapping shader effects in the following chapters.


Figure 6-11
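The cube map trick mentioned above could look roughly like the following sketch (normalizeCubeTexture is an assumed, pre-built normalization cube map; it is not part of SimpleShader.fx):

texture normalizeCubeTexture;   // assumed pre-built normalization cube map
samplerCUBE normalizeCubeSampler = sampler_state
{
  texture = <normalizeCubeTexture>;
  MinFilter = Linear;
  MagFilter = Linear;
  MipFilter = None;
};

float3 NormalizeViaCubeMap(float3 vec)
{
  // The cube map stores normalized vectors packed into the 0..1
  // color range, so the lookup result is unpacked back to -1..1:
  return texCUBE(normalizeCubeSampler, vec).xyz * 2 - 1;
} // NormalizeViaCubeMap(vec)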

If you take a quick look at the source code again (or while writing your first shader yourself), you can see that the vertex shader starts with the calculation of the output position on the screen. Because all matrix operations expect a Vector4, you first convert your Vector3 input value to a Vector4 and set the w component to 1, and then transform it with the worldViewProj matrix (transforming just means multiplying the vector by the matrix).

The texture coordinates are just passed along; the pixel shader doesn't care how you generate them here. You could, for example, multiply the texture coordinates or add an offset; detail mapping or water shaders use different multiplication factors and offsets. Sometimes it is also helpful to duplicate the texture coordinates and apply them multiple times in the pixel shader.
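Such a manipulation could look like the following sketch (detailFactor and scrollOffset are hypothetical parameters invented for this example):

float detailFactor = 8.0f;           // hypothetical tiling factor
float2 scrollOffset = {0.1f, 0.0f};  // hypothetical offset, e.g. for water

float2 ManipulateTexCoord(float2 texCoord)
{
  // Tile the texture 8 times and shift it a little; a water shader
  // would animate scrollOffset with a time parameter.
  return texCoord * detailFactor + scrollOffset;
} // ManipulateTexCoord(texCoord)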

Each normal vector of the apple model is then transformed into world space. This is important when you rotate the apple model around: all the normals are rotated too, because otherwise the lighting would look incorrect; the lightDir value lives in world space and doesn't know how much each model is rotated. Before the vertex data is transformed with the world matrix, it is in so-called object space, which can also be used for several effects if you like (for example, wobbling an object around or stretching it in the direction of a target).
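Such an object-space effect could be sketched like this (time and wobbleAmount are hypothetical parameters, not part of SimpleShader.fx):

float time : Time;           // hypothetical timer parameter
float wobbleAmount = 0.1f;   // hypothetical effect strength

float3 WobblePosition(float3 pos)
{
  // Still in object space, before the world matrix is applied:
  pos.x += sin(time * 2 + pos.y * 4) * wobbleAmount;
  return pos;
} // WobblePosition(pos)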

The last thing done in the vertex shader is the calculation of the half vector between the light direction and the eye vector, which helps you compute the specular color in the pixel shader. As mentioned before, it is more effective to compute this value in the vertex shader than to recompute it for every single pixel over and over again. The half vector is used for Phong shading and produces a specular highlight when you look at an object from a direction close to the light direction (see Figure 6-12).


Figure 6-12

Pixel Shader

The pixel shader is responsible for the final color of every rendered pixel on the screen. For a first test, you can just output any color you like; for example, the following code outputs plain red for every rendered pixel:

 
// Pixel shader
float4 PS_SpecularPerPixel(VertexOutput In) : COLOR
{
  return float4(1, 0, 0, 1);
} // PS_SpecularPerPixel(In)

If you press Build now, the shader still won't compile, because you haven't defined a technique yet. Just add the following technique to make the shader work. The syntax of a technique is always similar; usually you only need one pass (called P0 here), and in it you define which vertex and pixel shaders are used by compiling them with the vertex and pixel shader versions you want to target:

 
technique SpecularPerPixel
{
  pass P0
  {
    VertexShader = compile vs_2_0 VS_SpecularPerPixel();
    PixelShader = compile ps_2_0 PS_SpecularPerPixel();
  } // pass P0
} // SpecularPerPixel

Now you can finally compile the shader in FX Composer, and you should see the output shown in Figure 6-13. Make sure you have assigned the shader in the FX Composer Scene panel (click the sphere, select the SimpleShader.fx material in the Materials panel, and click "Assign to Selection").


Figure 6-13

Next you should put the marble.dds texture onto the sphere. This is done with the tex2D method in the pixel shader, which expects a texture sampler as the first parameter and the texture coordinates as the second parameter. Replace the return float4 line from the previous code with the following lines to texture your 3D object:

 
float4 textureColor = tex2D(diffuseTextureSampler, In.texCoord);
return textureColor;

After compiling the shader you should now see the result shown in Figure 6-14. If you just see a black sphere, or no sphere at all, the marble.dds texture was probably not loaded (check the Textures panel and make sure the texture is loaded as described earlier; you can also click diffuseTexture in the Properties panel and load it yourself).


Figure 6-14

The last thing you have to do is compute the diffuse and specular color components based on the lightDir and halfVec values. As mentioned before, you also want to make sure the interpolation artifacts are removed by re-normalizing the normal in the pixel shader.

 
// Pixel shader
float4 PS_SpecularPerPixel(VertexOutput In) : COLOR
{
  float4 textureColor = tex2D(diffuseTextureSampler, In.texCoord);
  float3 normal = normalize(In.normal);
  float brightness = dot(normal, lightDir);
  float specular = pow(dot(normal, In.halfVec), shininess);
  return textureColor *
    (ambientColor + brightness * diffuseColor) +
    specular * specularColor;
} // PS_SpecularPerPixel(In)

The diffuse color is computed as the dot product of the re-normalized normal (which is in world space, as discussed in the vertex shader section earlier in this chapter) and lightDir, which is also in world space. Whenever you perform matrix multiplications, dot products, or cross products, it is important that all operands are in the same space; otherwise the results will be completely wrong. The dot product is exactly what you need for the diffuse color: if lightDir and the normal point in the same direction, the normal is facing the sun and the diffuse color reaches its maximum (1.0); if they are at 90 degrees, the dot product returns 0 and the diffuse component is zero. To still see the sphere from the dark side, the ambient color is added, which also lights the sphere where no diffuse or specular light is visible.

The specular color is computed with the Phong formula, which takes the dot product of the normal and the half vector calculated in the vertex shader, and then raises the result to the power of the shininess factor, which drastically shrinks the area affected by the specular highlight. The higher the shininess value, the smaller the highlight (tweak the shininess value up and down if you want to see the effect). At the end of the pixel shader you add up all the color values, multiply in the texture color, and return everything to be painted onto the screen (see Figure 6-15).
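One small refinement worth knowing about, which is not part of the original shader: if the light points away from the normal, the dot products become negative, which darkens the ambient term and feeds an invalid negative base into pow. A common fix is to clamp the dot products with saturate:

// Clamp negative dot products to 0 so surfaces facing away from
// the light don't subtract from the ambient term or break pow:
float brightness = saturate(dot(normal, lightDir));
float specular = pow(saturate(dot(normal, In.halfVec)), shininess);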


Figure 6-15

Now you are done with all the work on the shader. The saved shader file can even be used by other programs, such as 3D Studio Max, to show artists how the 3D models will look later in the game engine. Next you will get the shader running in your own game engine.
