http://blog.csdn.net/iaccepted/article/details/45826539
For a while now I have been working on an indoor scene. I first finished rendering it, and the result was acceptable; then I added shadows to make it more realistic. Below is a record of the main problems encountered along the way.
1. Because the imported scene is fairly large (about 200 MB), loading it on iOS triggers frequent memory warnings and the app eventually gets killed by the system. The solution is to compress the data by changing its type. The double-precision vertex coordinates cannot be touched, since changing them would seriously hurt the accuracy of the scene. What can be changed are the normals and UVs: within a reasonable precision range they do not need a double or even a float, a short is enough. Multiply each normal and UV by a scale factor (for example 10000) and store the result as a short, which saves a great deal of space. In preliminary tests, loading the whole scene now takes a bit over 300 MB of memory, which the iPad can handle comfortably. When the shader actually uses the data it only needs to divide the scale factor back out, so there is almost no loss of efficiency but a greatly reduced memory footprint.
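A minimal sketch of this packing scheme, assuming the scale factor 10000 mentioned above; the struct and function names are my own illustration, not the original code:

    #include <cstdint>

    // Normals are in [-1, 1] and UVs in [0, 1], so value * 10000
    // still fits comfortably in a signed 16-bit integer.
    static const float kScale = 10000.0f;

    struct PackedVertex {
        double  position[3];  // full precision kept for positions
        int16_t normal[3];    // normal * 10000, restored in the shader
        int16_t uv[2];        // uv * 10000, restored in the shader
    };

    PackedVertex pack(const double pos[3], const float n[3], const float t[2]) {
        PackedVertex v;
        for (int i = 0; i < 3; ++i) v.position[i] = pos[i];
        for (int i = 0; i < 3; ++i) v.normal[i] = static_cast<int16_t>(n[i] * kScale);
        for (int i = 0; i < 2; ++i) v.uv[i] = static_cast<int16_t>(t[i] * kScale);
        return v;
    }

In the vertex shader the original value is then restored with a single division, e.g. normal = aNormal / 10000.0.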
2. The meshes in the scene use different materials. At load time, meshes that share a material can be grouped together, so that material information is easy to supply when drawing. I started out writing my own classes to load the scene and its materials, but in the end, for reliability and for future support of model loading in other formats, I chose the open-source library Assimp to handle model loading. Get the source from GitHub and compile a version that supports armv7 and arm64, then it can be used on iOS. The project also needs to link libz, libc++, libstdc++ and related frameworks.
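A sketch of loading a scene with Assimp and grouping its meshes by material index; the grouping container and function name are my own illustration:

    #include <assimp/Importer.hpp>
    #include <assimp/postprocess.h>
    #include <assimp/scene.h>
    #include <map>
    #include <vector>

    // Meshes sharing an aiMaterial index can later be drawn in one batch.
    // The importer keeps ownership of the scene, so the aiMesh pointers
    // stay valid as long as the importer is alive.
    std::map<unsigned int, std::vector<const aiMesh*>>
    loadMeshesByMaterial(Assimp::Importer& importer, const char* path) {
        std::map<unsigned int, std::vector<const aiMesh*>> byMaterial;
        const aiScene* scene = importer.ReadFile(
            path, aiProcess_Triangulate | aiProcess_GenSmoothNormals);
        if (!scene || !scene->HasMeshes()) return byMaterial;
        for (unsigned int i = 0; i < scene->mNumMeshes; ++i) {
            const aiMesh* mesh = scene->mMeshes[i];
            byMaterial[mesh->mMaterialIndex].push_back(mesh);
        }
        return byMaterial;
    }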
3. Shadow mapping principle: put simply, first render the scene's depth information from the light's point of view, i.e. place the camera at the light source and render only depth, writing it into a depth texture for convenience. Then pass this depth texture into the shader for the normal camera pass. For each fragment, compare its depth after the shadow transform with the value stored in the depth texture: if the current depth is greater than the stored value, the point is obviously in shadow and needs to be darkened; otherwise it is lit.
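A minimal sketch of the light-view depth pass on OpenGL ES 2.0, assuming the GL_OES_depth_texture extension so depth can be rendered into a texture; the function name and resolution are illustrative, and a depth-only framebuffer like this may behave differently across devices:

    #include <OpenGLES/ES2/gl.h>

    GLuint createShadowFBO(GLsizei size, GLuint* outDepthTex) {
        GLuint depthTex, fbo;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        // GL_DEPTH_COMPONENT as a texture format requires GL_OES_depth_texture.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, size, size,
                     0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        // Depth-only attachment: the light pass writes no color.
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);
        *outDepthTex = depthTex;
        return fbo;
    }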
4. After the depth texture is generated it is passed into the shader. The problem was that the depth information was wrong from the start: sampling the depth texture as a normal texture produced completely garbled results. Later I found the following sentence on the OpenGL website:
If a texture has a depth or depth-stencil image format and has depth comparisons activated, it cannot be used with a normal sampler. Attempting to do so results in undefined behavior. Such textures must be used with a shadow sampler.
On Windows I never ran into this: a texture with a depth image format could be sampled and displayed normally. But on iOS with OpenGL ES 2.0 it does not work, apparently hitting exactly this undefined behavior. The fix is sketched below.
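The fix is to enable depth comparison on the texture and read it through a shadow sampler. A sketch, assuming the GL_EXT_shadow_samplers extension that provides sampler2DShadow on OpenGL ES 2.0 (the _EXT enum names come from that extension, not from the original post):

    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>

    // With comparison enabled, sampling returns a pass/fail result
    // instead of the raw depth value, as the shadow test requires.
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_EXT,
                    GL_COMPARE_REF_TO_TEXTURE_EXT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_EXT, GL_LEQUAL);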
5. The texture was then received in the shader as a sampler2DShadow, but the result was still wrong. This error was easy to locate: I had forgotten to transform the coordinates into texture space (0 to 1). After the shadow MVP transform the x and y values lie in the [-1, 1] interval, so a bias matrix is used to convert [-1, 1] into [0, 1].
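The standard bias matrix for this step, laid out column-major as glUniformMatrix4fv expects (the variable names are illustrative):

    // Maps [-1, 1] to [0, 1]:  v' = v * 0.5 + 0.5 per component.
    static const GLfloat kBias[16] = {
        0.5f, 0.0f, 0.0f, 0.0f,
        0.0f, 0.5f, 0.0f, 0.0f,
        0.0f, 0.0f, 0.5f, 0.0f,
        0.5f, 0.5f, 0.5f, 1.0f,
    };
    // shadowMVP = kBias * lightProjection * lightView * model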
6. Once the conversion is done, the shadow2DProj function can be used to complete the depth comparison. You can also do the comparison yourself, but then you must divide the x and y values by the w component before fetching the depth value. That is in fact how shadow2DProj is implemented: it first divides the x and y components by w, then fetches the stored depth and compares it against the reference value, returning 1 if the depth test succeeds (that is, the depth value is less than the value in the depth texture) and 0 if it fails. With that result the shadow darkening can be applied.
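A fragment-shader sketch of this comparison, written against the shadow2DProjEXT entry point from GL_EXT_shadow_samplers since plain OpenGL ES 2.0 has no built-in shadow2DProj; the uniform and varying names and the 0.5 darkening factor are illustrative:

    static const char* kFragmentShader =
        "#extension GL_EXT_shadow_samplers : require\n"
        "precision mediump float;\n"
        "uniform sampler2DShadow uShadowMap;\n"
        "varying vec4 vShadowCoord; // bias * lightProj * lightView * model * pos\n"
        "void main() {\n"
        "    // shadow2DProjEXT divides by w, samples the depth texture and\n"
        "    // returns 1.0 if the comparison passes, 0.0 if it fails.\n"
        "    float lit = shadow2DProjEXT(uShadowMap, vShadowCoord);\n"
        "    vec3 color = vec3(1.0);  // stand-in for the shaded surface color\n"
        "    gl_FragColor = vec4(color * mix(0.5, 1.0, lit), 1.0);\n"
        "}\n";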
7. Once all of the above is correct, the shadow effect should appear, but z-fighting may show up, possibly quite severely. If we are not using shadow2DProj but comparing depth values ourselves, we can add a small tolerance to the comparison. The right size has to be found by experiment, because it can differ a lot between scenes and machines; in my simple tests 0.006 gave good results. But if we use shadow2DProj, the comparison function is not our own code, so we cannot add that tolerance. In that case we can use glPolygonOffset when drawing to avoid the z-fighting problem, and trying it out, the effect is really excellent.
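A sketch of both workarounds; the 0.006 tolerance is the value reported above, while the polygon-offset factor and units are illustrative and need per-scene tuning:

    // Option A: manual comparison in the shader, with a small tolerance.
    // (GLSL line, as it would appear in the fragment shader:)
    //   float lit = (currentDepth - 0.006 < storedDepth) ? 1.0 : 0.0;

    // Option B: when shadow2DProj owns the comparison, push the depth-pass
    // geometry slightly away from the light during the shadow render instead.
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(2.0f, 4.0f);   // factor and units: tune per scene
    // ... render the depth pass into the shadow FBO ...
    glDisable(GL_POLYGON_OFFSET_FILL);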
These are some of the problems encountered while implementing shadow mapping.
Although the principle of shadow mapping is very simple, getting a genuinely good-looking result still takes some work.
The next step is to implement SSAO.