The modern graphics pipeline renders an image by solving two problems:
1. How geometric primitives in the 3D world are projected onto the screen, that is, which pixels each primitive covers;
2. Given the available information (lighting, normal vectors, textures), how to choose a color for each of those pixels.
These two problems are why we have the vertex shader and the fragment shader.
This article is about a sub-problem of problem 2. It arises when we have the texture, we have the UVs, and we have the fragment's position (not an integer pixel position, but a floating-point position with sub-pixel precision): how do we choose a color from this information?
Every technique begins as a fix for some imperfection in what came before, and the mipmap is no exception. Anyone working in computer graphics will inevitably run into this problem, and even without ever having learned about mipmaps or trilinear filtering, you would likely invent something similar on your own. These two techniques are a natural, almost inevitable step in the evolution of rendering, not clever gadgets invented for technology's own sake.
First, from the position of the current fragment and the UV values of the three vertices of its triangle, we can interpolate the UV at the fragment. Mapping this UV into the texture's pixel coordinate space almost always yields a fractional, floating-point 2D coordinate. The most obvious choice is to take the color of the texel closest to that coordinate as the fragment's color. This gives us point sampling.
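The point sampling just described can be sketched in a few lines. This is a minimal illustration in Python, not production sampler code; the function name and the convention that texel centers sit at integer coordinates after the `-0.5` shift are my assumptions.

```python
def sample_nearest(texture, u, v):
    """Point sampling: map UV in [0,1] to texel space, pick the closest texel."""
    h = len(texture)
    w = len(texture[0])
    # UV -> floating-point texel coordinates (texel centers at integer coords)
    x = u * w - 0.5
    y = v * h - 0.5
    # Round to the nearest texel and clamp to the texture bounds
    ix = min(max(int(round(x)), 0), w - 1)
    iy = min(max(int(round(y)), 0), h - 1)
    return texture[iy][ix]

tex = [[0, 64],
       [128, 255]]  # a 2x2 grayscale texture
print(sample_nearest(tex, 0.1, 0.1))  # 0  (top-left texel)
print(sample_nearest(tex, 0.9, 0.9))  # 255 (bottom-right texel)
```

Every UV in a whole neighborhood of a texel returns that texel's exact color, which is precisely what produces the mosaic effect described next.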
However, the result is imperfect: every fragment whose computed coordinate falls within ±0.5 of the same texel gets the same color, so when the triangle is magnified (the camera moves closer, or the camera FOV shrinks), many screen pixels map to the same texel. The result is a striking mosaic effect, and it was a real problem for the original PlayStation and many older game consoles (Nintendo hardware up through the NDS), which could only do point sampling.
That was obviously too crude, so someone clever thought of this: at the floating-point texel coordinate, take the four surrounding texels and blend them, weighted by distance (the legendary image convolution, in its humblest form). The image becomes less blocky and the transitions smoother. This is linear (bilinear) sampling.
At this point there seems to be nothing left to optimize; after all, the texture resolution is fixed. However, bilinear filtering shows its own flaws when the triangles get smaller. When a triangle is small on screen (that is, the texture is minified), the computed colors flicker: the texture holds a lot of information, and when all of it is compressed into a small screen area, a tiny change in screen position causes a drastic change in sampled color. Think of an oscilloscope: widen the time range of the axis, and the trace's changes per unit of absolute screen distance become more violent. The result is the effect below, where the distant part of the image carries no useful information at all, drowned out by harsh transitions:
Here, low-resolution images actually have an advantage. With fewer pixels there is less information, so transitions within the same limited screen space are not as violent (in fact, if you enlarge a low-resolution image and compare it against a similar high-resolution one, the enlarged low-resolution image changes color more softly in every direction).
So someone clever came up with this: store several versions of the same image at different resolutions, and when the triangle on screen is much smaller than the image, use a lower-resolution version. Which resolution to use depends on the triangle's size on screen and the UV range it covers. Since computers handle power-of-two ratios fastest, the versions are generated at successive halvings, which is why texture dimensions are generally required to be powers of two.
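Generating those successive halvings can be sketched as a simple 2x2 box filter applied repeatedly; this is an illustrative assumption (real tools may use better downsampling filters), and the function name is mine.

```python
def build_mip_chain(texture):
    """Build successive half-resolution levels by averaging 2x2 texel blocks.
    Assumes the texture's width and height are powers of two."""
    levels = [texture]
    while len(texture) > 1 and len(texture[0]) > 1:
        h, w = len(texture) // 2, len(texture[0]) // 2
        texture = [[(texture[2*y][2*x]     + texture[2*y][2*x + 1] +
                     texture[2*y + 1][2*x] + texture[2*y + 1][2*x + 1]) / 4
                    for x in range(w)]
                   for y in range(h)]
        levels.append(texture)
    return levels

chain = build_mip_chain([[0,   0,   255, 255],
                         [0,   0,   255, 255],
                         [255, 255, 0,   0],
                         [255, 255, 0,   0]])
print(len(chain))  # 3 levels: 4x4, 2x2, 1x1
print(chain[1])    # [[0.0, 255.0], [255.0, 0.0]]
print(chain[2])    # [[127.5]]
```

Note how the 1x1 level is simply the average of the whole image: each level pre-computes the smoothing that point or bilinear sampling alone could not afford to do per fragment.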
This is the origin of the mipmap. It achieves the effect below in the distance, perfectly curing the violent color flickering:
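"Which resolution to use" is typically derived from how many texels one screen pixel spans, which GPUs obtain from screen-space derivatives of the interpolated UVs. The sketch below is a simplified illustration of that idea; the function name, parameter names, and the exact clamping are my assumptions, not any particular API's formula.

```python
import math

def select_lod(duv_dx, duv_dy, tex_w, tex_h, num_levels):
    """Pick a (fractional) mip level from the texel footprint of one screen pixel.
    duv_dx/duv_dy: change in (u, v) per screen pixel along x and y."""
    # Texels covered per screen pixel along each screen axis
    span_x = math.hypot(duv_dx[0] * tex_w, duv_dx[1] * tex_h)
    span_y = math.hypot(duv_dy[0] * tex_w, duv_dy[1] * tex_h)
    rho = max(span_x, span_y)
    # Each mip level halves the resolution, so the level is log2 of the span
    lod = math.log2(max(rho, 1e-8))
    return min(max(lod, 0.0), num_levels - 1)

# One screen pixel spans 4 texels of a 256x256 texture -> level 2
print(select_lod((4/256, 0), (0, 4/256), 256, 256, 9))  # 2.0
```

When the triangle is magnified (less than one texel per pixel), the level clamps to 0 and we are back to plain bilinear sampling of the full-resolution image.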
With mipmaps and bilinear filtering together, a static screenshot leaves little to criticize.
However, when the camera moves and the same triangle gradually grows or shrinks on screen, there is a very noticeable pop at the point where the mipmap switches between resolution levels:
Where marked in red above, as the camera moves forward the image suddenly snaps into sharpness; the transition is obvious and jarring.
So we arrive at the last piece: trilinear filtering.
The value computed to choose a mipmap level is a floating-point number (it will almost never happen to be an exact integer, and even then only barely). So we take the two adjacent mip levels it falls between, perform a bilinear interpolation in each, and then interpolate between those two results by the fractional part of the level to get the in-between color.
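Those two bilinear samples plus one final blend can be sketched as follows; as before, this is an illustrative Python sketch with names of my own choosing, and the bilinear helper repeats the earlier convention of edge-clamped texels.

```python
import math

def sample_bilinear(texture, u, v):
    """Blend the four texels surrounding the floating-point texel coordinate."""
    h, w = len(texture), len(texture[0])
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0

    def texel(ix, iy):
        return texture[min(max(iy, 0), h - 1)][min(max(ix, 0), w - 1)]

    top    = texel(x0, y0)     * (1 - fx) + texel(x0 + 1, y0)     * fx
    bottom = texel(x0, y0 + 1) * (1 - fx) + texel(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bottom * fy

def sample_trilinear(mip_chain, u, v, lod):
    """Bilinearly sample the two mip levels around `lod`, blend by its fraction."""
    lo = min(max(int(math.floor(lod)), 0), len(mip_chain) - 1)
    hi = min(lo + 1, len(mip_chain) - 1)
    f = lod - math.floor(lod)  # fractional part drives the third interpolation
    return (sample_bilinear(mip_chain[lo], u, v) * (1 - f) +
            sample_bilinear(mip_chain[hi], u, v) * f)

# Two 1x1 levels make the blend easy to verify: lod 0.5 lands halfway between
chain = [[[100]], [[200]]]
print(sample_trilinear(chain, 0.5, 0.5, 0.5))  # 150.0
```

As the camera approaches and the fractional level slides continuously from one integer toward the next, the output color slides with it, which is exactly what removes the pop.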
This solves the popping problem in motion: as the camera approaches, the image sharpens gradually.
To sum up:
Point sampling: mosaic.
Bilinear sampling: flickering noise in the distance.
Mipmaps plus trilinear filtering: on mipmapped textures, bilinear and trilinear look identical in a static image; the difference shows in motion, where trilinear transitions gradually instead of popping.