Z Buffer and W Buffer

Source: http://www.csie.ntu.edu.tw/~R89004/hive/hsr/page_2.html

 

Almost all current 3D display chips have a Z buffer or a W buffer. However, some people still have basic questions about them, such as what the Z buffer is used for, what the difference between the Z buffer and the W buffer is, or how precision issues arise. The purpose of this article is to introduce the Z buffer and the W buffer.

What are the Z buffer and W buffer used for? Their main purpose is to remove hidden surfaces, that is, hidden surface elimination (or, equivalently, to determine the visible surfaces, visible surface determination, which means the same thing). In a 3D scene with two or more triangles, one triangle may cover part of another. This is a very natural phenomenon: near things always cover far things (assuming the triangles are not transparent). Therefore, when drawing a 3D scene, this problem must be handled to get a correct result.

However, this problem is quite difficult, because it involves the relationships between triangles rather than a single triangle by itself. Therefore, to remove hidden surfaces, all the triangles in the scene have to be considered together, and this alone makes the problem far from simple. In addition, a triangle is often not covered completely; only part of it is covered. This makes the problem even harder.

The simplest method to remove hidden surfaces is the painter's algorithm. The principle of this method is very simple: draw the farthest things first, then the nearer things. In this way, the nearer things naturally cover the farther ones. The method gets its name because this is how a painter works, painting the background before the foreground. The following figure is an example:

In the figure above, the farthest shape is drawn first. Then the colored triangle is drawn, followed by the gray square in front. Drawing from far to near in this way gives the effect of removing hidden surfaces. Therefore, as long as the triangles in a 3D scene are sorted by their distance from the observer and drawn from far to near, it would seem that the result should be correct.

However, reality is not so ideal. In a 3D scene, a triangle may be far away in some places and near in others, because a triangle has three vertices, and the distances from these three vertices to the observer are usually different. So which point do you sort by? The nearest point? Or the center of the triangle? In fact, no matter which point you sort by, there can be problems. The following figure shows a scenario that the painter's algorithm cannot solve:

In the figure above, the three triangles cover one another in a cycle. Therefore, no matter what order they are drawn in, a correct result cannot be obtained. In addition, this method cannot handle triangles that intersect each other.

Of course, if such awkward situations do not occur in a particular scene, the painter's algorithm is generally usable. However, it still has a big problem: its efficiency is poor. First, the painter's algorithm needs to sort all the triangles in the scene, and the best sorting algorithms still require O(n log n) time. That is to say, roughly speaking, if the number of triangles grows from one thousand to ten thousand, the time required for sorting grows by a factor of about 13.3. In addition, because all the triangles in the scene have to be sorted together, it is hard to accelerate this step with special-purpose hardware. Finally, this method also spends a lot of time drawing parts that will eventually be covered, because every pixel of every triangle has to be drawn. This also hurts efficiency.
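
To make the discussion concrete, here is a minimal sketch of the painter's algorithm in C++, under the simplifying assumptions that the observer sits at the origin and that triangles are sorted by the distance of their centroids. The types, the sorting key, and the drawTriangle stub are all invented for illustration; they are not part of the original article.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical minimal types, only for illustration.
struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v[3]; };

// Representative distance used for sorting: the centroid's squared distance
// from the observer at the origin (any single representative point has the
// weaknesses described above).
static float centroidDepth(const Triangle& t) {
    float cx = (t.v[0].x + t.v[1].x + t.v[2].x) / 3.0f;
    float cy = (t.v[0].y + t.v[1].y + t.v[2].y) / 3.0f;
    float cz = (t.v[0].z + t.v[1].z + t.v[2].z) / 3.0f;
    return cx * cx + cy * cy + cz * cz;   // squared distance is enough for ordering
}

// Assumed rasterizer stub; a real renderer would fill the triangle's pixels here.
void drawTriangle(const Triangle& /*t*/) {}

// Painter's algorithm: sort back to front, then draw in that order so that
// nearer triangles overwrite farther ones.  The sort alone costs O(n log n).
void painterDraw(std::vector<Triangle> tris) {
    std::sort(tris.begin(), tris.end(),
              [](const Triangle& a, const Triangle& b) {
                  return centroidDepth(a) > centroidDepth(b);  // farthest first
              });
    for (const Triangle& t : tris)
        drawTriangle(t);
}
```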

If the scene is static and only the observer moves, there are ways to speed up the sorting. A common method is binary space partitioning (BSP). This method requires building a tree structure for the scene in advance. Once this structure is built, the correct drawing order can be found quickly no matter where the observer is or which direction the observer faces. In addition, BSP splits triangles when necessary, so it can handle triangles that cover one another in a cycle.

However, building a BSP tree takes a lot of time, so it can hardly be done in real time. Therefore, it can only be used for the static parts of a scene, and the dynamic parts still have to be sorted separately. In addition, BSP usually needs to split triangles, which increases the number of triangles. Finally, BSP still cannot avoid spending time drawing pixels that will end up hidden.

Another way to remove hidden surfaces is to work per pixel rather than per triangle. The simplest such method was proposed by Catmull in 1974: the Z buffer (also called the depth buffer). This method is very simple, and it is easy to implement with specially designed hardware. Therefore, once memory capacity was no longer a problem, it became very popular.

The principle of the Z buffer is very simple. In addition to the frame buffer that stores the drawn result, a Z buffer is added. The Z buffer records, for each pixel in the frame buffer, its distance from the observer, that is, its Z value. Before drawing the scene, all values in the Z buffer are set to infinity (or the largest representable value). Then, when a triangle is drawn, the Z value of each of its pixels is computed and compared with the Z value stored in the Z buffer. If the Z value in the Z buffer is larger, the pixel about to be drawn is nearer than the one currently in the frame buffer, so the pixel is drawn and the Z value in the Z buffer is updated at the same time. If the Z value in the Z buffer is smaller, the pixel about to be drawn is farther than the one already in the frame buffer, so it does not need to be drawn, and the Z value is not updated. In this way, the triangles can be drawn in any order and the result will still be correct. The following figure is an example:
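
A minimal sketch of this per-pixel test, assuming a simple software frame buffer. All names here (Framebuffer, plotPixel) are invented for illustration and do not correspond to any particular hardware or API.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// The frame buffer stores colors; the Z buffer stores one depth value per pixel.
struct Framebuffer {
    std::size_t width, height;
    std::vector<unsigned> color;   // final image
    std::vector<float>    depth;   // the Z buffer

    Framebuffer(std::size_t w, std::size_t h)
        : width(w), height(h), color(w * h, 0),
          depth(w * h, std::numeric_limits<float>::infinity()) {}  // "infinitely far"

    // Called once per pixel of every triangle, in any order.
    void plotPixel(std::size_t x, std::size_t y, float z, unsigned rgba) {
        std::size_t i = y * width + x;
        if (z < depth[i]) {        // nearer than what is already there?
            depth[i] = z;          // update the Z buffer ...
            color[i] = rgba;       // ... and draw the pixel
        }                          // otherwise the pixel is hidden: do nothing
    }
};
```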

In the figure above, the shapes are drawn in a different order from before, but because the Z buffer is used, each newly drawn shape only covers the parts that are farther away, never the nearer parts. This shows the effect of the Z buffer.

In practice, the number of bits the Z buffer can store is limited, so the Z value is usually normalized to the range 0 to 1. Therefore, when drawing a 3D scene, the Z values that can appear must be limited to a certain range. Generally, two planes parallel to the projection plane are used, and everything outside them is clipped away. These two planes are usually called the Z near plane and the Z far plane: the nearest plane and the farthest plane that will be drawn. The Z value at the Z near plane is 0, and the Z value at the Z far plane is 1.

In terms of efficiency, the Z buffer is not necessarily faster than the painter's algorithm. However, it is much simpler. In addition, its cost does not depend much on the number of triangles, but on the number of pixels the triangles cover. Therefore, it is easy to design special-purpose 3D hardware to perform this operation instead of using the CPU. Moreover, the extra memory required by the Z buffer is not a significant cost today. So now virtually all 3D display chips use a Z buffer.

However, the Z buffer is not without problems. A big one is precision. If two triangles are very close together, and one of them is entirely in front of the other, you should only see one triangle. However, if the precision of the Z buffer is insufficient, the Z values of corresponding pixels of the two triangles may be very close, and the computed Z values also carry some error. Therefore, it is quite possible that some pixels that should be covered are not. This situation is called Z fighting. In the figure below, the shadow of the ball on the ground is an example:

To avoid this problem, the scene must avoid triangles that are very close to each other and nearly parallel. Ordinary scenes usually do not have this situation. However, the precision of the Z buffer is not uniform, and this causes other problems. The next section explains this in more detail.

 

The previous section gave a rough description of the principle of the Z buffer. It may seem that the Z buffer is an ideal technique. However, in reality, apart from the memory it consumes, the Z buffer has a big problem: precision.

As mentioned in the previous section, the Z fighting produced by two very close planes is rare and can be avoided fairly easily. Of course, it can still be seen in some cases. However, the most serious precision issue of the Z buffer concerns the parts of the scene that are far away from the observer. If the precision of the Z buffer is insufficient and the scene is deep, strange artifacts appear on distant objects. The following figure is an example:

(Figure: the left image shows Z aliasing; the right image does not.)

Of course, the example above is a rather exaggerated case. In practice, Z aliasing is usually not this dramatic. However, I believe many of you have seen similar situations in games with large scenes.

Why does this happen? It starts from how the Z buffer stores values. As mentioned earlier, the Z value is limited to the range 0 to 1 and is represented by a fixed-point number. For example, a 16-bit Z buffer uses the integers 0 to 65535 (the range of a 16-bit word) to represent Z values between 0 and 1.
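
For illustration, here is one plausible way such a fixed-point Z buffer might quantize the normalized depth; the exact rounding rule is hardware-specific and is only an assumption here.

```cpp
#include <cstdint>

// Map a normalized Z value in [0, 1] onto the 65536 integer steps of a 16-bit word.
uint16_t quantizeZ16(float z01) {
    if (z01 < 0.0f) z01 = 0.0f;        // clamp, since only [0, 1] is representable
    if (z01 > 1.0f) z01 = 1.0f;
    return static_cast<uint16_t>(z01 * 65535.0f + 0.5f);   // round to nearest step
}
```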

If the Z values stored in the Z buffer were distributed uniformly in eye space, that is, if the distance between adjacent representable values were equal everywhere, the precision would be quite good: assuming the observer can see things a kilometer away, each step would be about 1.5 centimeters. With a more precise number (such as a 24-bit one), the precision would be even higher. However, the Z buffer is not uniform in eye space; it is uniform in projection space.
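
As a quick check of that figure, assuming a 16-bit buffer whose steps were spread uniformly over a one-kilometer visible range:

```latex
\frac{1000\ \mathrm{m}}{2^{16}} = \frac{1000\ \mathrm{m}}{65536} \approx 0.0153\ \mathrm{m} \approx 1.5\ \mathrm{cm}
```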

If all of this sounds like gibberish, it is time to explain these terms. First, let's look at the figure below:

The figure above shows a perspective projection seen from the side; the eye is looking at a plane in the scene. The plane close to the eye (drawn in a bright color) represents the projection plane, that is, the screen in 3D rendering. The colored dots on the plane in the scene are the points that project onto the pixels of the screen, and on the screen these pixels are equally spaced. However, notice that the Z values of these "equally spaced" pixels (the gray dots on the Z axis) are not equally spaced. In fact, the farther from the eye, the larger the spacing on the Z axis.

This is a basic property of perspective projection: the farther away something is, the smaller it appears, and this shrinking is faster than linear. Therefore, although the triangle is a plane, its eye-space Z value does not change linearly from pixel to pixel, so simple linear interpolation cannot be used to compute it inside the triangle. Computing the exact Z value of each pixel requires a division, and division is a relatively complex and time-consuming operation.
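
To make the relationship concrete, here is the commonly used mapping from eye-space depth to the normalized projection-space Z value, under the convention used in this article (Z is 0 at the near plane and 1 at the far plane); the exact form is an assumption, but any mapping of this family depends on 1/z_eye, which is exactly why equal steps in projection-space Z correspond to ever larger steps in eye space:

```latex
z_{\mathrm{proj}} \;=\; \frac{z_{\mathrm{far}}}{z_{\mathrm{far}} - z_{\mathrm{near}}}
\left(1 - \frac{z_{\mathrm{near}}}{z_{\mathrm{eye}}}\right)
```

At z_eye = z_near this gives 0, and at z_eye = z_far it gives 1.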

Early display chips could not afford to perform a division per pixel for the Z buffer. Therefore, one method is to store the projection-space Z value in the Z buffer instead of the eye-space Z value. The projection-space Z value varies linearly across the screen, so the Z value of each pixel inside a triangle can be computed with simple linear interpolation. This is how the Z buffer is designed in essentially all current display chips.
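
A sketch of why this is cheap: across one scanline of a planar triangle, the projection-space Z changes by a constant amount per pixel, so the inner loop needs only an addition and a comparison. The function below is illustrative only, with the depth buffer passed in as a plain array.

```cpp
#include <cstddef>

void drawSpanDepthOnly(float* zbufferRow, std::size_t xStart, std::size_t xEnd,
                       float zProjStart, float zProjEnd) {
    if (xEnd <= xStart) return;
    float dz = (zProjEnd - zProjStart) / float(xEnd - xStart);  // constant per-pixel step
    float z  = zProjStart;
    for (std::size_t x = xStart; x < xEnd; ++x) {
        if (z < zbufferRow[x])      // nearer? (smaller projection-space Z is nearer)
            zbufferRow[x] = z;      // pass the depth test; the color write is omitted
        z += dz;                    // linear interpolation: one add, no division
    }
}
```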

However, the projection-space Z value behaves as shown in the figure above. It has an important property: equal steps in it correspond to larger and larger distances in eye space as things get farther away. Therefore, if the precision of the Z buffer is measured in eye space (which is what actually matters), it is unevenly distributed: the closer to the observer, the higher the precision, and the farther away, the lower the precision. How uneven it is depends on the positions of the Z near plane and the Z far plane: the closer the Z near plane is to the observer and the farther away the Z far plane is, the more uneven the distribution becomes, that is, the worse the precision in the distance becomes.

In the two images above, the position of the Z far plane is the same, but the Z near plane in the left image is nearly a thousand times closer to the observer than in the right image. Therefore, the left image shows Z aliasing while the right one does not.

Therefore, to avoid Z aliasing, the Z near plane should be pulled as far away as possible and the Z far plane as close as possible. However, in many cases such a design is simply not allowed. For example, in a scene, a player may be looking at things on a table 50 centimeters away while also seeing a large base a kilometer away out of the window. In that case, the Z near plane cannot be farther than 50 centimeters, and the Z far plane cannot be nearer than a kilometer. With a 16-bit Z buffer, the precision at the far end (a kilometer away) drops to about 30 meters; that is, if two pixels a kilometer away differ in depth by less than 30 meters, the Z buffer cannot tell which is in front! Its precision at the Z near plane (the 50-centimeter range), on the other hand, is about 0.0000076 meters. This shows how uneven, and how unreasonable, the precision distribution is. With a 24-bit Z buffer the situation improves somewhat, and the precision at one kilometer becomes roughly 12 centimeters. This is also why a 24-bit Z buffer rarely shows Z aliasing.
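
Those numbers can be checked by inverting the mapping sketched earlier, with n = 0.5 m and f = 1000 m (the derivation below assumes that mapping); the size of the eye-space step corresponding to one quantization step of the Z buffer is approximately:

```latex
z_{\mathrm{eye}} = \frac{n f}{f - z_{\mathrm{proj}}(f-n)}, \qquad
\frac{dz_{\mathrm{eye}}}{dz_{\mathrm{proj}}} = \frac{n f (f-n)}{\bigl(f - z_{\mathrm{proj}}(f-n)\bigr)^2}

\text{At the far plane } (z_{\mathrm{proj}}=1):\quad
\Delta z_{\mathrm{eye}} \approx \frac{f^2}{n}\,2^{-16}
  = \frac{1000^2}{0.5 \times 65536}\ \mathrm{m} \approx 30\ \mathrm{m}

\text{At the near plane } (z_{\mathrm{proj}}=0):\quad
\Delta z_{\mathrm{eye}} \approx n\,2^{-16} = \frac{0.5}{65536}\ \mathrm{m} \approx 7.6\times10^{-6}\ \mathrm{m}

\text{With 24 bits, at the far plane:}\quad
\frac{1000^2}{0.5 \times 2^{24}}\ \mathrm{m} \approx 0.12\ \mathrm{m} = 12\ \mathrm{cm}
```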

However, even a 24-bit Z buffer is not ideal. In the example above, if the Z near plane is moved to 10 centimeters, the precision at one kilometer drops from about 12 centimeters to about 60 centimeters. Some people may think: at a distance of one kilometer, can anyone really tell 60 centimeters from 12 centimeters? The problem is that when two large planes a kilometer away are less than 60 centimeters apart, the Z buffer cannot tell which is in front; one plane may appear in front in this frame and the other in the next frame. If the colors of the two planes are quite different, this produces flicker that anyone will easily notice.

Some display chips use special methods to mitigate this problem. A simple idea is to use floating-point numbers in the Z buffer instead of fixed-point numbers. With a suitable design, floating-point numbers provide greater precision near a particular value (usually near 0). An ordinary Z buffer needs more precision near the Z far plane. Therefore, the Z buffer can be set up so that the Z far plane maps to 0 and the Z near plane maps to 1; in this way, higher precision is obtained where it is needed. However, floating-point numbers are harder to handle in hardware. In particular, Z buffer operations constantly require additions and comparisons, which are more costly for floating-point numbers than for fixed-point ones.
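
A minimal sketch of that reversal, assuming the 0-to-1 projection-space Z used throughout this article: the stored value becomes 1 − Z, and the depth test keeps the larger value instead of the smaller one. The names and the exact setup are invented for illustration.

```cpp
#include <cstddef>
#include <vector>

// Reversed floating-point Z: the far plane maps to 0.0 (where floats are densest)
// and the near plane maps to 1.0.
struct ReversedZBuffer {
    std::vector<float> depth;
    explicit ReversedZBuffer(std::size_t pixels) : depth(pixels, 0.0f) {}  // 0 = as far as possible

    // zProj in [0, 1], 0 at the near plane, 1 at the far plane (as in the text).
    bool testAndWrite(std::size_t i, float zProj) {
        float zRev = 1.0f - zProj;          // far -> 0, near -> 1
        if (zRev > depth[i]) {              // nearer than what is stored?
            depth[i] = zRev;
            return true;                    // caller should write the color
        }
        return false;                       // hidden
    }
};
```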

Another method is a Z buffer whose steps are not uniform in projection space. For example, the Z range can be divided into many small intervals, each of which is an ordinary uniform Z buffer, but more intervals are allocated to the distant region to improve the precision there. This is another way of attacking the precision problem.

In fact, the simplest way to solve the Z buffer precision problem is to interpolate something that is uniform in eye space. However, as mentioned earlier, a value that is linear in eye space is not linear across the screen, so this would seem to require a division per pixel. There is, however, a way to avoid a full divider: only a reciprocal unit is needed, which is simpler than a general divider. The method is to interpolate, at higher precision, a value that is linear across the screen (essentially 1/W), and then take the reciprocal of each interpolated result to obtain the W value, that is, the eye-space depth. The stored W value itself can have lower precision, because its distribution in eye space is uniform. Finally, the W value is compared with the value in the "W buffer" to decide which pixel is in front. As some readers will have guessed, this method is called W buffering.
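
Here is a sketch of what W buffering might look like across one scanline, under the assumptions above: the value interpolated linearly is 1/W, and the only per-pixel "division" is a reciprocal. The names are illustrative, not any particular chip's design.

```cpp
#include <cstddef>

void drawSpanWBuffer(float* wbufferRow, std::size_t xStart, std::size_t xEnd,
                     float invWStart, float invWEnd) {
    if (xEnd <= xStart) return;
    float dInvW = (invWEnd - invWStart) / float(xEnd - xStart);  // 1/W is linear on screen
    float invW  = invWStart;
    for (std::size_t x = xStart; x < xEnd; ++x) {
        float w = 1.0f / invW;          // the only per-pixel "division": a reciprocal
        if (w < wbufferRow[x])          // nearer in eye space?
            wbufferRow[x] = w;          // pass the test; the color write is omitted
        invW += dInvW;
    }
}
```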

Of course, there are other ways to implement a W buffer. However, no matter how it is implemented, its most important property is that its precision is distributed uniformly in eye space. Therefore, even a 16-bit W buffer handles distant objects well. In the earlier example, a 24-bit Z buffer only reaches a precision of about 12 centimeters at one kilometer, whereas a 16-bit W buffer has a precision of about 1.5 centimeters everywhere. So in this example, a 16-bit W buffer handles the distant scene better than a 24-bit Z buffer.

Furthermore, the W buffer has another advantage: the position of its W near plane (the counterpart of the Z near plane in the Z buffer) hardly matters. That is to say, a W buffer can correctly handle both the table right in front of the player and a huge base several kilometers away at the same time. With a Z buffer, if you want to distinguish things in a huge base several kilometers away, you may have to sacrifice the table in front of you.

However, because the precision of the W buffer is evenly distributed, its precision near the observer is not as good as that of the Z buffer. While the Z buffer can reach a precision of 0.0000076 meters near the Z near plane, the W buffer in the same example stays at about 1.5 centimeters. A precision of 1.5 centimeters is more than enough for distant objects, but it may not be enough for things very close to the observer. For example, there may be a notebook less than 1.5 centimeters thick lying on the table; at that point, a precision of 1.5 centimeters is clearly insufficient.

Does this mean the W buffer cannot solve the problem either? Not really. With a 24-bit W buffer, even if we want to see things 10 kilometers away (which should be deep enough), the precision is about 0.6 millimeters everywhere. In general, such precision is quite satisfactory. In addition, the W buffer is easy to use and does not require any major modification to a program.
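
A quick check of that figure, assuming a 24-bit W buffer spread uniformly over ten kilometers:

```latex
\frac{10000\ \mathrm{m}}{2^{24}} \approx 5.96\times10^{-4}\ \mathrm{m} \approx 0.6\ \mathrm{mm}
```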

Currently, the biggest problem with the W buffer is the lack of support. Some display chips do not support a W buffer at all, while others only support a 16-bit one. However, more and more display chips are starting to support the W buffer, so it should see much wider use in the future!
