Global Illumination Technology Evolution History, Part 1: Ray Tracing




My plan was to concentrate on writing a book this past year and not spend time and energy on the blog, because writing a high-quality blog post actually takes a lot of time to conceive: although a single post has less text, you need to pack more contextual information into a limited space and organize the content more concisely. For me, that takes as much effort as writing book content (unless, of course, the author is less rigorous about his own standards).



Game Engine Global Illumination Technology takes a new approach to publishing: starting from the first chapter, I write while actively interacting with the community and promoting the book, in a way similar to how a game is released, with continuous beta releases and player interaction during development and feedback collected for continuous improvement. The advantage is that readers get a trial version of the material earlier, can understand and supervise the quality of the book, and can make an informed judgment about whether it is worth buying; a poor-quality book can be weeded out at this stage, or even lose the point of being published, which protects readers' interests. For me as the author, I can continuously absorb community feedback to improve the content, so the book's quality keeps improving and good content can spread through the community. This is a win for both the reader and the author.



To this end, I have made the first three chapters of the book, 157 pages of text, freely available as a trial version for anyone interested; you can download it from here if you have not seen it. However, judging from the current community discussion, most readers are not quite sure how this book differs from other similar books. Although I have briefly outlined its characteristics in a question-and-answer format, we all know that such words read like the brochure a company hands out at a tech conference to introduce its products: until you have actually used the product, they are usually worth very little.



So I need a relatively long article to explain the content and characteristics of the book, written in a popular style that reads as easily as other social media posts, without requiring much thinking, yet long enough to cover what the book offers. I happened to see a post in the Baidu Tieba "Crysis" forum asking for an explanation of the characteristics of various global illumination techniques and links to further reading; the questions raised there are exactly what this book discusses, so answering them can basically explain the content and characteristics of the book.



One of the headaches in learning global illumination (GI) technology is that there are too many methods, each of which may involve completely different mathematics; without a certain understanding of a method itself, applying it will not go smoothly. In particular, when it comes to optimizing or modifying a method to meet specific needs, you must understand it comprehensively: its origin, history, mathematical model, its advantages and disadvantages compared with other approaches, its trade-offs between computational performance and image quality, and so on. Moreover, the methods are rarely as independent as software modules: each GI algorithm typically involves CPU/GPU data structure representations, memory layout and access, cooperation with other stages of the rendering pipeline (such as deferred shading and anti-aliasing), the use of graphics APIs, processor-level optimization of the algorithm, and so on. All of this makes GI far from easy to learn.



So in early 2015 I started writing a book organized around the various GI techniques. It is not like books such as Real-Time Rendering, which are basically organized around individual theoretical topics; it is method-centric (even so, the book still contains nearly 300 pages introducing the basic theory), focusing on the ideas behind each GI method and the connections between methods, so it is more practical than purely theoretical books. This style of writing is similar to Advanced Global Illumination (AGI), but AGI mainly covers path tracing and radiosity, with only a brief introduction to other methods such as photon mapping. This book not only introduces offline global illumination techniques such as path tracing and radiosity, but also covers the more popular real-time global illumination techniques based on distance fields, voxels, and so on, and it discusses them in combination with Unreal Engine and other game engines, so readers can better understand the features of these engines.



In this article, let's look at the evolutionary history of global illumination technology. Following the principle of keeping things accessible, this article contains no mathematical formulas; everything is described in text and figures. This way of writing certainly omits many details and focuses more on describing the ideas; for more specific information, please refer to the book Game Engine Global Illumination Technology.



Let's start with the ray-tracing algorithm. The idea originated as early as 1968, when Arthur Appel introduced the concept of ray casting. As shown in the figure, ray casting is simply a single ray emitted from a point in one direction; it stops when it intersects an object in the scene. Appel's algorithm uses two kinds of rays, view rays and shadow rays, which correspond to the direct illumination part of the rendering equation.
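
To make the idea concrete, here is a minimal ray-casting sketch for a single-sphere scene: one view ray finds the visible point, one shadow ray tests whether the light is blocked, and there is no recursion. All names and the scene setup are illustrative, not taken from Appel's paper.

```cpp
// Minimal ray-casting sketch (illustrative): one view ray per pixel, stop at the
// first hit, then cast a single shadow ray toward the light for direct lighting.
#include <algorithm>
#include <cmath>
#include <optional>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    Vec3 normalized() const { double l = std::sqrt(dot(*this)); return {x / l, y / l, z / l}; }
};

struct Sphere { Vec3 center; double radius; };

// Ray/sphere intersection: returns the nearest positive hit distance, if any.
std::optional<double> intersect(const Vec3& origin, const Vec3& dir, const Sphere& s) {
    Vec3 oc = origin - s.center;
    double b = oc.dot(dir);
    double c = oc.dot(oc) - s.radius * s.radius;
    double disc = b * b - c;
    if (disc < 0.0) return std::nullopt;
    double t = -b - std::sqrt(disc);
    if (t < 1e-6) return std::nullopt;
    return t;
}

// Direct illumination only: the view ray finds the visible point, the shadow ray
// tests visibility of the light; no bounces, no indirect light.
double castRay(const Vec3& eye, const Vec3& dir, const Sphere& scene, const Vec3& lightPos) {
    auto hit = intersect(eye, dir, scene);
    if (!hit) return 0.0;                               // background
    Vec3 p = eye + dir * (*hit);
    Vec3 n = (p - scene.center).normalized();
    Vec3 toLight = (lightPos - p).normalized();
    if (intersect(p + n * 1e-4, toLight, scene)) return 0.0;  // point is in shadow
    return std::max(0.0, n.dot(toLight));               // simple Lambert term
}
```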






In 1979, Turner Whitted built on ray casting by adding the interaction of light with object surfaces: at each hit the ray continues to propagate along the reflection and refraction directions, with the recursion stopping when the ray leaves the scene or a depth limit is reached. As shown in the figure, the algorithm forms a recursive chain of rays, so what is traced is no longer a single ray but a light-transport path; this algorithm is called recursive ray tracing (or Whitted-style ray tracing). Because light is reflected or refracted across multiple surfaces, indirect illumination is taken into account: as the light passes through each surface it is "filtered" by the surface's reflectance or refraction, carrying the object's color onto neighboring surfaces, which is what we call color bleeding.
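
The sketch below shows only the recursive structure of a Whitted-style tracer. The intersection, direct lighting, and reflect/refract helpers are assumed to exist and are just declarations here; the names are mine, not Whitted's.

```cpp
// Sketch of Whitted-style recursive ray tracing (illustrative only).
struct Vec3 { double x, y, z; };
struct Hit  { Vec3 position, normal, surfaceColor; double reflectivity, transparency; };

// Assumed helpers (declared, not defined): scene intersection, direct lighting via
// shadow rays, and the usual reflection/refraction direction computations.
bool intersectScene(const Vec3& origin, const Vec3& dir, Hit& hit);
Vec3 directLighting(const Hit& hit);
Vec3 reflectDir(const Vec3& dir, const Vec3& normal);
Vec3 refractDir(const Vec3& dir, const Vec3& normal);
Vec3 add(const Vec3& a, const Vec3& b);
Vec3 scale(const Vec3& v, double s);
Vec3 modulate(const Vec3& a, const Vec3& b);   // componentwise product ("filtering" by color)

// One view ray spawns a reflection ray and a refraction ray at each hit,
// recursing until a depth limit; the specular bounces carry indirect light.
Vec3 traceWhitted(const Vec3& origin, const Vec3& dir, int depth) {
    Hit hit;
    if (depth <= 0 || !intersectScene(origin, dir, hit))
        return {0, 0, 0};                                   // background or recursion cutoff

    Vec3 color = directLighting(hit);                       // direct term (shadow rays)

    if (hit.reflectivity > 0.0) {                           // perfect mirror bounce
        Vec3 r = traceWhitted(hit.position, reflectDir(dir, hit.normal), depth - 1);
        color = add(color, scale(modulate(r, hit.surfaceColor), hit.reflectivity));
    }
    if (hit.transparency > 0.0) {                           // refraction through the surface
        Vec3 t = traceWhitted(hit.position, refractDir(dir, hit.normal), depth - 1);
        color = add(color, scale(t, hit.transparency));
    }
    return color;
}
```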






However, the Whitted model is based on pure specular reflection; it assumes the surface of the object is perfectly smooth, which is obviously inconsistent with the surface properties of most materials in nature. In computer graphics, the area covered by a pixel is far larger than the wavelength of light, and at the microfacet scale the surface is not smooth: the many rays entering a single pixel may be reflected in different directions. Depending on the surface roughness, these scattered directions follow different distributions: a very rough surface reflects light almost uniformly over the hemisphere, while a relatively smooth surface concentrates its reflection near the mirror direction. In modern rendering, these reflection properties are usually expressed with a microfacet BRDF, which essentially uses a simple roughness parameter to simulate a more realistic distribution of reflected light. Combined with a few other parameters such as metalness, this is the currently popular physically based rendering model.
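
As a small illustration of how a single roughness parameter shapes the reflection lobe, here is the GGX (Trowbridge-Reitz) normal distribution function, a common choice for the D term of a microfacet BRDF. The function name and the meaning of alpha follow common engine conventions and are assumptions, not a quote from any particular engine.

```cpp
// Sketch: the GGX (Trowbridge-Reitz) normal distribution, the D term of a microfacet
// BRDF. Roughness alpha controls how tightly microfacet normals, and therefore the
// reflected rays, cluster around the macroscopic mirror direction.
const double kPi = 3.14159265358979323846;

// nDotH is the cosine between the surface normal and the half vector.
// Small alpha -> sharp, mirror-like lobe; large alpha -> broad, matte-looking lobe.
double ggxNormalDistribution(double nDotH, double alpha) {
    double a2 = alpha * alpha;
    double d  = nDotH * nDotH * (a2 - 1.0) + 1.0;
    return a2 / (kPi * d * d);
}
```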






Usually these different distributions are expressed as a combination of diffuse and specular reflection, as shown in the figure. In 1984, Cook proposed distributed ray tracing (distribution ray tracing), which turns the single reflected ray into an integral over a diffuse or specular reflection lobe in space, as shown. To compute this integral, the Monte Carlo method is introduced, so Cook's method is also called stochastic ray tracing.


Stochastic ray tracing


However, Cook's model is very expensive to compute: each ray emitted from the camera is reflected into multiple directions at a surface, dispersing into several rays, and this happens recursively, so each camera ray eventually forms a tree of rays. For indirect diffuse lighting in particular, the rays spread over almost the entire visible hemisphere.
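
The branching estimator behind this can be sketched as follows: at every hit the reflection integral is estimated with N random lobe samples, and each sample recurses again, so the ray count grows roughly as N to the power of the bounce depth, the "tree of rays" described above. The scene and BRDF helpers are only declared and their names are illustrative.

```cpp
// Sketch of the branching estimator used by distributed (stochastic) ray tracing.
struct Vec3 { double x, y, z; };
struct Hit  { Vec3 position, normal; };

// Assumed helpers, declared only.
bool   intersectScene(const Vec3& o, const Vec3& d, Hit& hit);
Vec3   sampleBrdfDirection(const Hit& hit, const Vec3& outDir, double& pdf); // random lobe sample
Vec3   brdf(const Hit& hit, const Vec3& inDir, const Vec3& outDir);
Vec3   emittedPlusDirect(const Hit& hit, const Vec3& outDir);
Vec3   add(const Vec3& a, const Vec3& b);
Vec3   mul(const Vec3& a, const Vec3& b);
Vec3   scale(const Vec3& v, double s);
double cosTheta(const Vec3& n, const Vec3& d);

Vec3 traceDistributed(const Vec3& origin, const Vec3& dir, int depth, int samplesPerBounce) {
    Hit hit;
    if (depth <= 0 || !intersectScene(origin, dir, hit)) return {0, 0, 0};

    Vec3 radiance = emittedPlusDirect(hit, dir);
    Vec3 indirect = {0, 0, 0};
    for (int i = 0; i < samplesPerBounce; ++i) {      // N samples of the reflection lobe ...
        double pdf;
        Vec3 wi = sampleBrdfDirection(hit, dir, pdf);
        Vec3 Li = traceDistributed(hit.position, wi, depth - 1, samplesPerBounce); // ... each recursing
        Vec3 f  = brdf(hit, wi, dir);
        indirect = add(indirect, scale(mul(f, Li), cosTheta(hit.normal, wi) / pdf));
    }
    return add(radiance, scale(indirect, 1.0 / samplesPerBounce));
}
```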



Cook's model follows the recursive nature of the illumination equation, in which the value of each incident ray comes from the reflections of many other surface points. In 1986, Kajiya unified the illumination equation and derived its path-integral form, turning the equation from a recursive structure into an integral over path functions. Each Monte Carlo sample now generates just one path; these paths do not need to recurse, so each path can be generated independently at random, and the value of each path becomes one sample used to compute the final result. This new form is called path tracing, as shown in the figure. In the path-tracing algorithm, points are randomly generated on surfaces in the scene and connected with the light source and the camera to form a path, and each path is one random evaluation of the path function. Depending on scene complexity, a frame may involve hundreds of millions of such rays, which is why traditional path tracing is hard to apply to real-time rendering.


Path Tracing


There is a problem with generating random paths this way: a considerable portion of the path combinations are invalid because some surfaces occlude each other, contributing nothing to the final image. Most implementations therefore build paths incrementally, randomly sampling a direction at each valid reflection or refraction, which produces paths far more efficiently.
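
A sketch of this incremental form is shown below: one sample is one path, a single direction is importance-sampled at every bounce, the path throughput accumulates the BRDF, cosine, and pdf factors, and Russian roulette terminates long low-contribution paths. The helper functions are assumed for illustration.

```cpp
// Sketch of incremental path tracing: one sample = one path, no branching.
struct Vec3 { double x, y, z; };
struct Hit  { Vec3 position, normal; };

// Assumed helpers, declared only.
bool   intersectScene(const Vec3& o, const Vec3& d, Hit& hit);
Vec3   sampleBrdfDirection(const Hit& hit, const Vec3& outDir, double& pdf);
Vec3   brdf(const Hit& hit, const Vec3& inDir, const Vec3& outDir);
Vec3   emitted(const Hit& hit, const Vec3& outDir);
double cosTheta(const Vec3& n, const Vec3& d);
double rand01();
Vec3   add(const Vec3& a, const Vec3& b);
Vec3   mul(const Vec3& a, const Vec3& b);
Vec3   scale(const Vec3& v, double s);

Vec3 tracePath(Vec3 origin, Vec3 dir, int maxBounces) {
    Vec3 radiance   = {0, 0, 0};
    Vec3 throughput = {1, 1, 1};                      // product of BRDF * cos / pdf so far
    for (int bounce = 0; bounce < maxBounces; ++bounce) {
        Hit hit;
        if (!intersectScene(origin, dir, hit)) break; // path escaped the scene
        radiance = add(radiance, mul(throughput, emitted(hit, dir)));

        double pdf;
        Vec3 wi = sampleBrdfDirection(hit, dir, pdf); // one sampled direction, no branching
        throughput = scale(mul(throughput, brdf(hit, wi, dir)),
                           cosTheta(hit.normal, wi) / pdf);

        // Russian roulette: probabilistically terminate long, low-contribution paths.
        const double survive = 0.8;
        if (bounce > 3) {
            if (rand01() > survive) break;
            throughput = scale(throughput, 1.0 / survive);
        }
        origin = hit.position;
        dir    = wi;
    }
    return radiance;
}
```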



Another problem is that the area of the light sources is small relative to the entire scene, so the chance that a path started from the camera eventually lands on a light source is small. Bidirectional path tracing was therefore proposed: paths are traced from both the light source and the camera, and after a certain number of bounces the endpoints of the two sub-paths are connected to form a complete path, which greatly increases the number of paths that actually carry light. The figure shows the difference between the two approaches.
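
The connection step can be sketched schematically as follows: an eye sub-path and a light sub-path are traced independently, and their endpoints are joined with a visibility test. The vertex representation and helper functions here are purely illustrative.

```cpp
// Conceptual sketch of the bidirectional connection step.
struct Vec3       { double x, y, z; };
struct PathVertex { Vec3 position; Vec3 throughput; };

// Assumed helpers, declared only: a shadow-ray visibility test and the product of
// the BRDFs at both endpoints with the geometry term between them.
bool visible(const Vec3& a, const Vec3& b);
Vec3 connectionTerm(const PathVertex& eyeEnd, const PathVertex& lightEnd);
Vec3 mul(const Vec3& a, const Vec3& b);

// Join the two sub-paths into one complete light-transport path.
Vec3 connectSubpaths(const PathVertex& eyeEnd, const PathVertex& lightEnd) {
    if (!visible(eyeEnd.position, lightEnd.position))
        return {0, 0, 0};                             // occluded: no contribution
    Vec3 c = connectionTerm(eyeEnd, lightEnd);
    return mul(mul(eyeEnd.throughput, lightEnd.throughput), c);
}
```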






Most high-quality offline renderers today are basically based on path-tracing algorithms, but the computational cost of path tracing is still very high. Modern path tracing has developed mainly in two directions: one exploits the coherence among the billions of rays to increase processor utilization; the other uses the Metropolis algorithm to make the sampled random paths closer to the true radiance distribution of the final image. The latter is called MLT (Metropolis Light Transport).



The high computational cost of ray/path tracing comes not only from the enormous number of rays demanded by the law of large numbers behind Monte Carlo integration, but also from how the algorithm and its data structures interact with the CPU/GPU. The first issue is the memory execution model. Because processor compute units are much faster than memory access, a compute unit suffers a large latency when fetching data from memory, so modern processors rely heavily on caching: larger blocks of memory are copied into caches with faster access. As shown in the figure, the cache system is usually designed as a multi-level hierarchy, where each level is faster than the one below it; but faster cache chips are more expensive, so the faster a cache is, the less data it tends to hold. The top-level L1 cache approaches register speed, and the cache hierarchy hides the data-access latency between the compute units and memory.






The cache system is designed around the characteristics of typical applications: in general, data used by neighboring instructions is also adjacent in memory, so a cached block, which is much larger than a register, can serve multiple instructions. When an instruction cannot find the data it needs in an upper-level cache, another block is fetched from the next level and replaces the original one; this is a cache miss. An application must therefore maintain a certain degree of coherence in its data access to take full advantage of the cache and improve performance. The path-tracing algorithm clearly does not meet this condition: each ray may bounce in any random direction and intersect any surface in the environment, so the data used by neighboring instructions is often scattered across memory, greatly reducing the cache hit rate.



Another feature of processor architectures is the single-instruction, multiple-data (SIMD) execution model, in which a register holds multiple data values at once and the same instruction operates on all of them; for example, a traditional 128-bit CPU SIMD register can hold four 32-bit values, as shown in the figure. In a GPU, each warp executes 32 threads at a time, and when the data needed by those 32 threads is adjacent in memory it can be fetched in a single transaction, greatly reducing the memory overhead of each thread fetching its own data.






Coherence-based path-tracing techniques therefore group rays into small bundles called ray packets, whose data is contiguous in memory and can be processed by the same instruction. Traditional packet-based techniques mainly benefit primary rays, the rays emitted from the camera into the scene; after the first bounce, rays may originate anywhere in the scene and lose coherence, which has a much larger impact on performance.
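
The sketch below shows one common way to lay out such a packet: a structure-of-arrays format in which each ray component is stored contiguously, so one SIMD instruction (or one GPU memory transaction) can process several rays at once. The layout and packet size are illustrative, not any particular renderer's format.

```cpp
// Sketch of a structure-of-arrays ray packet for SIMD-friendly traversal.
#include <cstddef>

constexpr std::size_t kPacketSize = 8;    // e.g. 8 lanes for 256-bit SIMD

struct RayPacket {
    // Origins and directions stored component-wise: lane i of every array belongs
    // to ray i, so loads of ox[0..7], dy[0..7], etc. are contiguous in memory.
    float ox[kPacketSize], oy[kPacketSize], oz[kPacketSize];
    float dx[kPacketSize], dy[kPacketSize], dz[kPacketSize];
    float tMax[kPacketSize];              // current nearest-hit distance per ray
};

// A compiler can typically auto-vectorize this loop because every lane does the
// same work on adjacent data, the same property a GPU warp relies on.
void advance(RayPacket& p, float t) {
    for (std::size_t i = 0; i < kPacketSize; ++i) {
        p.ox[i] += p.dx[i] * t;
        p.oy[i] += p.dy[i] * t;
        p.oz[i] += p.dz[i] * t;
    }
}
```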



In 2013, Christian Eisenacher of Disney proposed an improved method in a paper called "Sorted Deferred Shading for Production Path Tracing". The core idea of this approach is to sort the rays before the actual computation, as shown in the figure.






In practice this is divided into three main steps (see the flow on the left of the figure). First, in each iteration of computing ray/scene intersections, the rays are sorted and then split by direction and count into multiple packets, and the intersection computation is performed packet by packet. Second, a BVH acceleration structure is built for the scene; because the packed rays are coherent, the surfaces involved in the intersection computation are spatially contiguous, which makes better use of the cache and SIMD features. Finally, because rays from many directions may hit the same surface, shading directly inside the intersection step is not very efficient, so Disney separates out the shading: once the intersections are computed, all hit points are associated with their texture data, the textures are divided into regions, and shading is computed region by region. With these optimizations, as shown, Disney's new path-tracing renderer improved dramatically, and technologies such as Disney's BSDF were first used in Big Hero 6 and subsequent films.
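
The "sort before you work" idea can be sketched at a very coarse level as follows: rays are bucketed by direction so traversal touches nearby BVH nodes together, and hit points are later grouped by material or texture before shading. The data structures and the crude six-bucket direction key here are my own simplifications, not the paper's actual scheme.

```cpp
// Sketch of sorting rays for coherent traversal and hits for deferred shading.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Ray   { float ox, oy, oz, dx, dy, dz; };
struct HitPt { std::uint32_t materialId; float u, v; };

// Crude 6-bucket direction key (dominant axis and sign); a real system uses finer keys.
std::uint32_t directionKey(const Ray& r) {
    float ax = std::fabs(r.dx), ay = std::fabs(r.dy), az = std::fabs(r.dz);
    if (ax >= ay && ax >= az) return r.dx >= 0 ? 0 : 1;
    if (ay >= az)             return r.dy >= 0 ? 2 : 3;
    return r.dz >= 0 ? 4 : 5;
}

void sortRaysForTraversal(std::vector<Ray>& rays) {
    std::sort(rays.begin(), rays.end(), [](const Ray& a, const Ray& b) {
        return directionKey(a) < directionKey(b);  // coherent packets traverse the BVH together
    });
}

void sortHitsForShading(std::vector<HitPt>& hits) {
    std::sort(hits.begin(), hits.end(), [](const HitPt& a, const HitPt& b) {
        return a.materialId < b.materialId;        // shade one texture/material region at a time
    });
}
```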






The coherence-based path-tracing idea makes good use of modern processor architectures, and essentially any path-tracing algorithm can be improved in this respect; it is also the direction in which traditional path tracing is moving toward real time.



The MLT method, based on the Metropolis algorithm, focuses on sampling paths more accurately to compute higher-quality images; after all, many of the path samples in traditional path tracing may contribute very little. The core idea of the Metropolis algorithm is to use a Markov chain: the current sample is perturbed at an appropriate scale to produce a new candidate, and the candidate is accepted with a probability derived from the target distribution, so that the accepted samples end up following the distribution of the actual target function.
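
Here is a minimal one-dimensional Metropolis sketch of that idea: a Markov chain perturbs the current sample and accepts the proposal with probability min(1, f(y)/f(x)), so the chain's samples become distributed proportionally to the target function f. The Gaussian target stands in for the image contribution function; it is not part of any MLT implementation.

```cpp
// Minimal 1D Metropolis sketch: samples of x follow the (un-normalized) target f.
#include <algorithm>
#include <cmath>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    std::normal_distribution<double> perturb(0.0, 0.1);      // small-scale mutation

    auto target = [](double x) {                              // un-normalized target density,
        return std::exp(-x * x / 2.0);                        // standing in for image contribution
    };

    double x = 0.0, sum = 0.0;
    const int kSamples = 100000;
    for (int i = 0; i < kSamples; ++i) {
        double y = x + perturb(rng);                          // propose a perturbed state
        double a = std::min(1.0, target(y) / target(x));      // acceptance probability
        if (uniform(rng) < a) x = y;                          // otherwise keep the current state
        sum += x * x;                                         // accumulate a statistic of interest
    }
    // sum / kSamples now estimates E[x^2] under the target distribution (close to 1 here).
    return 0;
}
```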






Just as each random sample in the path-tracing algorithm generates one path whose illumination is then computed, each sample in MLT is a path: a new path is created by perturbing parts of the previous path according to some mutation strategy, and the illumination computed for that path becomes the value associated with that sample.



Because each sample in MLT is an entire path, the distribution of the sampled paths corresponds to the radiance distribution of the whole image; MLT therefore computes the result of the entire image at once, and each pixel's value is obtained with a filter that takes a weighted average of the contributions falling around that pixel.






The original MLT algorithm perturbs the vertices of a path directly in path space, so the newly generated path may be occluded, causing a very large number of rejected samples. In the 2002 paper "A Simple and Robust Mutation Strategy for the Metropolis Light Transport Algorithm", Csaba Kelemen et al. proposed perturbing the path in a unit hypercube, the so-called primary sample space: each coordinate in the hypercube is a uniformly distributed random number in [0,1], and these numbers are mapped through the inverse transforms used for BRDF and light sampling to obtain the actual directions, which in turn generate a new path. Random paths generated in primary sample space have a much higher acceptance rate.
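
A mutation in the spirit of this primary-sample-space approach can be sketched as below: the state is a vector of uniform numbers in [0,1), a "small step" jitters each coordinate and wraps it back into the unit hypercube, and an occasional "large step" resamples the whole vector to keep the chain exploring; the renderer then maps these numbers through its importance-sampling transforms to rebuild a path. The names and the mutation scale are assumptions for illustration.

```cpp
// Sketch of primary-sample-space mutations for a Kelemen-style MLT chain.
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Small step: jitter every coordinate and wrap it back into [0,1).
std::vector<double> smallStep(const std::vector<double>& u, std::mt19937& rng) {
    std::normal_distribution<double> jitter(0.0, 1.0 / 64.0);  // small mutation scale
    std::vector<double> v = u;
    for (double& x : v) {
        x += jitter(rng);
        x -= std::floor(x);                 // wrap so the sample stays inside the hypercube
    }
    return v;
}

// Large step: occasionally forget the current state entirely to keep the chain ergodic.
std::vector<double> largeStep(std::size_t dim, std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    std::vector<double> v(dim);
    for (double& x : v) x = uniform(rng);
    return v;
}
```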






Newer MLT-based algorithms use path derivatives to generate better perturbations that conform to the surface's illumination distribution; for example, near a glossy highlight they can choose more directions around the specular lobe, so the MLT result gets closer to the distribution of the real image, as shown in the figure. Techniques built on path derivatives, such as gradient-domain MLT, bring the Markov chain's samples even closer to the true distribution of the image and are currently a hot topic in path-tracing research.


