Tips on importons and irradiance particles in mental ray

Source: Internet
Author: User

Some tips, not written by me.

 

CTRL.studio (04-16-2008): Hi there!

mental ray 3.6+ comes with importons and irradiance particles.
Here are some tests and explanations, and finally a geoshader to enable the new features.

Importons are an implementation used to compute importance-driven sampling maps.

They can be used for importance-driven photon maps, as a merging function for photons.
That is, photons close to an importon will be saved to the map, and those that are not will simply be discarded. This leads to very light photon maps that are nonetheless very precise, because we only discard photons which do not have a great impact on the final image, so we get more photon density exactly where we need it. (OK, photons are not really 'just' discarded: when a photon is discarded, its power is redistributed over nearby photons.)
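
To make the merging idea concrete, here is a minimal Python sketch of importance-driven photon merging (my own illustration of the idea, not mental ray's actual code or API): photons with no importon within the merge radius are dropped, and their power is redistributed to the nearest kept photon so the total energy is preserved.

    import math

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def merge_photons(photons, importons, merge_radius):
        """Sketch: keep photons that lie near an importon, redistribute the
        power of discarded photons to the nearest kept photon."""
        kept, discarded = [], []
        for p in photons:
            near = any(distance(p["pos"], i) <= merge_radius for i in importons)
            (kept if near else discarded).append(p)
        for d in discarded:
            if not kept:
                break
            nearest = min(kept, key=lambda k: distance(k["pos"], d["pos"]))
            nearest["power"] = [a + b for a, b in zip(nearest["power"], d["power"])]
        return kept

    # Tiny usage example with made-up positions and powers:
    photons = [{"pos": (0.0, 0.0, 0.0), "power": [1.0, 1.0, 1.0]},
               {"pos": (5.0, 0.0, 0.0), "power": [1.0, 1.0, 1.0]}]
    importons = [(0.1, 0.0, 0.0)]
    print(merge_photons(photons, importons, merge_radius=0.5))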

To engage importons together with photons, you have to set the 'merge' parameter in the global illumination section to zero, and set the 'merge' parameter in the importon rollout to something non-zero. This tells mental ray to use importons to merge photons instead of a fixed merging distance (which is what you get when the merge parameter in the GI rollout is non-zero).
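
Expressed as simple selection logic, the interaction between the two 'merge' parameters looks roughly like this (a conceptual sketch only; the names are illustrative and not mental ray option strings):

    def choose_merge_mode(gi_merge, importon_merge):
        """Illustrative sketch of how the two 'merge' values interact."""
        if gi_merge == 0.0 and importon_merge > 0.0:
            return ("importon-driven merging", importon_merge)
        if gi_merge > 0.0:
            return ("fixed-distance merging", gi_merge)
        return ("no merging", None)

    print(choose_merge_mode(gi_merge=0.0, importon_merge=0.05))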

Importons are shot in the importon phase, which happens before photon shooting; they are then discarded once the photon map has been saved. In the verbosity output you'll find how many photons were merged and how many were saved to the map. The number of importons shot depends on the image size if you use the 'density' parameter, or an arbitrary number can be shot with the fixed-count ('emitted') parameter.
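
For example, if 'density' is taken to mean importons per pixel (an assumption on my part), the importon count simply scales with the resolution:

    # Assumption: 'density' = importons per pixel, so the count scales with resolution.
    width, height, density = 1280, 720, 1.0
    print(int(width * height * density))   # -> 921600 importons at full density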

When using importons together with photon maps, the 'traverse' parameter should be checked. In this mode importons are not blocked by objects and continue evaluating importance for any further intersections.

The practical way to use importons is to shoot a lot of photons. The problem with photons is not photon shooting but the photon map itself: access, balancing, storage and network sharing.

Generally, on 32-bit systems, once more than about 15 million photons are stored in a map we start having problems. First, the map can be around 200-300 MB or more; then mental ray will simply crash before even attempting to save the map, while trying to optimize it prior to saving. So, why would we want to shoot 15 million photons in the first place?
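
A quick back-of-the-envelope check of those numbers, assuming a compact photon record of roughly 20 bytes (the classic layout of position, compressed direction and packed power; mental ray's actual record may well be larger):

    photons = 15_000_000
    bytes_per_photon = 20    # assumption: compact classic photon record
    print(photons * bytes_per_photon / 1e6, "MB")   # -> 300.0 MB, the same order as above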

There are, generally, a couple of techniques to deal with photons:
Use a low-detail solution just to establish the overall lighting conditions, then add detail with a final gather pass.
Use a high-detail photon map to deal more accurately with lighting details, then get contact shadows and the like with a final gather pass.

Now we have a third option.
We can use photons together with importons to get accurate light bouncing while still capturing very small details.

An example.
Here I shoot 5 million photons in a Cornell box.
Photon shooting takes no more than 30 seconds.
I'm working on two dual-quads in a distributed environment.
Once the map is saved, its size is around 102 MB. That's already enough to hurt my DBR rendering, as I have to wait for the map to be broadcast to the slave, and since the scene is very light I end up waiting for the slave to receive its photon map when the master has already finished, i.e. wasted render time.
Also, the results are the classic results we achieve with photon maps, i.e. very poor detail. Just look at the cube with its bottom part in shadow, and how unnatural the shadowing there is.

http://img377.imageshack.us/img377/485/impphotonysd6.png

Now we can try with importons: full density, a merging distance of 0.05 (cm, while the photon radius is 1 cm), and traverse enabled.

http://img379.imageshack.us/img379/6163/impmergingverbosityzd1.png

The photon map is now 1 KB.
From the verbosity output we can see that only around 200,000 photons are necessary to cover the 'important' zones of our image. Since the photon distribution is importance-based, we shoot a lot of photons covering all the zones with sufficient density, so that after merging via importons we are left with enough photons to bring out small details. Just take a look at the cube's shadow, how it was before without importons, and how it is now.

http://img74.imageshack.us/img74/6382/impphotimplc2.png

(Edit: as importons are used to merge photons and are then discarded, only the photon map will be available. That means one can use Maya's photon map visualizer to see the photon distribution (with and without importons) directly in the viewport.) :)

====================================================================

Then, we can use importons without photons, with the help of irradiance particles.
Besides the fact that you need a minimum number of importons to get a smooth image, the most important parameter here (before, with photons, it was the merging distance; here that is discarded) is the trace depth. We need to let our importons bounce around the scene a lot to get an importon map suitable for a complete walkthrough (for example). Start with a density of 0.2 and go up; for the depth, start with something like 2.
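
As a starting point, the values suggested above can be summarized like this (just an illustrative recap of the text; the dictionary keys are mine and are not mental ray option strings):

    # Illustrative starting values taken from the text above (key names are mine):
    importons_only_settings = {
        "density": 0.2,      # importons per pixel; raise it until the image is smooth
        "trace_depth": 2,    # let importons bounce a lot for full walkthroughs
    }
    print(importons_only_settings)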

Irradiance particles.
Let me attach mental images' description of this 'novel' technique:

'A short description of the technique may be given as follows: before rendering, importons are shot into the scene from the camera. Their hit positions, together with information on the amount of direct (and possibly indirect) illumination coming in at their position (hence the name "irradiance particles"), are combined into a map. Optionally, one or more passes of indirect illumination can be computed. The nature of the algorithm is that the computation is importance-driven. During rendering, irradiance particles are used to estimate the irradiance for every shading point; if only direct illumination is collected for the irradiance particles, then this is equivalent to one bounce of indirect lighting. Irradiance can also be interpolated from precomputed values at the particles' positions.'

The parameters are close to those of final gather.
There is a number of rays shot per particle sample.
There is a mode where the calculation interpolates over the particle positions (optionally only for secondary rays), and a mode where we act like a brute-force approach, with no interpolation at all.
'Passes' means how many indirect bounces are considered when calculating the irradiance, i.e. something like FG diffuse bounces.
Finally, if you have an environment, such as an mr_sky, the irradiance particles implementation supports a separate set of sampling parameters to deal just with that.
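
As a minimal sketch of the interpolated lookup described above (plain Python illustrating the idea, not mental ray's implementation): at a shading point, the irradiance stored on the nearest precomputed particles is blended together. The 'rays' and 'passes' parameters control how those stored values are computed in the precomputation phase, which is omitted here.

    import math

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def irradiance_at(point, particles, k=32):
        """Interpolated lookup sketch: blend the irradiance stored on the k
        nearest particles, weighting closer particles more strongly."""
        nearest = sorted(particles, key=lambda p: dist(point, p["pos"]))[:k]
        weights = [1.0 / (dist(point, p["pos"]) + 1e-6) for p in nearest]
        return sum(w * p["irr"] for w, p in zip(weights, nearest)) / sum(weights)

    # Tiny usage example with two made-up particles:
    particles = [{"pos": (0.0, 0.0, 0.0), "irr": 1.0},
                 {"pos": (1.0, 0.0, 0.0), "irr": 0.2}]
    print(irradiance_at((0.25, 0.0, 0.0), particles, k=2))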

Here are some images to demonstrate the new feature.
Only for the first image was a full importon + irradiance particles pre-pass shot; for all the others I just froze the map and went straight to rendering. The pre-pass took around 30 minutes; all the subsequent frames took around 3 minutes (at 1K).

http://img396.imageshack.us/img396/2710/shot001rg2.png

http://img120.imageshack.us/img120/6391/shot060hn3.png

http://img72.imageshack.us/img72/8105/shot0109ut1.png

(Importon density 0.5, depth 4; irradiance particles: 680 rays, 2 passes, 32 interpolation points, 480 env rays)

Here is also an animation showing that irradiance particles are flicker-free.
This is a medium-detail solution; there are still some blotches in the shadowed areas, but these blotches do not flicker because they are not recomputed for every frame. (The banding comes from the web compression.)

http://rapidshare.com/files/107992908/output3.mov.html

====================================================================

Another feature that comes with mental ray 3.6+ is advanced framebuffer memory management, namely the cached mode. In cached mode you can render at any image size (even on 32-bit): if it is enabled, only a small fraction of the resulting image (or of the user framebuffers) is kept in memory, namely the newly rendered tiles and the recently accessed tiles.

This mode should be used only for batch rendering (it will crash Maya if used for the render view; besides, it does not make much sense to render huge images into a viewer). I just rendered a 20K image, in floating point, on a 32-bit system. It is slower than the other methods, so it should be used only when mental ray is not able to create a framebuffer of the required size (generally images larger than about 4K on 32-bit).
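
The idea behind the cached mode is essentially a tile cache: only recently written or read tiles live in memory, everything else is spilled to a cache file on disk. A toy illustration of that idea (my own sketch, not mental ray's code):

    from collections import OrderedDict

    class TileCache:
        """Toy LRU tile cache: only recently used tiles stay in memory,
        the rest are spilled to 'disk' (a dict standing in for a cache file)."""
        def __init__(self, max_tiles_in_memory=4):
            self.max_tiles = max_tiles_in_memory
            self.memory = OrderedDict()   # tile_id -> pixel data kept in RAM
            self.disk = {}                # tile_id -> pixel data spilled out

        def store(self, tile_id, pixels):
            self.memory[tile_id] = pixels
            self.memory.move_to_end(tile_id)
            while len(self.memory) > self.max_tiles:
                old_id, old_pixels = self.memory.popitem(last=False)  # evict LRU tile
                self.disk[old_id] = old_pixels

        def fetch(self, tile_id):
            if tile_id not in self.memory:            # tile was spilled: reload it
                self.store(tile_id, self.disk.pop(tile_id))
            self.memory.move_to_end(tile_id)
            return self.memory[tile_id]

    cache = TileCache(max_tiles_in_memory=2)
    for t in range(5):
        cache.store(t, [0.0] * 64)        # pretend each tile is 64 float pixels
    print(len(cache.memory), "tiles in RAM,", len(cache.disk), "tiles on disk")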

RC 0.2 info: Option: FB mem management cached
RC 0.2 info: Option: Image Type interpolate
RC 0.2 info: 0 rgba_fp Yes
RC 0.2 info: Camera: Focal Length 1.37795
RC 0.2 info: Camera: aperture 1.41732
RC 0.2 info: Camera: aspect 0.8
RC 0.2 info: Camera: resolution 16000 20000

====================================================================

A couple of things are needed to use importons and irradiance particles in Maya 2008 SP1.
You should avoid any Maya shader:
'Suppress all Maya shaders' should be checked, and 'Export with Shading Engine' should not be checked, in the Shading Engine node (the tests were made with mia_material).
The same goes for lights: 'Suppress all Maya shaders' needs to be checked, and a custom light shader should be supplied (mr sky portals work well).
Finally, in the render settings, go to Translation -> Customization and uncheck 'Export State Shader'.

Also, read the description file that comes with the geoshader for a more detailed description of the parameters and so on.
A 64-bit version is available (remove the .x64 postfix).

Have fun,
Max

 

 

Dagon1978 (04-21-2008): Nice, but honestly I see no difference compared to using FG + GI and AO with color bleed.
You get the contact shadows with the sky portals, which are area lights passing the light and color information of the environment into the interior. And if you mix that with the AO color bleed you get some kind of extra shadow (color bleed with contact shadows), which leads to the same result as importons and IP.
And faster.

I think you're right about the interpolated IP.
I want to say something more about this: I think the brute-force IP is a great improvement (it's much faster than FG brute force or path tracing, even than the puppet integration), but I'm also a bit worried about the future direction of this kind of feature.
OK, now we have a good starting point (importons); they are great for unbiased renders (or something like that), so why not think about progressive tracing (look at the light cache and PPT in V-Ray... and next year they'll have a new interactive/progressive engine based on that)?
What I'm trying to say is: we can't get feedback from the mental ray developers, and we can't see where the new features are heading, and surely this is not good, because at times they don't take the right direction for us, IMHO.
mental images developers, you must listen to your users' needs if you want to progress!! Please!!

On the one hand we have Zap here, and we can talk about shaders, and I'm really sure he takes all the feedback we give into consideration. We can see the benefits of this: we have great shaders integrated in mental ray now, and this is the real mental ray flagship right now.

On the other hand we have the core development (and I could criticize some choices here, but I want to stay constructive): why can't they have a forum (c'mon, we're in 2008!!)? Even the mental ray website is very depressing (mental ray 3.3 information? Oh my...).
And then I can see the consequences of this... too many features needed, some features developed, but none of them really convincing...

OK, think about the interpolated IP. I'm really hoping they can make it better in the next release, but hey, looking at past problems (I've been talking about the 3.5 FG interpolation problem since the first day I worked with it, and nothing has changed in... 3 years??) I'm a bit worried...
Right now, from what I can understand (I'm not a technical guy), there is something wrong in the IP mechanism itself, I mean in the interpolated part of IP.
The good starting point of photons (and of the light cache, and of all the good algorithms for secondary rays) is this: if you need an interpolated solution, you can have a smooth and fast calculation for the secondary rays of the diffuse lighting, and then add detail with a good algorithm for the primary rays (FG in mental ray, the irradiance cache in V-Ray, etc.).
Now with IP: you shoot importons for both the primary and the secondary rays, but you can't control the primary and secondary ray quality separately!
IMHO this is the big problem right now with interpolated IP: if you want good GI you have to shoot a lot of importons, a lot of importons means a lot of rays to shoot, and a lot of rays plus a lot of importons means big render times. And then you get an almost perfect interpolated IP render, but the render times are very close to the uninterpolated solution!
I'll do some comparisons in the next few weeks so you can see it much better.
So, what's the solution?
IMHO it is to separate the primary and secondary ray quality options! And there are two ways to go: there is the photon + FG way (that's the mental ray way), and there is the V-Ray (and Turtle and FR and Kray...) way, separating primary and secondary rays in the core!
I've been requesting this feature for at least 3 years by now... I can't understand why they never heard it! The Turtle developers are much more attentive! So, maybe the mental ray core has some technical limitation and you can't do that? Let us know, but look for some other solution!

I don't want to turn this thread into another whining thread, Max. I hope you can understand what I'm trying to say: I'm not here to criticise the mental ray developers, I want to be constructive, I want to give feedback, I want to make feature requests, and so on.
But if you don't like this kind of talk I can erase my words and open another thread for it...

Thanks

Mat

 

 

Irradiance particles (mental ray 3.7)

This algorithm is a novel approach to computing global illumination based on importance sampling, which tends to converge much faster to a desirable quality than existing solutions like global illumination photon tracing combined with final gathering.

Before rendering starts, importons are shot from the camera into the scene and collected as a new kind of particle, called an irradiance particle. They carry information about the amount of direct illumination coming in at their position (hence the name "irradiance") and, optionally, the amount of indirect irradiance incident at their position (if indirect passes are enabled). During rendering, the stored particles are used to estimate the irradiance at a shading point: if just direct illumination was collected for the irradiance particles, this is equivalent to one bounce of indirect lighting.

The irradiance can also be interpolated from precomputed values at the particle positions.

The irradiance particles algorithm simulates some, but not all, of the indirect lighting interactions of the traditional global illumination algorithms in mental ray. For this reason, if irradiance particles are enabled, mental ray will automatically turn off global illumination photon tracing if it was activated. This is a common situation when external applications are asked to generate mental ray scenes with photon shaders attached, which are needed for importons. Caustics can be used together with irradiance particles because they capture indirect lighting effects that irradiance particles cannot simulate. If both final gathering and irradiance particles are enabled, final gathering is preferred and irradiance particles are switched off automatically.

Irradiance particles support a special IBL-style functionality which can be enabled by setting the number of indirect passes to -1. In this case only the environment map lighting, but not the diffuse bounces, is taken into account. If interpolation is disabled, only the environment presampling map is built and no further precomputation steps are required. If interpolation is enabled, particles are emitted in the precomputation pass in the usual way, but are used as interpolation points only.

The irradiance particles feature is controlled by scene options and by command line arguments of standalone mental ray.
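
To summarize the modes described above in one place (a plain illustration of the text; treating 0 passes as "direct only" is my assumption, and this is not mental ray's option parser):

    def irradiance_particles_mode(indirect_passes, interpolate):
        """Illustrative summary of the behaviors described in the text above."""
        if indirect_passes == -1:
            detail = ("environment presampling map only, no further precomputation"
                      if not interpolate
                      else "particles emitted, but used as interpolation points only")
            return "IBL-style: environment lighting only, no diffuse bounces (" + detail + ")"
        if indirect_passes == 0:
            # Assumption: 0 passes = only direct illumination stored on the particles,
            # which the text says amounts to one bounce of indirect lighting at render time.
            return "direct illumination on particles (one indirect bounce at render time)"
        return str(indirect_passes) + " indirect pass(es) collected on the particles"

    print(irradiance_particles_mode(-1, interpolate=False))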

 
