
Incentive

Your typical ugly scene with no ambient

Using Nuaj and Cirrus to create test projects is alright, but there came a time when I needed to start doing what I was put on this Earth to do : deferred HDR rendering. So naturally I started writing a deferred rendering pipeline, which is already quite advanced. At some point I needed a sky model so, naturally again, I turned to HDR rendering to visualize the result.

When you start talking HDR, you immediately imply tone mapping. I implemented a version of the "filmic curve" tone mapping discussed by John Hable from Naughty Dog (a more extensive and really interesting talk can be found here [1], but beware : it's about 50Mb !).

But to properly test your tone mapping, you need well-balanced lighting for your test scene, which means no hyper-dark patches in the middle of a hyper-bright scene, as is usually the case when you implement directional lighting from the Sun and... no ambient !


Let's put some ambience

That's when I decided to re-use my old "ambient SH" trick from a few years ago. The idea was to pre-compute some SH for the environment at different places in the game map, and to evaluate the irradiance for each object depending on its position in the network, as shown in the figure below.

SHEnvNetwork.png

The algorithm was something like :

For each object
{
 Find the 3 SH nodes the object stands in
 ObjectSH = Interpolate SH at object's position
 Render( Object, ObjectSH );
}
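The "Interpolate SH at object's position" step is just a barycentric blend of the coefficients of the 3 nodes whose triangle contains the object. Here is a minimal sketch in the same shader-like language (the SHNode type and function names are mine, not the engine's) :

struct SHNode
{
 float3 Position; // World position of the environment node
 float3 SH[9];    // 9 RGB SH coefficients (3 bands)
};

// Blends the SH of the 3 nodes surrounding the object, using the barycentric
// weights of the object's position inside their triangle (Weights sums to 1).
void InterpolateSH( SHNode A, SHNode B, SHNode C, float3 Weights, out float3 ObjectSH[9] )
{
 for ( int i=0; i < 9; i++ )
  ObjectSH[i] = Weights.x * A.SH[i] + Weights.y * B.SH[i] + Weights.z * C.SH[i];
}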

And the rendering was something like (in shader-like language) :

float3  ObjectSH[9]; // These are the ObjectSH from the previous CPU algorithm and change for every object

float3   PixelShader() : COLOR
{
 float3 SurfaceNormal = RetrieveSurfaceNormal(); // From normal maps and stuff...
 float3 Color = EstimateIrradiance( SurfaceNormal, ObjectSH ); // Evaluates the irradiance in the given direction
 return Color;
}
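The article doesn't show EstimateIrradiance, but a typical 3-band version is the analytic irradiance evaluation from Ramamoorthi & Hanrahan. A possible implementation, assuming the usual L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22 ordering of the coefficients (the ordering is an assumption of mine) :

// Analytic irradiance in direction N from 9 SH coefficients (Ramamoorthi & Hanrahan 2001).
float3 EstimateIrradiance( float3 N, float3 SH[9] )
{
 const float c1 = 0.429043;
 const float c2 = 0.511664;
 const float c3 = 0.743125;
 const float c4 = 0.886227;
 const float c5 = 0.247708;

 return c4 * SH[0]                                                             // L00
      + 2.0 * c2 * (SH[3] * N.x + SH[1] * N.y + SH[2] * N.z)                   // L11, L1-1, L10
      + 2.0 * c1 * (SH[4] * N.x * N.y + SH[5] * N.y * N.z + SH[7] * N.x * N.z) // L2-2, L2-1, L21
      + c1 * SH[8] * (N.x * N.x - N.y * N.y)                                   // L22
      + c3 * SH[6] * N.z * N.z - c5 * SH[6];                                   // L20
}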

The low-frequency nature of irradiance allows us to store a really sparse network and to concentrate the nodes where irradiance is going to change rapidly, like near occluders or at shadow boundaries. The encoding of the environment in spherical harmonics was simply done by rendering the scene into small cube maps (6x64x64) and encoding each texel weighted by the solid angle it covers (the formula for the solid angle of a cube map texel can be found here).
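For reference, a sketch of that projection (function names are mine; the solid angle formula is the standard one the article links to) : each texel adds Radiance * Y_i(Direction) * SolidAngle to coefficient i, summed over the 6x64x64 texels.

// Helper for the solid angle of a cube map texel : (u,v) are coordinates on the [-1,+1] cube face.
float AreaElement( float u, float v )
{
 return atan2( u * v, sqrt( u * u + v * v + 1.0 ) );
}

// Solid angle covered by the texel centered on (u,v), with TexelSize = 2/64 here.
float TexelSolidAngle( float u, float v, float TexelSize )
{
 float u0 = u - 0.5 * TexelSize, u1 = u + 0.5 * TexelSize;
 float v0 = v - 0.5 * TexelSize, v1 = v + 0.5 * TexelSize;
 return AreaElement( u0, v0 ) - AreaElement( u0, v1 ) - AreaElement( u1, v0 ) + AreaElement( u1, v1 );
}

// Real SH basis for 3 bands, in the same L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22 order as above.
void EvaluateSHBasis( float3 Dir, out float Y[9] )
{
 Y[0] = 0.282095;
 Y[1] = 0.488603 * Dir.y;
 Y[2] = 0.488603 * Dir.z;
 Y[3] = 0.488603 * Dir.x;
 Y[4] = 1.092548 * Dir.x * Dir.y;
 Y[5] = 1.092548 * Dir.y * Dir.z;
 Y[6] = 0.315392 * (3.0 * Dir.z * Dir.z - 1.0);
 Y[7] = 1.092548 * Dir.x * Dir.z;
 Y[8] = 0.546274 * (Dir.x * Dir.x - Dir.y * Dir.y);
}

// Accumulates one cube map texel into the 9 environment SH coefficients.
void AccumulateTexel( float3 Radiance, float3 Direction, float SolidAngle, inout float3 SH[9] )
{
 float Y[9];
 EvaluateSHBasis( Direction, Y );
 for ( int i=0; i < 9; i++ )
  SH[i] += Radiance * Y[i] * SolidAngle;
}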

This was a neat and cheap trick to add some nice directional ambient on my objects. You could also evaluate the SH in a given direction to perform some "glossy reflection", or even some translucency using a vector that goes through the surface. That was the end of those ugly normal maps that don't show in shadow !

And all that for a very low memory/disk footprint, as I only stored 9 RGBE-packed coefficients (= 36 bytes) plus a 3D position in the map (= 12 bytes), which amounts to 48 bytes per "environment node". The light field was rebuilt when the level was loaded and that was it.
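In other words, something along these lines (the exact packing shown here is mine, only the 48-byte budget comes from the article) :

// On-disk environment node : 48 bytes total.
struct SHEnvironmentNode
{
 float3 Position;    // 3D position in the map           = 12 bytes
 uint   PackedSH[9]; // 9 SH coefficients, RGBE-packed   = 36 bytes
};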

Unfortunately, the technique didn't allow changing the environment in real time, so I turned to a precomputed array of environment nodes : the network of environment nodes was rendered at different times of the day, and for different weather conditions (we had a whole skydome and weather system at the time). You then needed to interpolate the nodes from the different networks based on your current conditions, and use that interpolated network for your objects in the map.

Another obvious drawback of the method is that it only works in 2D. That was something I didn't care about at the time (and still don't), as a clever mind can always upgrade the algorithm to handle several layers of environments stacked vertically and interpolate between them...


Upgrade

For my deferred rendering though, I really wanted something dynamic and, above all, something I would render in screen space like any other light in the deferred lighting stage. I had omnis, spots and directionals, so why not a fullscreen ambient pass ?

My original idea was neat, but I had to compute the SH for every object and to interpolate them manually. I didn't like the idea of making the objects dependent on lighting again, which would defeat the purpose of deferred lighting.

SH Environment Map

My first idea was to render the environment mesh into a texture viewed from above and let the graphics card interpolate the SH nodes by itself, as it is, after all, something it was built for.

I decided to create a vertex format containing a 3D position and 9 SH coefficients, and to triangulate my SH environment nodes into a mesh that I would "somehow" render into a texture. I only needed to cover the pixels the camera can see, so the only portion of the SH environment mesh I needed was a quad bounding the 2D projection of the camera frustum, as seen in the figure below.

SHEnvFrustumQuad.png

The Delaunay triangulation of the environment nodes network, rendered into a 256x256 texture attached to the camera frustum

Again, due to the low frequency of the irradiance variation, it's not necessary to render into a texture larger than 256x256.

I also use a smaller frustum for the environment map rendering than the actual camera frustum, to concentrate on objects close to the viewer. Another option would be to "2D project" vertices in 1/DistanceToCamera, as for conventional 3D objects, so we maximize resolution for pixels that are closer to the camera, but I haven't found the need yet (anyway, I haven't tested the technique on large models either, so maybe it will come in handy sooner rather than later !).

What do we render in the SH env map ?

We have the power of the vertex shader to process SH nodes (which, as you remember, contain a position and 9 SH coefficients). That's the ideal place to process the SH in some way that allows us to make the environment fully dynamic.

I decided to encode 2 kinds of information in each SH vertex (whose coefficients really are float4, as sketched below) :

  • The direct light occlusion in W
  • The indirect diffuse lighting in XYZ
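A sketch of what such a vertex could look like (the semantic names are hypothetical) :

// One SH environment vertex, as fed to the env map shader.
// Each of the 9 coefficients packs the two signals listed above :
//  .w   = direct (sky) light occlusion, monochromatic
//  .xyz = indirect diffuse lighting, RGB
struct SH_VERTEX
{
 float3 Position : POSITION;
 float4 SH0 : TEXCOORD0;
 float4 SH1 : TEXCOORD1;
 float4 SH2 : TEXCOORD2;
 float4 SH3 : TEXCOORD3;
 float4 SH4 : TEXCOORD4;
 float4 SH5 : TEXCOORD5;
 float4 SH6 : TEXCOORD6;
 float4 SH7 : TEXCOORD7;
 float4 SH8 : TEXCOORD8;
};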

But direct lighting is harsh : it's very high-frequency and creates hard shadows. You should never encode direct lighting in SH unless you have many SH bands, and we only have 3 here (i.e. 9 coefficients).

That's why only the direct sky light will be used as a direct light source, because it's smooth and varies slowly.

The indirect diffuse lighting, on the other hand, varies slowly even when the scene is lit by a really sharp light like the Sun. That's because it's a diffuse reflection of the Sun's light, and diffuse reflections are smooth.


We also provide the shader that renders the env map with 9 global SH coefficients, each being a float4 (see the sketch after this list) :

  • The monochromatic Sun light in W that will be encoded as a cone SH
  • The Sky light in XYZ
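For illustration, here is a possible declaration of these constants, together with one standard way of building a "cone SH" for the Sun : project a cone of the given half-angle onto zonal harmonics, then rotate the result toward the Sun direction. This is only a sketch of the idea (names and exact code are mine, not necessarily the shader's) :

float4 SkySunSH[9]; // .xyz = Sky light SH (RGB), .w = monochromatic Sun light encoded as a cone SH

// Projects a cone of half-angle Angle pointing toward Direction onto 9 SH coefficients,
// using the zonal coefficients of a spherical cap rotated with
// c_l^m = sqrt( 4*PI / (2l+1) ) * z_l * Y_l^m( Direction ).
void BuildSunConeSH( float3 Direction, float Angle, float Intensity, out float ConeSH[9] )
{
 float PI = 3.14159265;
 float CosA = cos( Angle );

 // Zonal coefficients of the cap for bands 0, 1, 2, already multiplied by the rotation factors
 float r0 = sqrt( PI ) * (1.0 - CosA)                           * sqrt( 4.0 * PI / 1.0 );
 float r1 = 0.5 * sqrt( 3.0 * PI ) * (1.0 - CosA * CosA)        * sqrt( 4.0 * PI / 3.0 );
 float r2 = 0.5 * sqrt( 5.0 * PI ) * CosA * (1.0 - CosA * CosA) * sqrt( 4.0 * PI / 5.0 );

 float x = Direction.x;
 float y = Direction.y;
 float z = Direction.z;

 ConeSH[0] = Intensity * r0 * 0.282095;
 ConeSH[1] = Intensity * r1 * 0.488603 * y;
 ConeSH[2] = Intensity * r1 * 0.488603 * z;
 ConeSH[3] = Intensity * r1 * 0.488603 * x;
 ConeSH[4] = Intensity * r2 * 1.092548 * x * y;
 ConeSH[5] = Intensity * r2 * 1.092548 * y * z;
 ConeSH[6] = Intensity * r2 * 0.315392 * (3.0 * z * z - 1.0);
 ConeSH[7] = Intensity * r2 * 1.092548 * x * z;
 ConeSH[8] = Intensity * r2 * 0.546274 * (x * x - y * y);
}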

What we are going to do is basically :