Revision as of 18:50, 30 March 2011 by Patapom (talk | contribs)

This page talks about the complex task of rendering clouds in real-time. It's more a guide through my own experience in attempting to render clouds than an extensive review of all existing techniques. The target audience is the skillful amateurs of the demoscene and video game development community rather than expert scientists, and I hope I didn't make too many mistakes in the various formulas and statements used hereafter.


First you must know that I've been in love with, and studied, clouds for a pretty long time now. I was taking pictures of sunsets from my bedroom window when I was 15; I'm now 35 and nothing has changed: I'm still amazed by the variety of colors and shapes that clouds can bring to an otherwise "sad" blue sky.

Cumulo-Nimbus

My favorites are the storm clouds : the humongous cumulo-nimbus.

When it comes to rendering, you have 2 difficulties to overcome : lighting and shape. Both are difficult and very important.


Cloud Types

You may underestimate the importance of a cloud's shape, especially because clouds look random, so you might think a random shape is okay. But then you are plain wrong, because clouds are anything but random.

Shape

If you watch a time-lapse video of a cloud, you can see how clouds move, how they are created and destroyed. One main observation is that accelerated cloud motion makes clouds look like regular smoke. This is an important fractal pattern: clouds are like smoke, but bigger. And as they are orders of magnitude larger, their time scale is orders of magnitude longer: it takes several minutes for the huge mass of a cloud to accomplish the same motion as smoke. A bit like how human time must look like slow motion to insects.

You can see in the example video above how the cloud is "pulsed" from hot air pockets below. You can also see how there are "sources" and "sinks" that make the cloud appear and disappear.

All these obey the complex laws of thermodynamics : hot humid air rises up and evaporates while cold air falls down and liquefies, all these occurring at precise temperature and pressure, depending on the altitude.

Clouds are composed mainly of water in all its states : vapor, liquid (i.e. small droplets) and solid (i.e. snow and ice crystals). Clouds also need "nuclei" particles to form, like dust or aerosol molecules : despite large humidity quantities in the air, if there is no nuclei (i.e. the air is too clean), there won't be any cloud.

Various cloud types and their respective altitudes

Every Altitude its Cloud

Most clouds are very compact and fit in a "tiny" altitude range, although the storm cumulo-nimbus clouds span the entire range of altitudes, some of them going up to 15 kilometers (!!).

The scales involved in these clouds vary dramatically. Thin clouds are easily traversed by light while thick clouds are mostly reflective. Light scattering in clouds is really important because of the scales spanned by clouds: a photon doesn't travel in a straight line for very long when passing through the many hundreds of meters inside a cloud.

Several ways for modeling clouds

Please read Ken Perlin's excellent presentation so you get familiar with the different Perlin noises.

Some are great for introducing regular randomness (plain noise), some others are good for clouds (sum 1/f(noise)) while still others are good for terrain modeling (sum 1/f(|noise|)) and are given funny names like "ridged multifractal" or "fractional Brownian motion".

Once you have your basic noise function, it's all a matter of intelligently combining several noises at several amplitudes and frequencies.
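As an illustration, here is a minimal Python sketch of the two classic octave sums. The `hash_noise` helper is a crude deterministic stand-in I made up for a real (smooth) Perlin noise; only the octave-combination logic matters here:

```python
import math

def hash_noise(x, y):
    # Cheap deterministic stand-in for Perlin noise, returning values in (-1, 1).
    # Real Perlin noise is smooth; this helper only illustrates the octave sum.
    n = math.sin(x * 12.9898 + y * 78.233) * 43758.5453
    return 2.0 * (n - math.floor(n)) - 1.0

def fbm(x, y, octaves=4):
    # fBm: sum of noise octaves, each with doubled frequency and halved amplitude.
    total, amplitude = 0.0, 1.0
    for _ in range(octaves):
        total += amplitude * hash_noise(x, y)
        amplitude *= 0.5
        x *= 2.0
        y *= 2.0
    return total

def turbulence(x, y, octaves=4):
    # Turbulence: same sum but on |noise|, giving the "cauliflower" look.
    total, amplitude = 0.0, 1.0
    for _ in range(octaves):
        total += amplitude * abs(hash_noise(x, y))
        amplitude *= 0.5
        x *= 2.0
        y *= 2.0
    return total
```

With 4 octaves the amplitudes sum to 1.875, so both functions stay bounded; turbulence is additionally non-negative, which is what produces the puffy, cauliflower-like ridges.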


Alto/Cirro cumulus & stratus

These are thin vaporous high-altitude clouds. They are the easiest and most commonly simulated clouds as they are very thin and can be easily mapped to a plane, as shown on the image below.

For alto/cirro cumulus forms, there is a clear distinct "cauliflower" pattern that Perlin noise can easily achieve using the Turbulence pattern, as shown below :

Perlin noise : Turbulence variation.

For a great introduction to the complex light scattering within clouds, and the rendering of thin cloud slabs, I invite you to read the (quite technical) paper "Real-time realistic illumination and shading of stratiform clouds" by Bouthors et al.

You're okay sticking to a 2D cloud plane (even faking thickness through parallax mapping), and you can even simulate clouds up to the already thick stratocumulus, until you need to display distinct clumps and holes or fly through the clouds. From that point on, you need to render truly 3D clouds.


Nimbostratus, Cumulus & Cumulo-nimbus

These are the big guys (and my favorites)!

I remember being baffled by the foundation paper "A Simple, Efficient Method for Realistic Animation of Clouds" by Dobashi et al. in 2000, where these guys managed to render impressively realistic clouds using particles while the clouds were modeled using a cellular automaton. This paper served as a base for Mark Harris and his well known clouds.

MarkHarris.png

When it comes to imitating reality, this Japanese team is really excellent! They are also responsible for the foundation paper "Display Method of the Sky Color Taking into Account Multiple Scattering" about the sky model later implemented in real time by Sean O'Neil. On the downside though, like me, their English sucks and I remember re-reading the same sentence 10 times to try and understand what they meant.


Best way yet : CFD

Computational Fluid Dynamics is, in my opinion, still the best way to achieve great cloud simulation. I never actually tried large-scale 3D CFD simulations, but the few I've experimented with allowed me to create a pretty convincing smoke plume. And as stated earlier, clouds are nothing more than smoke in slow motion. All you have to do is create a sufficiently large 3D playground, model wind, hot air rising and cool air falling, and advect water droplets and various parameters along the resulting velocity fields. I'm pretty sure this would create really great clouds!

The only problems are :

  • Memory => I'm not sure we're ready yet to experiment on volumes the size of a city for real-time applications
  • Scalability => Splitting the computation across multiple GPU threads. I bet Compute Shaders and CUDA could help with that, but I don't know anything about them. A good thing though is that clouds evolve slowly, so updating a new cloud state every 30 seconds is quite okay (except for dramatic fast-forward effects that might occur in games).
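I never wrote that large-scale solver, but the "advect parameters along the velocity field" step can be sketched with standard semi-Lagrangian advection. Here it is in 1D, with made-up names, purely illustrative:

```python
def advect(field, velocity, dt):
    # Semi-Lagrangian advection: for each cell, trace back along the velocity
    # field and sample the quantity there (linear interpolation, clamped at
    # the grid borders). Unconditionally stable, which is why real-time fluid
    # solvers love it.
    n = len(field)
    result = []
    for i in range(n):
        src = i - velocity[i] * dt          # back-trace, in cell units
        src = max(0.0, min(n - 1.0, src))   # clamp to the grid
        i0 = int(src)
        i1 = min(i0 + 1, n - 1)
        t = src - i0
        result.append((1.0 - t) * field[i0] + t * field[i1])
    return result
```

A density spike advected by a uniform velocity of 1 cell per unit of time simply shifts downstream by one cell per step, as expected.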


Many tests through the years

Of course, I experimented with cloud particles myself at that time and obtained some nice results, but these clouds always kinda lacked the fluffiness I was after. This much I learned: you can't really get great fluffy clouds with particles. First because of their disjoint nature (you're not using a mesh or tracing through a continuous volume here, merely splatting flat stuff together), then because the lighting model fails to account for important scattering events. Basically, with real-time particle models, you can take into account extinction and 0- or single-scattering events, so you get the main features of a cloud, but you lack the general "denseness" and solid feeling of cumulus and cumulo-nimbus clouds.


I tried many things, with more or less satisfying results :

  • Soft particles with precomputed Spherical Harmonics lighting: the lighting was quite okay, but the clouds obviously looked too "solid".
  • Marching cubes and other "mesh clouds" rendering: my intention was to render these meshes into several depth buffers seen from front and back, and to "subtract" the depths to obtain the thickness of the mesh at a particular point in space (I first had this idea in 1995 and managed to have one of the first volumetric lights ever with my software renderer; I never used it in a demo though, and the effect has since been re-used countless times). This could have worked, but computing only the absolute thickness of an object, without accounting for holes and concavities, always yielded poor results where light seemed to "pour through the volume": although you got interesting god-ray effects, the rendering really didn't look anything like a cloud.
  • Mega-particles: I first encountered this "technique" about 6 months ago (around November 2010) and, although I couldn't see how the effect worked (it looked like quite a "I bullshit you" effect), I decided to give it a try. This yielded interesting results, but not because of the effect itself, as it's mainly a post-process, and rendering a 3D cloud as if it were a 2D plane gives nasty "scrolling artefacts" when you move the camera. No, the main rendering achievement here came from the fact that it was the first time I really correctly used phase functions AND a now quite old technique called "Deep Shadow Maps". This is really an interesting read!


For cloud modeling, I wrote a "recursive sphere aggregation" tool where you drew your base cloud shape using big spheres; the software then created smaller "level 1" spheres on the surface of these "root spheres" and simulated auto-spacing, then recursively created smaller "level 2" spheres and simulated again, and again. I managed to obtain nice cloud shapes with that software, but the rendering was awful. I never managed to get nice rendering AND modeling at the same time...


Until finally...

It's only recently that I finally came up with a proper way of rendering nice volumetric clouds: ones that you can change dynamically and view from anywhere, even from inside!

To do this, there is no other solution than rendering a 3D volume: a volume that tells you, for each position in space, whether or not there is a density of "cloud material". It's only recently that using 3D textures has become quite tractable, and even more recently that shaders have allowed you to perform ray-marching steps, one of the first examples of this being parallax mapping, which is simply "terrain marching" to find the intersection of a ray with a height field.

My friend David was the first (a couple of years ago already) to come up with super nice real-time clouds by ray-marching a volume, as seen in the picture below. His technique is a mix of a mesh rendered into a volume texture, coupled with pre-computed spherical harmonics for lighting and various real-time 2D noises for shape variation, and it's brilliant!

U2Clouds.jpg


Rendering

You don't change a winning team : ray-marching a volume texture is the key to realistic 3D clouds.


Ray-marching

The absolute key to nice ray-marching is the number of ray-marching steps. The more the better!

For a nice rendering I needed about 128 steps, but you can't afford a shader with a 128-step loop over a 1024x768 surface, so I exploited another great feature of clouds: their low-frequency nature. The fact that clouds are large and only display small changes in shape made me consider rendering not a 1024x768 surface but a downscaled version of 256x192 (downscaled by a factor of 4 on each axis). Ray-marching 128 steps on such a small surface doesn't change the impression that you are looking at clouds, and helps you maintain a steady and fast frame rate.

On top of that, the downscale factor gives a very flexible way to scale the time used by the effect: for low-end machines you can downscale by 8, for very high-end machines by 2. It's all a matter of experimenting with various hardware and using a scale factor appropriate for both the machine and the time you want to allocate to rendering clouds (as in most games, you can't spend your entire frame time on them).


So the first trick is this : render low, render fast !


Review on Light Behavior

First, let's review the basics of light behavior within a participating medium. I can't recommend enough the great "Realistic Image Synthesis Using Photon Mapping" by Henrik Wann Jensen, which I refer to as soon as I hit an obstacle or forget something (which is, I confess, most of the time).

To be quick: light travels in straight lines unless it encounters an obstacle, in which case it's either absorbed or scattered in some direction. The scattering direction is quite arbitrary and depends on many parameters like surface smoothness or, in the case of a participating medium (i.e. water, smoke, dust), the type and size of the particles the light is traversing:

  • Small air molecules won't absorb light very much except over very long distances, like from the ground to the upper atmosphere, and they scatter it differently depending on the wavelength (which is the reason why the sky is blue). The theory for light scattering by gas molecules is Rayleigh scattering.
  • Large aerosols like dust and microscopic water droplets will rather absorb (i.e. smoke or dust) or reflect/refract (i.e. water droplets) light, and will do so over much smaller distances than air molecules. The theory for light scattering by large aerosol particles is Mie scattering.


Light going through a participating medium will then suffer 2 things:

  • Absorption. Light is absorbed and transformed into something else, like heat. The amount of absorption suffered by light over a distance is given by <math>\sigma_a</math>
  • Scattering. Light is reflected in any direction. The amount of scattering suffered by light over a distance is given by <math>\sigma_s</math>

The combination of absorption 'and' scattering is called the 'extinction'. The amount of extinction suffered by light over a distance is given by <math>\sigma_t = \sigma_a + \sigma_s</math>, the extinction coefficient.

Extinction

Extinction is multiplicative by nature : you constantly lose energy as you advance inside the medium and light extinction is given by :

<math>L(x,\vec\omega) = e^{-\tau(x,x+\vec\omega\Delta_x)} L(x+\vec\omega\Delta_x,\vec\omega)</math>
<math>\tau(x,x^\prime) = \int_x^{x^\prime}\sigma_t(s)\, ds</math>

where :
<math>x</math> denotes the current position
<math>\vec\omega</math> denotes the view direction
<math>\Delta_x</math> represents the distance the light marched through the medium
<math>\tau(x,x^\prime)</math> is called the optical depth. If your medium has a constant density everywhere, then the optical depth is simply: <math>\tau(x,x^\prime) = \sigma_t \left| x - x^\prime \right|</math>
<math>L(x,\vec\omega)</math> is the energy that reaches the position <math>x</math> in the direction <math>\vec\omega</math> and is called the 'radiance'


As an example, if your extinction coefficient is constant in your medium, it's equivalent to write: <math>e^{-\sigma_t\Delta_x} = e^{-\sigma_t\frac{\Delta_x}{2}}\cdot e^{-\sigma_t\frac{\Delta_x}{2}}</math>

As a result, we see that it's pointless to trace more steps than necessary when no change in the medium's properties exists.
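You can check this split numerically with plain Beer-Lambert extinction (a trivial Python sketch):

```python
import math

def transmittance(sigma_t, distance):
    # Beer-Lambert extinction e^(-sigma_t * d) through a homogeneous medium.
    return math.exp(-sigma_t * distance)

# One full step equals the product of two half steps:
full  = transmittance(0.5, 2.0)
split = transmittance(0.5, 1.0) * transmittance(0.5, 1.0)
```

This is exactly why a single long step is enough in a homogeneous region: splitting it further changes nothing.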


Extinction is one of the 2 results that come out of the ray-marching process. As light extinction varies slightly with wavelength, you can store the extinction factor as an RGB vector (although, as we'll see, a single luminance value is often enough).


In-Scattering

In-scattering is the amount of light that gets 'added' to the existing light along the way. Of course we lose some energy because we hit some particles, but we also gain some that bounced off nearby particles. It's all a game of statistics, really.

The expression for in-scattering is much more complicated than extinction and is the main reason why lighting is a difficult process :

<math>L(x,\vec\omega) += \int_{x}^{x+\Delta_x\vec\omega}e^{-\tau(x,x^\prime)}\sigma_s(x^\prime)\int_\Omega p(x^\prime,\vec\omega,\vec{\omega^\prime})L_i(x^\prime,\vec{\omega^\prime}) \, d\vec{\omega^\prime} \, dx^\prime</math>

Wow that's a big one ! (but at least, notice the not very mathy "+=" operator meaning this part is added to the previously seen radiance, attenuated by extinction)

To make things simpler, we can rewrite it this way :

<math>L(x,\vec\omega) += \int_{x}^{x+\Delta_x\vec\omega}e^{-\tau(x,x^\prime)}\sigma_s(x^\prime) I(x^\prime) \, dx^\prime</math>

Basically, this means that along a path, light gets decreased by extinction (the <math>e^{-\tau(x,x^\prime)}</math> part) but also gets added from other directions (the <math>I(x^\prime)</math> part).

<math>I(x^\prime)</math> is also called the 'irradiance': it is the previously seen radiance integrated over all incoming directions, so it loses its directional quality.


So, to sum up: over a very short step <math>\Delta_x</math> where the medium properties are constant, the added in-scattered radiance, or irradiance, is:

<math>L(x,\vec\omega) += e^{-\sigma_t\Delta_x} \sigma_s(x) \Delta_x I(x)</math>


The same way extinction was multiplicative by nature, in-scattering is additive by nature.


In-scattering is the second of the 2 results that come out of the ray-marching process. Although extinction is only slightly affected by wavelength and could well be stored as a single factor, in-scattering strictly needs to be stored as an RGB vector (that is, if you want your sky to be blue by day and orange at sunset; who knows, that might come in handy).


Combining Extinction & In-Scattering

Again, for a very short step <math>\Delta_x</math> where the medium properties are constant, we can finally write the radiance :

<math>L(x,\vec\omega) = \sigma_s(x) \Delta_x I(x) + e^{-\sigma_t\Delta_x} L(x+\Delta_x\vec\omega,\vec\omega)</math>

We see that it all comes down to :

  • Perform extinction of the radiance at the start of the marching step (i.e. <math>e^{-\sigma_t\Delta_x} L(x+\Delta_x\vec\omega,\vec\omega)</math>)
  • Add in-scattered irradiance along the marching step (i.e. <math>\sigma_s(x) \Delta_x I(x)</math>)

You can view the light ray-marching process as gaining some energy (i.e. in-scattering, an addition) then marching a little and losing some energy (i.e. extinction, a multiplication).

What I usually do is accumulate the extinction and in-scattering as 2 distinct RGB values. The pseudo-code for ray-marching goes something like this:

void  RayMarch( float3 _In, float3 _Out, float3& _Extinction, float3& _InScattering )
{
  float3  Step = (_Out - _In) / N;             // Our march step (N steps cover the whole ray)
  float   StepSize = length( Step );           // Size of a unit step
  float3  CurrentPosition = _Out - 0.5 * Step; // Start at the end of the ray

  _Extinction = 1.0;   // Full transmittance: we have not entered the medium yet...
  _InScattering = 0.0; // ...and no in-scattered energy yet
  for ( int StepIndex=0; StepIndex < N; StepIndex++ )
  {
    // Get extinction & scattering coefficients at the current position
    float3 Sigma_t = GetExtinctionCoeff( CurrentPosition );
    float3 Sigma_s = GetScatteringCoeff( CurrentPosition );

    // Compute the extinction for our single step
    float3 CurrentStepExtinction = exp( -Sigma_t * StepSize );

    // Perform extinction: transmittances multiply along the ray
    _Extinction *= CurrentStepExtinction;

    // Perform in-scattering: the energy gathered so far lies behind the current
    // step, so it gets attenuated by the step's extinction before we add the
    // locally in-scattered irradiance
    _InScattering = CurrentStepExtinction * _InScattering + Sigma_s * StepSize * ComputeIrradiance( CurrentPosition );

    // March one step backward
    CurrentPosition -= Step;
  }
}

Note that we ray-march backward, starting from the rear and ending at the camera. By slightly rewriting the code, it's easy to march forward instead, but I find the backward method more intuitive.
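To check the marcher, you can run a scalar transcription of it on a homogeneous medium, where the in-scattering integral has the closed form (sigma_s / sigma_t) * I * (1 - e^(-sigma_t * D)). This is my own Python sketch (constant coefficients and constant irradiance assumed, scalars instead of float3):

```python
import math

def ray_march(depth, sigma_t, sigma_s, irradiance, steps):
    # Backward march, applying at every step the recurrence
    # L = sigma_s * dx * I + exp(-sigma_t * dx) * L_behind
    dx = depth / steps
    extinction = 1.0     # accumulated transmittance along the whole ray
    in_scatter = 0.0     # accumulated in-scattered energy
    for _ in range(steps):
        step_ext = math.exp(-sigma_t * dx)
        extinction *= step_ext
        # Energy gathered so far lies behind this step, so attenuate it,
        # then add the locally scattered irradiance
        in_scatter = step_ext * in_scatter + sigma_s * dx * irradiance
        # (constant medium: no position update needed in this toy version)
    return extinction, in_scatter
```

With 128 steps over 10 units of depth, the accumulated extinction matches e^(-sigma_t * D) exactly (it's a product of exponentials), and the in-scattering lands within about 1% of the analytic value.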


As I said earlier, it's quite okay to discard the RGB nature of extinction and store it as a single luminance factor. If you do so, you end up with an RGB vector of in-scattered energy and a single extinction value that you can store as Alpha.

You can then combine the resulting RGBA image with any existing buffer (i.e. your pre-rendered scene) with the pre-multiplied alpha blend mode :

TargetColor = TargetColor * SourceAlpha + SourceColor

Which has the same effect as writing :

NewColor = BackColor * Extinction + InScattering
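The equivalence between those two lines is trivial to check with numbers (the values below are made up):

```python
def blend_premultiplied(target, source, source_alpha):
    # Pre-multiplied alpha blend mode: Target = Target * SourceAlpha + Source
    return target * source_alpha + source

# Interpreting Source as in-scattering and SourceAlpha as extinction:
back_color    = 0.6   # pre-rendered scene luminance behind the cloud
extinction    = 0.3   # transmittance through the cloud
in_scattering = 0.25  # energy scattered toward the camera

composited = blend_premultiplied(back_color, in_scattering, extinction)
```

So the fixed-function blender computes exactly NewColor = BackColor * Extinction + InScattering, no extra pass needed.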


The missing functions

In the pseudo-code above, I used 3 mysterious functions :

  • GetExtinctionCoeff()
  • GetScatteringCoeff()
  • ComputeIrradiance()


Well, if you can read, they're not that mysterious... The first 2 retrieve the extinction/scattering coefficients at a given position. If you modeled your clouds from a density function <math>\rho(x,y,z)</math> then :

<math>\sigma_t = \rho(x,y,z)\, \Sigma_t</math>
<math>\sigma_s = \rho(x,y,z)\, \kappa\, \Sigma_t</math>

where :
<math>\Sigma_t</math> is the global extinction factor that is modulated by density
<math>\kappa</math> is the scattering-to-extinction ratio, or albedo, that should always be < 1 to be physically accurate


In my case, the density is obtained from Perlin noise stored in a 3D texture :

float GetDensity( float3 _Position )
{
  float RawDensity = 0.0;
  float Amplitude = 1.0;
  for ( int OctaveIndex=0; OctaveIndex < OCTAVES_COUNT; OctaveIndex++ )
  {
    RawDensity += Amplitude * Perlin( _Position );  // Fetch Perlin noise value from 3D texture
    Amplitude *= 0.5;
    _Position *= 2.0;  // Double the frequency
  }   
  
  return saturate( RawDensity - DensityOffset );
}


Light Diffusion Method

I first tried to ray-march a 3D texture of 128x32x128. I used a 2-tiered process :

  • the first process was a "reaction-diffusion" of light in a RGBA16F 3D texture
  • the second process was displaying the diffusion texture through ray-marching


What is this light diffusion thing, you might ask ?

Well I decided to split light into 2 parts : directional and isotropic.

If you read the paper by Bouthors et al. that I mentioned earlier about lighting stratiform clouds, you will find interesting facts about light scattering within clouds. The main one is that clouds have a very high albedo of about 90%, meaning they absorb almost no light: most of the light is scattered. And among these scattering events, 50% occur in the original light direction. This means there is a very strong forward-scattering, or directional, component in light going through a cloud.

The remaining 50% of the light is, roughly, isotropically scattered.
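One way to model that split is a two-lobe phase function: a strong forward lobe carrying about half the energy, plus an isotropic remainder. Below is a Python sketch using a Henyey-Greenstein lobe as the forward part; the HG substitution and the g = 0.9 value are my assumptions, not measured Mie data for real droplets:

```python
import math

def henyey_greenstein(cos_theta, g):
    # Classic Henyey-Greenstein lobe, normalized over the sphere.
    return (1.0 - g * g) / (4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def watoo_phase(cos_theta, g=0.9):
    # Half the energy in a strong forward lobe, half scattered isotropically.
    return 0.5 * henyey_greenstein(cos_theta, g) + 0.5 / (4.0 * math.pi)
```

Both lobes are individually normalized, so the 50/50 mix still integrates to 1 over the sphere, and the forward direction dominates by a large factor.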

This is what I call the Watoo-Watoo phase function (from an old cartoon I used to watch when I was a kid):

WatooWatoo.png


You need to create a double-buffered RGBA16F 3D texture of, say, 128x32x128 texels. Red will contain the directional light component, Green the isotropic light component, and Blue the noise density; Alpha is not used. These buffers are initialized to 0, except for the blue component, which can be initialized with the initial noise density (although you can also use the shader to compute the noise in real time).

The diffusion process then goes like this :

  • To propagate the directional component, we :
    • Read the current cell's RGB value (R=Directional G=Isotropic B=Density) => We call this RGB value <math>RGB_c</math>
    • Read the cell at P=CurrentPosition - LightDirection => We call that RGB value <math>RGB_d</math>
    • The new current cell's directional light component is the neighbor's directional energy attenuated by extinction:
<math>R^\prime_c = R_d\, e^{-\tau(\omega_d)}</math>

where:
<math>\tau(\omega_d)=\sigma_t\frac{(\rho_d+\rho_c)}{2}\Delta_x</math>
<math>\rho_d = RGB_d\cdot z = </math> cloud density one cell away in the light's direction (the blue component of <math>RGB_d</math>)
<math>\rho_c = RGB_c\cdot z = </math> cloud density of the current cell
<math>\Delta_x =</math> cell size (that we chose equal to 1 for simplicity)
<math>\sigma_t =</math> extinction coefficient

This corresponds to the directional energy one cell away toward the light, attenuated by extinction, extinction being the combination of absorption (almost 0 in a cloud, as seen earlier) and scattering.
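The directional propagation pass described above boils down to a scan through the grid along the light direction. Here is a 1D Python sketch of it (names and parameters are mine; the real thing is a shader pass over the 3D texture):

```python
import math

def propagate_directional(density, sigma_t, sun_intensity=1.0, dx=1.0):
    # March along the light direction: each cell receives the previous cell's
    # directional energy, attenuated by the optical depth computed from the
    # average density tau = sigma_t * (rho_prev + rho_cur) / 2 * dx.
    directional = []
    incoming = sun_intensity          # energy entering the volume
    prev_rho = 0.0                    # vacuum outside the volume
    for rho in density:
        tau = sigma_t * 0.5 * (prev_rho + rho) * dx
        incoming *= math.exp(-tau)
        directional.append(incoming)
        prev_rho = rho
    return directional
```

Empty cells pass the sunlight through untouched, while dense cells carve an exponential falloff into the volume, which is exactly the self-shadowing you see on the lit side of a cumulus.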


The diffusion process within a bounded volume