## Characteristics of a BRDF

The BRDF of nylon viewed in the Disney BRDF Explorer. The cyan line represents the incoming light direction, the red peanut object is the amount of light reflected in the corresponding direction.

Almost all the information gathered here comes from reading and interpreting the great Siggraph 2012 talk about physically based rendering in movie and game production [1], but I've also read practically the entire literature about BRDFs since their first formulation by Nicodemus [2] in 1977!

### So, what's a BRDF?

As I see it, it's an abstract tool that helps us describe the macroscopic behavior of a material when photons hit it. It's a convenient black box, a huge multi-dimensional lookup table (3, 4, or sometimes even 5 or 6 dimensions when including spatial variations) that somehow encodes the amount of photons bouncing off the surface in a specific (outgoing) direction when coming from another specific (incoming) direction (and potentially, from another location).

### It comes in many flavours

A Bidirectional Reflectance Distribution Function or BRDF covers only a subset of the phenomena that happen when photons hit a material, and there are plenty of other kinds of BxDFs:

• The BRDF deals only with reflection, so we're talking about photons coming from outside the material and scattered back outside the material as well.
• The BTDF (Transmittance) deals only with the transmission of photons coming from outside the material and scattered inside the material (i.e. refraction).
• Note that the BRDF and BTDF only need to consider the upper or lower hemispheres of directions (which we call $\Omega$, or sometimes $\Omega_+$ and $\Omega_-$ if the distinction is required)
• The BSDF (Scattering) is the general term that encompasses both the BRDF and BTDF. This time, it considers the entire sphere of directions.
• Anyway, the BSDF, BRDF and BTDF are generally 4-dimensional as they make the (usually correct) assumption that both the incoming and outgoing rays interact with the material at a unique and same location.
• Also, the BSDF could be viewed as an incorrect term since it accounts not only for scattering but also for the other phenomenon happening to photons when they hit a material: absorption. It is because of absorption that, for any outgoing direction, the integral of the BRDF over all incoming directions is less than 1.
• The BSSRDF (Surface Scattering Reflectance) is a much larger model that also accounts for different locations for the incoming and outgoing rays. It thus becomes 5- or even 6-dimensional.
• This is an expensive but really important model when dealing with translucent materials (e.g. skin, marble, wax, milk, even plastic) where light diffuses through the surface to reappear at some other place.
• For skin rendering, it's an essential model otherwise your character will look dull and plastic, as was the case for a very long time in real-time computer graphics. Fortunately, there are many simplifications that one can use to remove 3 of the 6 original dimensions of the BSSRDF, but it's only recently that real-time methods were devised [3].

### First, what's the color of a pixel?

Well, a pixel encodes what the eye or a CCD sensor is sensitive to: it's called radiance.

Radiance is the radiant flux of photons per unit area per unit solid angle and is written as $L(x,\omega)$. Its unit is the Watt per square meter per steradian ($W.m^{-2}.sr^{-1}$).

• $x$ is the location where the radiance is evaluated, it's a 3D vector!
• $\omega$ is the direction in which the radiance is evaluated, it's also a 3D vector but it's normalized so it can be written as a couple of spherical coordinates $\langle \phi,\theta \rangle$.
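Since we'll keep switching between the vector form of $\omega$ and its spherical form $\langle \phi,\theta \rangle$, here's a minimal sketch of the conversion (the function names are mine, and the normal is assumed to be the z axis):

```python
import math

def to_spherical(w):
    """Normalized 3D direction -> spherical angles (phi, theta),
    with theta measured from the z axis (taken as the surface's normal)."""
    x, y, z = w
    theta = math.acos(max(-1.0, min(1.0, z)))  # clamp guards against fp drift
    phi = math.atan2(y, x)
    return phi, theta

def from_spherical(phi, theta):
    """Spherical angles (phi, theta) -> normalized 3D direction."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```

Going back and forth between the two forms is lossless as long as the direction stays normalized.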

The radiant flux of photons –or simply flux– is basically the amount of photons/energy per amount of time. And since we're considering a single CCD sensor element or a single photo-receptor in the back of the eye (e.g. cone):

• We only perceive that flux in a single location, hence the "per square meter". We need the flux flowing through an infinitesimal piece of surface (at least, the area of a rod or a cone, or the area of a single CCD sensor element).
• We only perceive that flux in a single direction, hence the "per steradian". We need the flux flowing through an infinitesimal piece of the whole sphere of directions (or at least the solid angle $d\omega$ covered by the cone or single CCD sensor element as shown in the figure below).

So that's what radiance is: the amount of photons per second flowing along a ray of solid angle $d\omega$ and reaching a small surface $dA$. And that's what is stored in the pixels of an image.

A good source of radiance is one of those HDR cube maps used for Image Based Lighting (IBL): each texel of the cube map represents a piece of the photon flux reaching the point at the center of the cube map. It encodes the entire light field around an object and, if you use the cube map well, your object can seamlessly integrate into the real environment where the cube map photograph was taken (thanks to our dear Paul Debevec). Ever noticed how movies before 1999 had poor CGI? Since his paper on HDR probes, it's a real orgy!

But IBL is also very expensive: ideally, you would need to integrate each texel of the cube map and dot it with your normal and multiply it by some special function to obtain the perceived color of your surface in the view direction.

And guess what this special function is?

Well, yes! It's the BRDF and it's used to completely describe the behavior of radiance when it interacts with a material. Any material...

## Mathematically

We're going to use $\omega_i$ and $\omega_o$ to denote the incoming and outgoing directions respectively. Each of these 2 directions is encoded in spherical coordinates by a couple of angles $\langle \phi_i,\theta_i \rangle$ and $\langle \phi_o,\theta_o \rangle$. These only represent generic directions: we don't care if it's a view direction or a light direction.

For example, for radiance estimates, the outgoing direction is usually the view direction while the incoming direction is the light direction. For importance estimates, it's the opposite.

Also note that we use vectors pointing toward the view or the light.

The integration of radiance arriving at a surface element $dA$, times $n.\omega_i$ yields the irradiance ($W.m^{-2}$):

$E_r(x) = \int_\Omega dE_i(x,\omega_i) = \int_\Omega L_i(x,\omega_i) (n.\omega_i) \, d\omega_i~~~~~~~~~\mbox{(1)}$

It means that by summing the radiance ($W.m^{-2}.sr^{-1}$) coming from all possible directions, we get rid of the angular component (the $sr^{-1}$ part).

Irradiance is the flux per unit surface (when it leaves the surface instead of arriving at it, it's called radiosity; I suppose you've heard of it). It's not very useful on its own because, as we saw earlier, what we need for our pixels is the radiance.
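Equation (1) is easy to check numerically. Here's a small sketch (names are mine) that evaluates the integral by a Riemann sum for a sky of constant radiance, using $d\omega = \sin\theta \, d\theta \, d\phi$:

```python
import math

def irradiance_uniform_sky(L=1.0, n_theta=256):
    """Riemann sum of equation (1) over the upper hemisphere for a sky of
    constant radiance L, with d_omega = sin(theta) d_theta d_phi and
    (n.omega_i) = cos(theta) when the normal is the z axis."""
    E = 0.0
    d_theta = (math.pi / 2) / n_theta
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        # The integrand doesn't depend on phi here, so the phi integral is 2*pi
        E += L * math.cos(theta) * math.sin(theta) * d_theta * 2 * math.pi
    return E
```

For $L = 1$ the sum converges to $\pi$, the well-known result $\int_\Omega (n.\omega_i) \, d\omega_i = \pi$.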

Intuitively, we can imagine that we need to multiply that quantity by a value that will yield back a radiance. This mysterious value has the units of per steradian ($sr^{-1}$) and it's indeed the BRDF.

### First try

So, perhaps we could include the BRDF in front of the irradiance integral and obtain a radiance like this:

$L_r(x,\omega_o) = f_r(x,\omega_o) \int_\Omega L_i(\omega_i) (n.\omega_i) \, d\omega_i$

Well, it can work in a few cases. For example, for a perfectly diffuse reflector (the Lambert model), the BRDF is a simple constant $f_r(x,\omega_o) = \frac{\rho(x)}{\pi}$ where $\rho(x)$ is called the reflectance (or albedo) of the surface. The division by $\pi$ is there to account for our "per steradian" need.
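We can verify that this $\pi$ makes the Lambertian model energy-conserving by integrating the constant BRDF against the cosine term over the hemisphere, with $d\omega_i = \sin\theta_i \, d\theta_i \, d\phi_i$:

$\int_\Omega \frac{\rho(x)}{\pi} (n.\omega_i) \, d\omega_i = \frac{\rho(x)}{\pi} \int_0^{2\pi} \! \int_0^{\pi/2} \cos\theta_i \sin\theta_i \, d\theta_i \, d\phi_i = \frac{\rho(x)}{\pi} \cdot \pi = \rho(x)$

So a Lambertian surface reflects exactly the fraction $\rho(x)$ of the light it receives; without the $\pi$, it would reflect $\pi$ times too much energy.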

This is okay as long as we don't want to model materials that behave in a more complex manner. Most materials certainly don't redistribute incoming radiance uniformly, regardless of the incoming direction! They redistribute radiance in special and strange ways...

For example:

• Many materials have a specular peak: a strong reflection of photons that tend to bounce off the surface almost in the direction perfectly symmetrical to the incoming direction (your average mirror does that).
• Also, many rough materials exhibit a Fresnel peak: a strong reflection of photons that arrive at the surface at grazing angles (fabrics are a good example of the Fresnel effect)

That makes us realize the BRDF actually needs to be inside the integral and become dependent on the incoming direction $\omega_i$ as well!

### The actual formulation

When we inject the BRDF into the integral, we obtain a new radiance:

$L_r(x,\omega_o) = \int_\Omega f_r(x,\omega_o,\omega_i) L_i(\omega_i) (n.\omega_i) \, d\omega_i~~~~~~~~~\mbox{(2)}$

We see that now $f_r(x,\omega_o,\omega_i)$ is dependent on both $\omega_i$ and $\omega_o$ and becomes much more difficult to handle than our simple Lambertian factor from earlier.

Anyway, we now integrate radiance multiplied by the BRDF. We saw from equation (1) that integrating without the BRDF yields the irradiance, whereas integrating with the BRDF yields a radiance, so it's perfectly reasonable to assume that the expression of the BRDF is:

$f_r(x,\omega_o,\omega_i) = \frac{dL_r(x,\omega_o)}{dE_i(x,\omega_i)}~~~~~~~~~~~~\mbox{(which is simply radiance divided by irradiance)}$

From equation (1) we find that:

$dE_i(x,\omega_i) = L_i(x,\omega_i) (n.\omega_i) d\omega_i~~~~~~~~~~~~\mbox{(note that we simply removed the integral signs to get this)}$

We can then finally rewrite the true expression of the BRDF as:

$f_r(x,\omega_o,\omega_i) = \frac{dL_r(x,\omega_o)}{L_i(x,\omega_i) (n.\omega_i) d\omega_i}~~~~~~~~~\mbox{(3)}$


The BRDF can then be seen as the infinitesimal amount of reflected radiance ($W.m^{-2}.sr^{-1}$) divided by the infinitesimal amount of incoming irradiance ($W.m^{-2}$), and thus has the final units of $sr^{-1}$.
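In practice, equation (2) is often evaluated by Monte Carlo integration. Here's a minimal sketch (names are mine) for the one case where we know the analytic answer: a Lambertian BRDF lit by a uniform sky, for which $L_r = \rho \, L_i$:

```python
import math, random

def reflected_radiance_mc(albedo=0.5, L_i=1.0, n_samples=200000, seed=1):
    """Monte Carlo estimate of equation (2) for a Lambertian BRDF
    (f_r = albedo/pi) lit by a uniform sky of radiance L_i.
    Incoming directions are sampled uniformly over the hemisphere
    (pdf = 1/(2*pi)); for that distribution, cos(theta) is itself
    uniformly distributed in [0,1] (Archimedes' hat-box theorem)."""
    rng = random.Random(seed)
    f_r = albedo / math.pi
    total = 0.0
    for _ in range(n_samples):
        cos_theta = rng.random()  # (n.omega_i) of the sampled direction
        total += f_r * L_i * cos_theta * (2 * math.pi)  # integrand / pdf
    return total / n_samples
```

With the defaults above, the estimate converges to $\rho \, L_i = 0.5$.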

## Physically

To be physically plausible, the fundamental characteristics of a real material BRDF are:

• Positivity, any $f_r(x,\omega_o,\omega_i) \ge 0$
• Reciprocity (a.k.a. Helmholtz principle), guaranteeing the BRDF returns the same value if $\omega_o$ and $\omega_i$ are reversed (i.e. view is swapped with light). It means that $f_r(x,\omega_o,\omega_i) = f_r(x,\omega_i,\omega_o)$
• Energy conservation, guaranteeing the total amount of reflected light is less or equal to the amount of incoming light. In other terms: $\forall\omega_o \int_\Omega f_r(x,\omega_o,\omega_i) (n.\omega_i) \, d\omega_i \le 1$

Although positivity and reciprocity are usually quite easy to ensure in physical or analytical BRDF models, energy conservation on the other hand is the most difficult to enforce!
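The energy conservation integral above can be checked numerically for any analytical model. Here's a sketch (names are mine) using a normalized Phong lobe with the common $\frac{n+2}{2\pi}$ normalization factor:

```python
import math

def directional_albedo(f_r, w_o, n_theta=128, n_phi=256):
    """Riemann sum of the energy conservation integral
    integral_over_hemisphere of f_r(w_o, w_i) * (n.w_i) d_omega_i,
    with the normal along the z axis."""
    total = 0.0
    d_t = (math.pi / 2) / n_theta
    d_p = (2 * math.pi) / n_phi
    for i in range(n_theta):
        t = (i + 0.5) * d_t
        for j in range(n_phi):
            p = (j + 0.5) * d_p
            w_i = (math.sin(t) * math.cos(p),
                   math.sin(t) * math.sin(p),
                   math.cos(t))
            total += f_r(w_o, w_i) * math.cos(t) * math.sin(t) * d_t * d_p
    return total

def phong_brdf(w_o, w_i, n=32.0):
    """Normalized Phong lobe f_r = (n+2)/(2*pi) * cos(alpha)^n, where alpha
    is the angle between w_i and the mirror reflection of w_o about z."""
    r = (-w_o[0], -w_o[1], w_o[2])  # mirror w_o about the normal
    cos_a = max(0.0, r[0]*w_i[0] + r[1]*w_i[1] + r[2]*w_i[2])
    return (n + 2) / (2 * math.pi) * cos_a ** n
```

With this normalization the integral evaluates to exactly 1 at normal incidence and decreases as $\omega_o$ tilts toward grazing angles (roughly like $\cos\theta_o$), so the lobe never reflects more energy than it receives.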

## BRDF Models

Before we delve into the mysteries of materials modeling, you should get yourself familiar with a very common change in variables introduced by Szymon Rusinkiewicz [4] in 1998.

The idea is to center the hemisphere of directions about the half vector $h=\frac{\omega_i+\omega_o}{\left\| \omega_i+\omega_o \right\|}$ as shown in the figure below:

This may seem daunting at first but it's quite easy to visualize with time: just imagine you're only dealing with the half vector and the incoming light vector (or the outgoing view vector, we don't care since they're interchangeable).

• The orientation of the half vector $h$ is given by 2 angles $\langle \phi_h,\theta_h \rangle$. These 2 angles tell us how to rotate the original hemisphere aligned on the surface's normal $n$ so that now the normal coincides with the half vector: they define $h$ as the new north pole.
• Finally, the direction of the incoming vector $\omega_i$ is given by 2 more angles $\langle \phi_d,\theta_d \rangle$ defined on the new hemisphere aligned on $h$.

Here's an attempt at a figure showing the change of variables:

We see that the inconvenience of this change is that, as soon as we get away from the normal direction, a part of the new hemisphere stands below the material's surface (represented by the yellow perimeter). It's especially true for grazing angles when $h$ is at 90° off of the $n$ axis: half of the hemisphere stands below the surface!

The main advantage, though, is that when the material is isotropic, $\phi_h$ has no influence on the BRDF (all azimuths yield the same value), so we only need to account for 3 dimensions instead of 4, significantly reducing the amount of data to store!
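To make the change of variables concrete, here's a small sketch of the forward mapping from $(\omega_i, \omega_o)$ to the half/difference angles (the function names are mine; the normal is assumed to be the z axis):

```python
import math

def normalize(v):
    n = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0]/n, v[1]/n, v[2]/n)

def rot_z(v, a):
    c, s = math.cos(a), math.sin(a)
    return (c*v[0] - s*v[1], s*v[0] + c*v[1], v[2])

def rot_y(v, a):
    c, s = math.cos(a), math.sin(a)
    return (c*v[0] + s*v[2], v[1], -s*v[0] + c*v[2])

def rusinkiewicz(w_i, w_o):
    """Map (w_i, w_o) to Rusinkiewicz's angles (theta_h, phi_h, theta_d, phi_d)."""
    h = normalize((w_i[0]+w_o[0], w_i[1]+w_o[1], w_i[2]+w_o[2]))
    theta_h = math.acos(max(-1.0, min(1.0, h[2])))
    phi_h = math.atan2(h[1], h[0])
    # Express w_i in the frame where h has become the new north pole
    d = rot_y(rot_z(w_i, -phi_h), -theta_h)
    theta_d = math.acos(max(-1.0, min(1.0, d[2])))
    phi_d = math.atan2(d[1], d[0])
    return theta_h, phi_h, theta_d, phi_d
```

A quick sanity check: for a perfect mirror configuration ($\omega_o$ symmetric to $\omega_i$ about the normal), $h$ coincides with $n$ so $\theta_h = 0$, and $\theta_d$ is simply the incidence angle; for retro-reflection ($\omega_i = \omega_o$), $\theta_d = 0$ and $\theta_h$ is the incidence angle.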

### BRDF From Actual Materials

Before writing about analytical and artificial models, let's review the existing physical measurements of BRDF.

There are a few existing databases of material BRDFs: we can think of the MIT CSAIL database containing a few anisotropic BRDF files, but the most interesting database of isotropic BRDFs is the MERL database from Mitsubishi, containing 100 materials with many different characteristics (a.k.a. the MERL 100).

Source code is provided to read back the BRDF file format. Each BRDF file weighs 33MB and contains 90x90x180 RGB values stored as double precision floating point values (90*90*180*3*sizeof(double) = 34,992,000 bytes ≈ 33MB).

The 90x90x180 values represent the 3 dimensions of the BRDF table: $\theta_h \in [0,\frac{\pi}{2}]$, the half-vector angle off the surface's normal, then $\theta_d \in [0,\frac{\pi}{2}]$ and $\phi_d \in [0,\pi]$, the difference angles used to locate the incoming direction.

As discussed earlier, since we're considering isotropic materials, there is no need to store values along a 4th dimension and the $\phi_h$ values can be safely ignored, thus saving a lot of room!
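For illustration, here's how I'd parse that layout in Python, following my reading of the reference readBRDF code: three little-endian int32 dimensions, then all the red values, then all the green, then all the blue, with per-channel scale factors applied at lookup time (the scale factors and index order below are the ones from that code, but double-check against the original source):

```python
import struct

# Per-channel scale factors used by the reference MERL lookup code
RED_SCALE, GREEN_SCALE, BLUE_SCALE = 1.0/1500.0, 1.15/1500.0, 1.66/1500.0

def read_merl(data):
    """Parse a MERL .binary BRDF from a bytes object: three int32 dimensions
    (theta_h, theta_d, phi_d resolutions), then dims[0]*dims[1]*dims[2]*3
    float64 values stored channel after channel (R plane, G plane, B plane)."""
    dims = struct.unpack_from('<3i', data, 0)
    n = dims[0] * dims[1] * dims[2]
    values = struct.unpack_from('<%dd' % (3 * n), data, 12)
    return dims, values

def lookup(dims, values, i_th, i_td, i_pd):
    """Nearest lookup by raw table indices; returns a scaled RGB triple."""
    n = dims[0] * dims[1] * dims[2]
    ind = i_pd + dims[2] * (i_td + dims[1] * i_th)
    return (values[ind] * RED_SCALE,
            values[ind + n] * GREEN_SCALE,
            values[ind + 2*n] * BLUE_SCALE)
```

Note that the real lookup code also warps $\theta_h$ non-linearly (a square-root mapping that concentrates samples near the specular peak) before computing the index; the sketch above only shows the raw memory layout.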

I wanted to speak of actual materials and especially of the Disney BRDF Explorer because its authors introduced a very interesting way of viewing the data present in the MERL BRDF tables.

Indeed, one way of viewing the 3D MERL table is to consider a stack of 180 slices (along $\phi_d$), each slice being 90x90 (along $\theta_d$ and $\theta_h$).

This is what the slices look like when we make $\phi_d$ change from 0 to 90°:

We can immediately notice that the most interesting slice is the one at $\phi_d = \frac{\pi}{2}$. We can also assume the other slices are just a warping of this unique, characteristic slice, but we'll come back to that later.

Another thing we notice in the slices with $\phi_d \ne \frac{\pi}{2}$ is the black texels. Remember the change of variables we discussed earlier? I told you the inconvenience of this change is that part of the tilted hemisphere lies below the surface of the material. Well, these black texels represent directions that are below the surface. And we see it gets worse for $\phi_d = 0$, where almost half of the table contains invalid directions. Indeed, the MERL database's BRDFs contain a lot (!!) of invalid data: about 40% of the table is useless, which is a shame for files that each weigh 33MB. Some effort could have been made by the Mitsubishi team to create a compressed format that discards useless angles, saving us a lot of space and bandwidth... Anyway, we're very grateful these guys made their database public in the first place!
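Out of curiosity, we can estimate that proportion of invalid entries ourselves by reconstructing $\omega_i$ and $\omega_o$ from each $(\theta_h, \theta_d, \phi_d)$ cell and testing whether either one points below the surface. This is only a sketch over a uniform grid (the real tables warp $\theta_h$ non-linearly, so the exact percentage differs a bit); $\phi_h$ is set to 0 since the materials are isotropic:

```python
import math

def invalid_fraction(n_th=45, n_td=45, n_pd=90):
    """Fraction of (theta_h, theta_d, phi_d) cells whose reconstructed
    omega_i or omega_o lies below the surface (z < 0), on a uniform grid."""
    bad = total = 0
    for i in range(n_th):
        th = (i + 0.5) * (math.pi / 2) / n_th
        for j in range(n_td):
            td = (j + 0.5) * (math.pi / 2) / n_td
            for k in range(n_pd):
                pd = (k + 0.5) * math.pi / n_pd
                # Difference vector in the h-aligned frame
                dx = math.sin(td) * math.cos(pd)
                dz = math.cos(td)
                # Rotate back by theta_h about the y axis to get omega_i.z;
                # omega_o is omega_i mirrored about h, which flips the sign
                # of the dx contribution
                wi_z = math.cos(th) * dz - math.sin(th) * dx
                wo_z = math.cos(th) * dz + math.sin(th) * dx
                if wi_z < 0 or wo_z < 0:
                    bad += 1
                total += 1
    return bad / total
```

On a uniform grid this lands in the neighborhood of the 40% figure quoted above.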

So, from now on we're going to ignore those slices at $\phi_d \ne \frac{\pi}{2}$ and only concentrate on the characteristic slice.

Here is what the "MERL 100" look like when viewing their characteristic slice:

Now let's have a closer look at one of these slices:

MERL database. Speak of Disney's interesting ThetaH / ThetaD images as material characteristics. Talk about spec/fresnel/diffuse/backlighting zones.

### Analytical models of BRDF

List from Disney's doc + generalized microfacet model + attempts of each model to recreate the characteristic zones from Disney's ThetaH / ThetaD images.

Lafortune lobes!

Some ideas and generalizations/ideas/explanations are worth mentioning.

From "Background: Physics and Math of Shading (Naty Hoffman)" I learned that:

• Metals completely absorb photons that are not reflected specularly: metals have no diffuse component.
• Moreover, metals usually have a colored specular reflection (due to RGB variations in the Fresnel reflectance) while dielectric materials have a uniform specular reflection that only needs a luminance encoding.
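As a tiny illustration of that last point, here's Schlick's approximation of the Fresnel reflectance, with a colored $F_0$ for a metal versus the uniform ~4% $F_0$ of a typical dielectric (the $F_0$ values below are rounded, commonly published approximations, not measured data):

```python
def schlick(f0, cos_theta):
    """Schlick's approximation of Fresnel reflectance, per RGB channel:
    F(theta) = F0 + (1 - F0) * (1 - cos(theta))^5."""
    return tuple(f + (1.0 - f) * (1.0 - cos_theta) ** 5 for f in f0)

# Rounded, commonly published F0 values (assumptions for illustration):
GOLD = (1.00, 0.71, 0.29)        # colored specular: RGB channels differ
DIELECTRIC = (0.04, 0.04, 0.04)  # uniform ~4% specular at normal incidence
```

At normal incidence ($\cos\theta = 1$) the reflectance is just $F_0$, which is why gold looks yellow, while at grazing angles ($\cos\theta \rightarrow 0$) every material, metal or dielectric, tends toward a white 100% reflectance.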