This post is about a path tracing technique that sits between unidirectional path tracing and bidirectional path tracing.
For want of a better name, let’s call this two-way path tracing. It’s defined as follows:
- Trace eye rays, handle light source intersections and sample light sources explicitly
- Trace light rays, handle sensor intersections and sample sensors explicitly
- When computing weights for multiple importance sampling, take both tracing methods into account
So you can think of this technique as either:
- Unidirectional path tracing in both directions at once
- Bidirectional path tracing, but we only connect sub-paths if one of the sub-paths has one vertex
So why is this interesting? Because:
- Like unidirectional path tracing, you only need to track a fixed amount of state, regardless of maximum path length. This is potentially nice for GPU implementations, where you usually want to minimize memory traffic and keep a large number of paths in flight.
- You can efficiently multiple importance sample between forward and reverse paths, so you can get reduced variance compared to unidirectional path tracing for some types of scenes (e.g. caustics).
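To make the variance argument concrete, here is a toy one-dimensional sketch of multiple importance sampling with the balance heuristic (Python, purely illustrative: the integrand, pdfs and function name are my own inventions, standing in for the eye-path and light-path techniques):

```python
import math
import random

def mis_balance_demo(n_samples, seed=1):
    """Estimate the integral of f(x) = x^2 over [0, 1] (exact value 1/3)
    by combining two sampling techniques with the balance heuristic,
    the same weighting scheme used to combine eye and light paths."""
    rng = random.Random(seed)
    f = lambda x: x * x
    pdf_a = lambda x: 1.0      # technique A: uniform sampling
    pdf_b = lambda x: 2.0 * x  # technique B: pdf proportional to x
    total = 0.0
    for _ in range(n_samples):
        xa = rng.random()             # sample from technique A
        xb = math.sqrt(rng.random())  # inverse-CDF sample from technique B
        # Balance heuristic weight: w_i(x) = p_i(x) / (p_a(x) + p_b(x)).
        wa = pdf_a(xa) / (pdf_a(xa) + pdf_b(xa))
        wb = pdf_b(xb) / (pdf_a(xb) + pdf_b(xb))
        total += wa * f(xa) / pdf_a(xa) + wb * f(xb) / pdf_b(xb)
    return total / n_samples
```

Each technique alone is a valid estimator, but the weighted combination lets whichever technique samples a given region best dominate there, which is exactly the effect we want when combining eye and light paths.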
In this post I’d like to cover how to multiple importance sample between forward and reverse paths, and show some test images.
Multiple Importance Sampling
To recap Eric Veach’s thesis on bidirectional path tracing, if you have a path of $(k+1)$ vertices defined by:

$$\bar{x} = x_0\, x_1 \ldots x_k$$

with $x_0$ on a light source and $x_k$ on the sensor, and you wish to compute the ratio of probability densities

$$\frac{p_i}{p_s}$$

(where $p_i$ is the density of generating $\bar{x}$ with $i$ light sub-path vertices) for each of the $(s + t + 1)$ possible sampling techniques, then these are defined by Veach as follows:

$$\frac{p_1}{p_0} = \frac{P_A(x_0)}{P_{\sigma^\perp}(x_1 \to x_0)\, G(x_1 \leftrightarrow x_0)}$$

$$\frac{p_{i+1}}{p_i} = \frac{P_{\sigma^\perp}(x_{i-1} \to x_i)\, G(x_{i-1} \leftrightarrow x_i)}{P_{\sigma^\perp}(x_{i+1} \to x_i)\, G(x_{i+1} \leftrightarrow x_i)} \qquad (0 < i < k)$$

$$\frac{p_{k+1}}{p_k} = \frac{P_{\sigma^\perp}(x_{k-1} \to x_k)\, G(x_{k-1} \leftrightarrow x_k)}{P_A(x_k)}$$

where $P_A$ is a density with respect to surface area, $P_{\sigma^\perp}$ is a density with respect to projected solid angle, and $G$ is the geometry term.
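As a sanity check on these definitions, here is a small Python sketch (illustrative only; the array layout and names are my own, not from any renderer) that builds Veach’s adjacent ratios $p_{i+1}/p_i$ from per-edge direction pdfs and geometry terms. Multiplying all of them together telescopes to $p_{k+1}/p_0$, the full light-path density over the full eye-path density:

```python
def adjacent_ratios(pA_light, pA_sensor, fwd, rev, G):
    """Adjacent density ratios r[i] = p_{i+1} / p_i for a path x_0 .. x_k.
    pA_light  - area pdf of sampling x_0 on the light source
    pA_sensor - area pdf of sampling x_k on the sensor
    fwd[i] = P(x_i -> x_{i+1})  : projected solid angle pdf, light direction
    rev[i] = P(x_{i+1} -> x_i)  : projected solid angle pdf, eye direction
    G[i]   = G(x_i <-> x_{i+1}) : geometry term for edge i
    """
    k = len(G)  # number of edges; the path has k + 1 vertices
    r = [pA_light / (rev[0] * G[0])]  # p_1 / p_0
    for i in range(1, k):             # p_{i+1} / p_i for 0 < i < k
        r.append((fwd[i - 1] * G[i - 1]) / (rev[i] * G[i]))
    r.append((fwd[k - 1] * G[k - 1]) / pA_sensor)  # p_{k+1} / p_k
    return r
```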
These ratios allow us to weight the different sampling techniques to reduce variance. For example, here is an image of the Cornell box, but with one of the cubes replaced by a glass sphere.
Here is a debug image (click for full size) showing all the techniques used to compute the image when using bidirectional path tracing, weighted using these probability ratios (in this image, t increases from left to right, s+t increases from top to bottom):
Note that the left-hand image in each row is always zero, since we are rendering using a pinhole camera that cannot be hit by light sub-paths.
In two-way path tracing, we restrict ourselves to techniques where s < 2 or t < 2. If we adjust the bidirectional path tracing weights accordingly (using the same ratios) we get the following debug image:
To compute these weights, we only need to compute the following reduced set of ratios, regardless of path length:

$$\frac{p_1}{p_0}, \qquad \frac{p_k}{p_1}, \qquad \frac{p_{k+1}}{p_k}$$

The first and last are defined already, so let’s try to simplify the middle term from the original definition:

$$\frac{p_k}{p_1} = \prod_{i=1}^{k-1} \frac{p_{i+1}}{p_i} = \prod_{i=1}^{k-1} \frac{P_{\sigma^\perp}(x_{i-1} \to x_i)\, G(x_{i-1} \leftrightarrow x_i)}{P_{\sigma^\perp}(x_{i+1} \to x_i)\, G(x_{i+1} \leftrightarrow x_i)} = \frac{G(x_0 \leftrightarrow x_1)}{G(x_{k-1} \leftrightarrow x_k)} \prod_{i=1}^{k-1} \frac{P_{\sigma^\perp}(x_{i-1} \to x_i)}{P_{\sigma^\perp}(x_{i+1} \to x_i)}$$

As we can see, all the interior geometry terms cancelled!
This product can be computed iteratively as the path is grown, which leads to an efficient algorithm for building up the ratio $p_k/p_1$ by tracking a single scalar value as the path is constructed.
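For illustration, here is a Python sketch of that bookkeeping (names and array layout are my own, hypothetical). The first function accumulates the cancelled form with one scalar multiply per interior vertex, applying only the two boundary geometry terms at the end; the second multiplies out the adjacent ratios, geometry terms and all, as a reference:

```python
def ratio_pk_p1_incremental(fwd, rev, G):
    """p_k / p_1 via the cancelled form: one scalar multiply per interior
    vertex, with only the two boundary geometry terms applied at the end.
    fwd[i] = P(x_i -> x_{i+1}), rev[i] = P(x_{i+1} -> x_i),
    G[i] = G(x_i <-> x_{i+1}), for a path x_0 .. x_k with k = len(G)."""
    k = len(G)
    r = 1.0
    for i in range(1, k):
        r *= fwd[i - 1] / rev[i]  # direction pdf ratio at vertex x_i
    # Only the boundary geometry terms survive the cancellation.
    return r * G[0] / G[k - 1]

def ratio_pk_p1_reference(fwd, rev, G):
    """Reference: multiply out the adjacent ratios p_{i+1}/p_i for
    0 < i < k, interior geometry terms and all."""
    k = len(G)
    r = 1.0
    for i in range(1, k):
        r *= (fwd[i - 1] * G[i - 1]) / (rev[i] * G[i])
    return r
```

In a real tracer the per-vertex multiply would be folded into the path-extension loop, so the ratio is ready whenever a connection is made.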
Let’s start with a toy scene of diffuse, mirror or glass spheres, lit by a small sphere light. This is the scene rendered using unidirectional path tracing with 128 paths per pixel:
It’s pretty noisy, due to caustic paths being poorly sampled by path tracing (the light source is small, so eye paths are unlikely to hit it). Now let’s look at the same scene rendered using unidirectional light tracing, again with 128 paths per pixel:
Since we are using light tracing, caustic paths are well sampled, but directly viewed specular surfaces cannot be sampled at all!
What two-way path tracing allows us to do is combine these two techniques with multiple importance sampling, with much less overhead than full bidirectional path tracing. Here’s the same scene rendered using 64 two-way paths per pixel (so the same total number of paths as the previous images):
As we can see, most of the image is now cleaned up. However, you will notice that surfaces viewed indirectly through specular reflection/refraction do not benefit, because these surfaces are still only sampled with unidirectional path tracing. In fact, for many types of path (e.g. LSDSE paths), even moving to full bidirectional path tracing will not help, since there will still be only a single technique that generates the path (hitting the light source); you need to move to Metropolis techniques to reduce the variance.
Now let’s see two-way path tracing on a more complex scene. Here’s an image from my work-in-progress GPU two-way path tracer (this scene was inspired by Dietger’s awesome MLT scene on ompf):
The floors are diffuse, and the Buddha, dragon and bunny are Phong, mirror and glass respectively (although the glass doesn’t yet apply Beer’s law properly). The scene is lit by a small quad light.
Here’s an alternate angle to zoom in on some caustic paths:
I’ve not been very scientific, but here’s the same angle rendered with eye paths only (i.e. equivalent to unidirectional path tracing). Note the noisy caustics, and the fireflies around the dragon from specular paths.
And here’s the same angle rendered with light paths only, to show the additional sampling that light paths provide.
Conclusions and Future Work
I’ve shown how to efficiently multiple importance sample between unidirectional path tracing and light tracing, by building up a single scalar value for the ratio $p_k/p_1$ as the path is constructed. This can help to improve convergence whenever light tracing is a good sampling technique for the scene, such as for directly viewed caustics.
For future work, I’d like to combine this technique with Metropolis sampling, to better explore (forward and reverse) path space for more complex scenes.
I would also expect this technique to work well when the sensor can be intersected, since this is one of the light tracing techniques that we can now multiple importance sample with path tracing. If we take the Cornell box with sphere scene above, but render a lightmap for the floor (so that the floor itself acts as the sensor and can be intersected by light sub-paths), we get the following output:
If we render this with two-way path tracing, we get the following debug image, showing that all four techniques for each path length now produce non-zero samples: