## Bidirectional Path Tracing in Participating Media

The framework for integration over path space described by Veach [1] can be extended to integrate over volumes ([3] and [4]), using the original rendering equation as a *boundary condition*. An alternative framework was also developed by Lafortune in [2]. In the approaches described by these papers, scattering events are sampled according to the cumulative scattering density along a ray.
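Sampling according to the cumulative scattering density along a ray has a closed form in a homogeneous medium: the free-flight distance has pdf \(\sigma_t e^{-\sigma_t t}\), which can be inverted analytically. A minimal sketch (homogeneous medium only; heterogeneous media need ray marching or Woodcock tracking):

```python
import math

def sample_scattering_distance(sigma_t, xi):
    """Invert the cumulative scattering density to sample a free-flight
    distance t with pdf sigma_t * exp(-sigma_t * t), given a uniform
    random number xi in [0, 1)."""
    return -math.log(1.0 - xi) / sigma_t

def distance_pdf(sigma_t, t):
    """Pdf of sampling distance t (a scattering event before any surface hit)."""
    return sigma_t * math.exp(-sigma_t * t)
```

If the sampled distance exceeds the distance to the nearest surface, the sample is treated as a surface interaction instead, with probability equal to the transmittance \(e^{-\sigma_t d}\).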

Here is some programmer art rendered using a brute force bidirectional path tracer that implements this approach, stopped at around 1000 samples per pixel.

Still noisy, but the volume caustics are working, which achieves my goals. Since this is brute force sampling (no MLT or manifolds) there are still issues with SDS paths, which show up as unexpected bright spots on the glass sphere. If I manage to resurrect MLT code for [5] I’ll post an update.

This post is an attempt to gather together the definitions and equations for the *weighted* path contribution and MIS probability ratios for this sampling technique, as I’ve not seen this written down all in one place (with all of the interior terms of the weighted path contribution cancelled etc). Where possible I’ve tried to use similar notation to [1], [3] and [4]. This post is unapologetically *maths-heavy*!

## Multiple Scattering

Experimenting with volume rendering and multiple scattering has been on my todo list for a long time, so recently I started trying to understand how scattering events fit into a bidirectional path tracer. I found surprisingly few references that deal with volume scattering in the Veach framework (or more accurately, extend the Veach framework to deal with this). Most useful to me so far were:

- Metropolis Light Transport for Participating Media by Mark Pauly, Thomas Kollig, Alexander Keller (EG 2000).
- Manifold Exploration: A Markov Chain Monte Carlo technique for rendering scenes with difficult specular transport by Wenzel Jakob, Steve Marschner (to appear in SIGGRAPH 2012).

It was quite quick to add support for vertices that represent scattering events, but it certainly wasn’t trivial to calculate pdf ratios for multiple importance sampling. I’m going to do a proper writeup of all the equations in the next post. For now, here are two quick test renders:

Image statistics: bidirectional path tracing with multiple importance sampling, multiple scattering in an (unbounded) isotropic medium with an isotropic phase function, max path length 16, approximately 500 spp.
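The isotropic phase function used for these renders is the simplest case: it equals its own pdf, \(1/4\pi\), and sampling it is just picking a direction uniformly over the sphere. A minimal sketch:

```python
import math

def sample_isotropic_phase(u1, u2):
    """Sample a direction uniformly over the sphere from two uniform [0,1)
    numbers. For the isotropic phase function, the phase function value and
    the sampling pdf are both 1/(4*pi)."""
    z = 1.0 - 2.0 * u1                      # cos(theta), uniform in [-1, 1]
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def isotropic_phase_pdf():
    return 1.0 / (4.0 * math.pi)
```

For anisotropic media the usual substitute is Henyey-Greenstein, which has a similarly simple analytic inversion.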

Once I get volume caustics working I will have a small party. And then get back to finishing the VPL post from April…

## Bidirectional Instant Radiosity

Bidirectional Instant Radiosity is the title of a paper by B. Segovia et al., which presented a new sampling strategy for finding virtual point lights (VPLs) that are relevant to the camera. The algorithm given for generating VPLs is:

- Generate \(N/2\) “standard” VPLs by sampling light paths (i.e. vanilla instant radiosity)
- Generate \(N/2\) “reverse” VPLs by sampling eye paths (compute their radiance using the \(N/2\) standard VPLs)

These \(N\) VPLs are then resampled into \(N^\prime\) VPLs by considering their estimated contribution to the camera. Finally the \(N^\prime\) resampled VPLs are used to render an image of the scene.
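The resampling step is standard resampled importance sampling: pick \(N^\prime\) VPLs with probability proportional to estimated camera contribution, then rescale each chosen VPL so the estimator stays unbiased. A sketch of that step, where `contribution` is a hypothetical estimate of a VPL's contribution to the camera:

```python
import random

def resample_vpls(vpls, contribution, n_prime, rng=random.Random(0)):
    """Resample n_prime VPLs from a candidate set. Each candidate is chosen
    with probability proportional to contribution(vpl), and the chosen VPL's
    intensity is divided by (n_prime * pdf) to keep the estimator unbiased.
    `vpls` is a list of (position, intensity) pairs."""
    weights = [contribution(v) for v in vpls]
    total = sum(weights)
    chosen = rng.choices(range(len(vpls)), weights=weights, k=n_prime)
    out = []
    for i in chosen:
        pos, intensity = vpls[i]
        scale = total / (n_prime * weights[i])  # 1 / (n_prime * pdf_i)
        out.append((pos, intensity * scale))
    return out
```

Note that with uniform contributions this reduces to plain subsampling with an \(N/N^\prime\) intensity scale, as expected.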

In this post I’ll describe how I think this approach can be generalised to generate VPLs using all the path construction techniques of a bidirectional path tracer. As usual I’m going to assume the reader is familiar with bidirectional path tracing in the Veach framework.

I should state that this is an *unfinished* investigation into VPL sampling. I’m going to describe the core idea and formally define the VPL “virtual sensor”, but proper analysis of the results will be part of a future post (and may well indicate that this approach is not advisable).

## Sampling Sun And Sky

In this post I will briefly cover how I implemented sampling of external light sources in a path tracing framework, concluding with an observation about sampling multiple external light sources that are non-zero over very different solid angles. I’m going to assume the reader is familiar with path tracing in the Veach framework.

My definition of an *external* light source, which I’ve also seen called an “infinite” light source since such lights are considered to be infinitely far away (and infinitely bright as a result), is as follows:

- Radiance always originates from outside of the scene bounds
- Radiance is a function of world space direction only (not sample position)

A simple example would be a cube map considered to be always centered at the sample point.
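The two defining properties suggest an interface where emitted radiance is a function of direction alone. A minimal sketch using a uniform dome as the simplest case (the class and method names here are hypothetical, just for illustration; a cube map lookup would replace the constant):

```python
import math

class UniformDomeLight:
    """Minimal external light: radiance depends only on world-space
    direction, never on the sample position."""
    def __init__(self, radiance):
        self.radiance = radiance

    def le(self, direction):
        # Constant over the sphere; a direction-indexed cube map
        # lookup would go here for an environment map.
        return self.radiance

    def pdf(self, direction):
        # Uniform sphere sampling: pdf wrt solid angle.
        return 1.0 / (4.0 * math.pi)
```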

## SketchUp Cities

Ray Tracey’s latest blog post has Brigade 2 renders of a nice-looking walled city scene created using Google SketchUp. The model came from this gem of a collection by “LordGood” (who evidently is a big Assassin’s Creed fan) hosted on Google 3D Warehouse.

Currently I only have a Blender exporter, and sadly the SketchUp-Collada-Blender path was producing garbage, but even the free version of SketchUp allows custom Ruby plugins. After a bit of hunting around I found this OBJ exporter Ruby plugin, which worked very well, and now I have much nicer test meshes than my bad Blender programmer art.

The above images are from my usually-being-refactored path tracer with Preetham (et al.) sun/sky. It doesn’t render as quickly as Brigade 2 (and I only have a lowly GTX 460 to render on). Yes, it’s all diffuse, I haven’t exported any of the textures, and there are no atmospheric terms, depth of field, shading normals, or remote-controlled Stanford bunny, but it’s nice to have some decent public domain data to use.

I’m slowly working on a Bidirectional Instant Radiosity post (hopefully using this scene) but it’ll have to wait until work is less mental.

## Virtual Point Light Bias Compensation

Nothing new here, just highlighting the 2004 paper Illumination in the Presence of Weak Singularities by Kollig and Keller. This paper presents a simple and elegant solution to the problem of clamping the geometry term when using virtual point lights for illumination.
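The core idea, as I read it, is to split the singular geometry term rather than simply clamp it: the VPLs evaluate the bounded part \(\min(G, b)\), and the clipped-off residual \(\max(G - b, 0)\) is recovered by a separate local estimator instead of being discarded (which is what plain clamping does, and what causes the bias). A sketch of the split, under the assumption of diffuse surface points:

```python
def geometry_term(cos_x, cos_y, dist):
    """Unclamped geometry term between two mutually visible surface points:
    singular as dist -> 0, which is the source of VPL fireflies."""
    return (cos_x * cos_y) / (dist * dist)

def split_geometry_term(g, bound):
    """Split G into a bounded part, safe to evaluate against VPLs, and a
    residual that a separate (e.g. short path-traced) estimator recovers,
    so no energy is lost to clamping."""
    bounded = min(g, bound)
    residual = g - bounded
    return bounded, residual
```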

## Projected Solid Angle Is Projected

There are two quantities flying around when writing a physically based renderer:

- Irradiance, which is power per unit area
- Radiance, which is power per unit area per unit **projected** solid angle

Any BSDF is the ratio between the two: if a patch receives unit irradiance from some incident direction, the BSDF gives the radiance emitted in each outgoing direction.

At no point is (non-projected) solid angle used as a measure for the patch in question, yet many references and implementations consider probability density relative to this measure. In this post I will argue that using **projected** solid angle directly is more natural.
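The two measures differ only by a cosine: \(d\sigma^\perp = \cos\theta \, d\sigma\), so pdfs convert by dividing out the cosine. One pleasant consequence is that cosine-weighted hemisphere sampling, which has pdf \(\cos\theta/\pi\) wrt solid angle, is simply *uniform* wrt projected solid angle. A tiny sketch:

```python
def pdf_solid_angle_to_projected(p_sigma, cos_theta):
    """Convert a pdf wrt solid angle into a pdf wrt projected solid angle.
    Since d(sigma_perp) = cos(theta) * d(sigma), densities transform the
    other way: p_perp = p_sigma / cos(theta)."""
    return p_sigma / cos_theta
```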

## Hybrid Bidirectional Path Tracing

I’d like to share my results from converting a CPU-only bidirectional path tracer into a CPU/GPU hybrid (CPU used for shading and sampling, GPU used for ray intersections). These results are a bit old… I posted them a while ago as a thread on ompf. I found out later that this thread had been cited in Combinatorial Bidirectional Path-Tracing for Efficient Hybrid CPU/GPU Rendering, so let me summarise it here.

## Now You’re Lighting With Portals

I hate dome lights. You always waste a ton of rays that are occluded by geometry, and the situation gets even worse when lighting indoor scenes with exterior dome lights!

So why not help your renderer out and place portals that, when hit, teleport to the dome light? Then instead of sampling the whole sky dome, we just sample the portals, and avoid sending rays where we know they will be occluded.

As an example, here’s the Sponza scene using an exterior (uniform) dome light, rendered using unidirectional path tracing with multiple importance sampling:

Lots of rays never manage to find the open roof, so we get plenty of noise. Now let’s replace the dome light with a portal that covers the open roof, then allow that to be sampled instead:

Noise is greatly reduced, for exactly the same number of rays.

The sampling algorithm is simple enough to implement in your GPU path tracer of choice: sample the portal and use the usual conversion between pdf wrt area (the portal) and pdf wrt solid angle (the dome):

\[P_\sigma = \frac{P_A \|\mathbf{v}\|^2}{\cos(\theta)} = \frac{P_A \|\mathbf{v}\|^3}{\mathbf{v}\cdot\mathbf{n}}\]

Where **v** is the vector from the target point to the sampled portal point, and **n** is the portal normal.
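In code, the conversion above is a few lines. A minimal sketch, assuming **n** is unit length and oriented so that \(\mathbf{v}\cdot\mathbf{n} > 0\):

```python
import math

def portal_pdf_solid_angle(p_area, v, n):
    """Convert a pdf wrt portal area into a pdf wrt solid angle at the
    target point: P_sigma = P_A * |v|^2 / cos(theta) = P_A * |v|^3 / (v.n).
    v: vector from the target point to the sampled portal point.
    n: unit portal normal, oriented so that dot(v, n) > 0."""
    d2 = v[0] * v[0] + v[1] * v[1] + v[2] * v[2]
    d = math.sqrt(d2)
    cos_theta = (v[0] * n[0] + v[1] * n[1] + v[2] * n[2]) / d
    return p_area * d2 / cos_theta
```

The resulting \(P_\sigma\) is what goes into the MIS weight against the BSDF sampling strategy, exactly as for any area light.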

## Two-Way Path Tracing

This post is about a path tracing technique that sits between unidirectional path tracing and bidirectional path tracing.

For want of a better name, let’s call this **two-way path tracing**. It’s defined as follows:

- Trace eye rays, handle light source intersections and sample light sources explicitly
- Trace light rays, handle sensor intersections and sample sensors explicitly
- When computing weights for multiple importance sampling, take both tracing methods into account

So you can think of this technique as either:

- Unidirectional path tracing in both directions at once
- Bidirectional path tracing, but we only connect sub-paths if one of the sub-paths has one vertex

So why is this interesting? Because:

- Like unidirectional path tracing, you only need to track a fixed amount of state, regardless of maximum path length. This is potentially nice for GPU implementations where you usually want to avoid hitting memory and have a large number of paths in flight.
- You can efficiently multiple importance sample between forward and reverse paths, so you can get reduced variance compared to unidirectional path tracing for some types of scenes (e.g. caustics).

In this post I’d like to cover how to multiple importance sample between forward and reverse paths, and show some test images.
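As a preview of the weighting, any given path can be generated by at most four strategies here: an eye path hitting the light, an eye path with explicit light sampling, a light path hitting the sensor, and a light path with explicit sensor sampling. The MIS weight for whichever strategy actually produced the path is then the usual balance heuristic over those candidate pdfs; a minimal sketch:

```python
def balance_heuristic_weight(pdf_this, pdf_all):
    """Balance-heuristic MIS weight for the strategy that generated a path,
    given the path pdfs of all strategies that could have generated it.
    In two-way path tracing, pdf_all has at most four entries: eye path
    hitting the light, eye path + light sampling, light path hitting the
    sensor, and light path + sensor sampling."""
    return pdf_this / sum(pdf_all)
```

The real work, of course, is computing those full-path pdfs consistently in both tracing directions, which is what the rest of the post covers.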