Specialization
Volumetric Fog
Introduction
This was a two-week specialization project where I got to decide and plan what I wanted to do on my own. Inspired by Playdead's critically acclaimed title Inside, I decided to recreate the volumetric lighting effect used in that game. There are four main parts to achieving this effect: mesh clipping, ray marching, dithering and temporal anti-aliasing. Since I only had a total of two weeks to finish this project, I focused on the two most important parts: ray marching and dithering.
Ray marching
Ray marching in a pixel shader is the process of stepping along a ray from a pixel's world position toward the camera and using each sample point along the way as data to shade the pixel. To achieve volumetric lighting, we need to calculate the percentage of sample points that end up in shadow. If we test each sample position against our shadow map and divide the number of samples in shadow by the total number of samples, we get a shadow percentage; one minus that percentage is the factor we multiply the pixel color with. The more samples end up in shadow, the darker the pixel gets. This is what makes the effect look like fog.
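To make the idea concrete, here is a minimal C++ sketch of that loop, written as ordinary CPU code rather than shader code. The names (Vec3, inShadow, directionTo, shadowRayMarch) are illustrative rather than taken from the project, and inShadow is only a stub standing in for a real shadow-map comparison. The function returns the lit fraction (one minus the shadow percentage) that the pixel color is multiplied with.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder for a real shadow-map lookup: returns true if the given world
// position is in shadow. In the actual shader this would sample the shadow map.
bool inShadow(const Vec3& worldPos)
{
    return false; // stub so the sketch compiles
}

// Normalized direction from one point to another.
Vec3 directionTo(const Vec3& from, const Vec3& to)
{
    Vec3 d = { to.x - from.x, to.y - from.y, to.z - from.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}

// March from the pixel's world position toward the camera and return the
// fraction of sample points that are lit; the pixel color is multiplied by
// this factor, so pixels whose rays pass mostly through shadow get darker.
float shadowRayMarch(const Vec3& pixelWorldPos, const Vec3& cameraPos,
                     int stepCount, float stepLength)
{
    Vec3 dir = directionTo(pixelWorldPos, cameraPos);
    Vec3 samplePos = pixelWorldPos;

    int litSamples = 0;
    for (int i = 0; i < stepCount; ++i)
    {
        if (!inShadow(samplePos))
            ++litSamples;

        // Advance one step along the ray toward the camera.
        samplePos.x += dir.x * stepLength;
        samplePos.y += dir.y * stepLength;
        samplePos.z += dir.z * stepLength;
    }
    return static_cast<float>(litSamples) / static_cast<float>(stepCount);
}
```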

I started by writing a very simple implementation that does not consider front-face to back-face sampling. Instead, it samples every fragment a thousand times with a small step length of five units. A thousand samples per pixel is detrimental to performance, as evidenced by the frame rate this produces, but there are ways to improve this.
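In terms of the sketch above, the naive version corresponds to something like this, where pixelWorldPos and cameraPos are placeholders for the per-pixel inputs:

```cpp
// 1000 samples, 5 units each: 5000 units of total ray distance per pixel,
// and 1000 shadow-map lookups per pixel.
float litFraction = shadowRayMarch(pixelWorldPos, cameraPos, 1000, 5.0f);
```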

Decreasing the step count to 50 and increasing the step length to 100 units, so that the total ray distance stays the same, improves the frame rate drastically but creates undesirable, circular banding. This is where we apply dithering to get rid of the banding.
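With the same sketch, the cheaper parameterization is just a different call:

```cpp
// 50 samples, 100 units each: the same 5000 units of total ray distance with
// far fewer shadow-map lookups, but the coarse steps cause circular banding.
float litFraction = shadowRayMarch(pixelWorldPos, cameraPos, 50, 100.0f);
```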
Dithering

Dithering is the process of adding noise to get rid of banding. If we apply a random offset to the step length, we have essentially dithered the result. By giving each ray a different length, adjacent rays check for shadows at different distances, even though they are located right next to each other. This is what breaks up the banding. It does make the image noisy, but the performance increase is apparent, and it gives us a somewhat acceptable result that also runs much faster.
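Building on the earlier sketch (and reusing Vec3, inShadow and directionTo), a dithered version could look like the following. The integer hash is just one stand-in for a per-pixel random value; a shader would more likely read a small noise texture or use interleaved gradient noise. The offset here is applied to the ray start rather than the step length itself, which decorrelates neighboring pixels in the same way.

```cpp
#include <cstdint>

// Hypothetical per-pixel hash returning a pseudo-random value in [0, 1).
float pixelJitter(uint32_t pixelX, uint32_t pixelY)
{
    uint32_t h = pixelX * 374761393u + pixelY * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return static_cast<float>(h & 0xFFFFFFu) / static_cast<float>(0x1000000u);
}

// Dithered variant of shadowRayMarch: the ray start is pushed forward by a
// random fraction of one step, so adjacent pixels test the shadow map at
// different distances and the circular banding breaks up into noise.
float ditheredShadowRayMarch(const Vec3& pixelWorldPos, const Vec3& cameraPos,
                             uint32_t pixelX, uint32_t pixelY,
                             int stepCount, float stepLength)
{
    Vec3 dir = directionTo(pixelWorldPos, cameraPos);
    float offset = pixelJitter(pixelX, pixelY) * stepLength;

    Vec3 samplePos = { pixelWorldPos.x + dir.x * offset,
                       pixelWorldPos.y + dir.y * offset,
                       pixelWorldPos.z + dir.z * offset };

    int litSamples = 0;
    for (int i = 0; i < stepCount; ++i)
    {
        if (!inShadow(samplePos))
            ++litSamples;
        samplePos.x += dir.x * stepLength;
        samplePos.y += dir.y * stepLength;
        samplePos.z += dir.z * stepLength;
    }
    return static_cast<float>(litSamples) / static_cast<float>(stepCount);
}
```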
Improving performance and quality
There are still ways to improve the quality and performance of this. They are the two steps I mentioned at the beginning of this article that were out of scope for this project: mesh clipping and temporal anti-aliasing. If we clipped the fog volume to the geometry of a spotlight's light cone, we could ray march at much smaller intervals and focus on just the light cone, because that is the only place where we need to render fog. We would not waste samples marching through the entire volume, as my example does. This would improve both performance and visual quality, because every sample is spent inside a small, focused space. The algorithm that Playdead used for this is a 3D variation of the Sutherland-Hodgman algorithm.
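For reference, the heart of a Sutherland-Hodgman style clipper is clipping a polygon against one plane at a time; a 3D light-cone clipper repeats a step like this for every clipping plane. This is only a generic sketch reusing Vec3 from above, not Playdead's code:

```cpp
#include <vector>

// Points where dot(normal, p) + d >= 0 are on the side that is kept.
struct Plane { Vec3 normal; float d; };

float signedDistance(const Plane& plane, const Vec3& p)
{
    return plane.normal.x * p.x + plane.normal.y * p.y + plane.normal.z * p.z + plane.d;
}

// Point where the edge a->b crosses the plane, given the signed distances of
// its endpoints.
Vec3 intersect(const Vec3& a, const Vec3& b, float da, float db)
{
    float t = da / (da - db);
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// One Sutherland-Hodgman step: clip a polygon against a single plane,
// keeping the part on the plane's positive side.
std::vector<Vec3> clipPolygonAgainstPlane(const std::vector<Vec3>& poly, const Plane& plane)
{
    std::vector<Vec3> result;
    for (size_t i = 0; i < poly.size(); ++i)
    {
        const Vec3& cur  = poly[i];
        const Vec3& next = poly[(i + 1) % poly.size()];
        float dCur  = signedDistance(plane, cur);
        float dNext = signedDistance(plane, next);

        if (dCur >= 0.0f)
            result.push_back(cur);                           // current vertex is inside
        if ((dCur >= 0.0f) != (dNext >= 0.0f))
            result.push_back(intersect(cur, next, dCur, dNext)); // edge crosses the plane
    }
    return result;
}
```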
The reason temporal anti-aliasing is so important for fog like this is that it has the unique property of smoothing out dithered noise. As I mentioned, we introduced a lot of noise to the image when we got rid of the circular banding, which was a fair trade-off, but if we applied TAA on top of that, it would almost dissolve the noise, making the image much cleaner.
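As a very reduced illustration of why TAA helps, each frame's result can be blended with an accumulated history buffer, so uncorrelated dither noise averages out over time. A real TAA pass also reprojects the history with motion vectors and clamps or clips it to avoid ghosting; blendFactor here is an illustrative value, not one taken from Inside.

```cpp
// Exponential blend of the accumulated history toward the current frame.
// With a small blendFactor, per-pixel dither noise is averaged over many
// frames and largely disappears, while the underlying fog signal remains.
Vec3 temporalBlend(const Vec3& history, const Vec3& current, float blendFactor)
{
    return { history.x + (current.x - history.x) * blendFactor,
             history.y + (current.y - history.y) * blendFactor,
             history.z + (current.z - history.z) * blendFactor };
}
```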