Rendering Algorithms

Introduction

Over the summer, I came across a computer graphics course from Dartmouth, CS 87/287 – Rendering Algorithms (https://cs87-dartmouth.github.io/Fall2022/). The professor posted all the lectures and assignments online. The class is usually taught over 10 weeks and includes a final project. Its aim is to teach modern rendering techniques by building a ray tracer. In my free time, I worked through the assignments and readings, relying on three resources: Peter Shirley's Ray Tracing series, Physically Based Rendering: From Theory to Implementation, and the course lecture slides. The professor provides a skeleton code base with non-core functionality, such as parsing .obj and .json files, already implemented. Each assignment builds on the previous one. Unfortunately, I am unable to share my code publicly since doing so would permit others to cheat. Part of each assignment was also writing a report detailing results; I will provide a GitHub Pages link to each of those reports for those interested in more detail. Throughout this post I've included renders, all of which were generated with my implementation of the Dart renderer.

Assignment 1 – Fundamentals

The first assignment focuses on the fundamentals. There are four main subtasks: implementing a camera model, transformations using matrices, intersection testing for a sphere, and basic material models.

A thin lens model is used for the camera. This basically means that the renderer acts like a pinhole camera and does not simulate any of the more complex lens effects that occur in compound lenses. It is possible, though, with this code framework to explore more complex camera models in the future.

I won’t discuss matrix transformations or intersection testing for a sphere, since I have already covered these topics in more detail in a previous project post. I would, however, like to briefly discuss intersection testing for triangles. Since models are composed of thousands, if not millions, of triangle primitives, it’s important to have an extremely efficient method of intersection testing. The professor recommends the Möller–Trumbore algorithm (https://en.wikipedia.org/wiki/M%C3%B6ller%E2%80%93Trumbore_intersection_algorithm). It breaks intersection testing into two parts. First, check if the ray direction is parallel to the triangle plane. Second, if they are not parallel, check, using barycentric coordinates, whether the intersection lies within the triangle. There are some more advanced mathematical concepts at play as well, but that is the basic idea, as I understand it.
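As a rough sketch of how those two checks fit together, here is a generic version of Möller–Trumbore (not the course's actual code; the Vec3 helpers are minimal stand-ins):

```cpp
#include <array>
#include <cmath>
#include <optional>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}

// Möller–Trumbore: returns the distance t along the ray if it hits the triangle.
std::optional<double> intersect(const Vec3& orig, const Vec3& dir,
                                const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    const double kEps = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < kEps) return std::nullopt;  // ray parallel to triangle plane
    double invDet = 1.0 / det;
    Vec3 t = sub(orig, v0);
    double u = dot(t, p) * invDet;                   // first barycentric coordinate
    if (u < 0.0 || u > 1.0) return std::nullopt;
    Vec3 q = cross(t, e1);
    double v = dot(dir, q) * invDet;                 // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0) return std::nullopt;
    double tHit = dot(e2, q) * invDet;
    if (tHit > kEps) return tHit;                    // hit in front of the ray origin
    return std::nullopt;
}
```

The barycentric coordinates (u, v) fall out of the same calculation for free, which is handy later for texture lookup and normal interpolation.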

Three material models, Lambertian, dielectric, and metal, are implemented for this assignment. The implementations come from Ray Tracing in One Weekend chapters 8-9. More complex material implementations are discussed later. For now, the important point is to use these materials to understand and check recursive ray generation.

Lambertian (left) and metal (right)
Jensen box scene with metal (left) and dielectric (right) spheres
The classic Shirley scene from Ray Tracing in One Weekend

Report assignment 01:

Assignment 2 – Structuring Scene Data with BVH

This assignment focuses on optimizing intersection testing. The most computationally intensive task performed by any ray tracer is scene intersection testing. Each time a ray is generated we must determine which primitive in the scene the ray first intersects, known as the closest intersection. Approached naively, this would mean testing every scene primitive for an intersection. Most ray tracers generate hundreds of thousands to millions of rays, and most modern scenes are composed of millions of primitives; this in turn leads to billions, if not trillions, of intersection tests.

This is where a concept known as a bounding volume hierarchy (BVH) is applied. By utilizing a BVH, scenes that would take days to render now take only seconds or minutes. A BVH structures scene data into a tree that can be traversed efficiently to determine the closest intersection, pruning whole subtrees at a time. The basic idea is to encapsulate geometry into separate bounding boxes, then perform intersection tests on those boxes. If a ray does not intersect a given box, then we know that the ray also does not intersect any geometry inside that box, and all of those intersection tests can be skipped.
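The box test at the heart of that traversal can be sketched with the standard "slab" method (again a generic version, not the assignment code):

```cpp
#include <algorithm>
#include <limits>

// Slab test: does the ray (origin o, direction d) hit the axis-aligned box
// [lo, hi] anywhere in the interval (tMin, tMax)? If not, every primitive
// inside the box can be skipped during BVH traversal.
bool hitAABB(const double o[3], const double d[3],
             const double lo[3], const double hi[3],
             double tMin, double tMax) {
    for (int axis = 0; axis < 3; ++axis) {
        double inv = 1.0 / d[axis];      // IEEE infinities handle d[axis] == 0
        double t0 = (lo[axis] - o[axis]) * inv;
        double t1 = (hi[axis] - o[axis]) * inv;
        if (inv < 0.0) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax <= tMin) return false;  // the per-axis intervals do not overlap
    }
    return true;
}
```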

There is a ton of detail that can dramatically impact the performance of a BVH. For instance, choosing an appropriate splitting method between two nodes at the same level of the tree is crucial. If one node contains all of the geometry except for, say, a single primitive, there is not going to be any significant performance gain, since any intersection with that box would necessitate testing every primitive except the single one excluded from it.

One of the downsides of a BVH is that construction time is required in advance of intersection testing. In some cases construction can take as long as, if not longer than, the render itself. There are steps one can take to improve construction times, such as using more efficient search algorithms or parallelizing the process. A BVH can also be cached: calculated once, then reused multiple times.

A BVH is not the only way to structure scene data. For instance, a structure known as a kd-tree organizes the data spatially and can in some instances outperform a BVH.

The Ajax bust model is composed of millions of triangles. Rendering time with the BVH is around 5 seconds.

Report assignment 02:

Assignment 3 – Textures

Up until now material properties have been specified by a single color value. This is extremely limiting; real-life materials are composed of a variety of tints, tones, and shades. In this assignment a texture class interface is defined that allows for more complex material properties.

The first of these texture subclasses is a solid texture, which replaces the color structure. Now, instead of using a “Color3f” type to define a solid color, the texture class is used. This allows us to define colors while still maintaining the structure provided by the texture class architecture.

A checkerboard texture is defined next. The coordinate position of an intersection is used to choose between two texture values with a bit of trigonometry. The algorithm used in this assignment comes from Ray Tracing: The Next Week chapter 4.
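The book's checker test boils down to the sign of a product of sines at the hit point. A minimal sketch, with colors collapsed to a single value for brevity and "scale" controlling the tile size:

```cpp
#include <cmath>

// Checker lookup in the spirit of Ray Tracing: The Next Week ch. 4: the sign
// of sin(s*x) * sin(s*y) * sin(s*z) at the hit point alternates in a 3D grid
// pattern, selecting between the "odd" and "even" sub-texture values.
double checkerValue(double x, double y, double z,
                    double odd, double even, double scale = 10.0) {
    double s = std::sin(scale * x) * std::sin(scale * y) * std::sin(scale * z);
    return (s < 0.0) ? odd : even;
}
```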

Procedural marble and checkerboard textures

Next is the noise texture class. Noise patterns are extremely useful; they are able to represent a number of natural materials, such as wood and marble. In this assignment, the Perlin noise algorithm is used. The algorithm generates pseudorandom noise using a hash table: the hash function uses coordinate positions to index an array of randomly generated values. As one can see in the first image below, this generates high-frequency, blocky noise. To reduce the high frequencies, linear interpolation is applied to soften/blur the noise, which is then further smeared, for lack of a better word, by a turbulence algorithm. By adjusting the parameters for turbulence and interpolation, a variety of noise patterns can be generated.
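The turbulence step can be sketched as a sum of noise "octaves", each at double the frequency and half the amplitude of the previous one (the base noise function is abstracted away here; any smooth noise(x, y, z) would do):

```cpp
#include <cmath>
#include <functional>

// Turbulence as commonly paired with Perlin noise: accumulate several
// octaves of a base noise function, halving the amplitude and doubling the
// frequency each time, then take the absolute value of the sum.
double turbulence(const std::function<double(double, double, double)>& noise,
                  double x, double y, double z, int octaves = 7) {
    double sum = 0.0, weight = 1.0;
    for (int i = 0; i < octaves; ++i) {
        sum += weight * noise(x, y, z);
        weight *= 0.5;                   // halve the amplitude...
        x *= 2.0; y *= 2.0; z *= 2.0;    // ...and double the frequency
    }
    return std::fabs(sum);
}
```

Feeding the turbulence value through a sine gives the banded, marble-like pattern shown in the render.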

Finally, there is texture mapping. It is somewhat similar to Perlin noise in the sense that coordinate positions are once again used to look up a color value; the big difference is that the values are stored in a 2D image rather than a procedurally generated array. To map a 2D image onto a 3D object, a UV map needs to be generated that specifies this transformation. For spheres this mapping requires a spherical coordinate system, and for planes it is a simple matter of mapping XY coordinates to UV space. For more complex surfaces, the UV coordinates of each vertex need to be defined using a third-party modeling application, such as Blender.
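For the sphere case, a minimal sketch of the spherical-coordinate mapping, following the convention used in Ray Tracing: The Next Week (other conventions differ by an axis choice or an offset):

```cpp
#include <cmath>

// Spherical UV mapping: convert a point on the unit sphere (centered at the
// origin) to (u, v) in [0, 1]^2. theta is the polar angle measured from the
// -y axis; phi is the azimuthal angle shifted into [0, 2*pi].
void sphereUV(double x, double y, double z, double& u, double& v) {
    const double pi = 3.14159265358979323846;
    double theta = std::acos(-y);
    double phi = std::atan2(-z, x) + pi;
    u = phi / (2.0 * pi);   // longitude -> u
    v = theta / pi;         // latitude  -> v
}
```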

UV mapping

Report assignment 03:

Assignment 4 – Material Sampling

The final two assignments focus on more advanced light sampling concepts. Up until now a fairly naïve approach has been used to scatter light from materials and to generate secondary rays. For instance, the Lambertian material generates scattered rays by choosing a random point on the unit sphere above the surface. Ultimately, this leads to slow convergence, particularly for scenes with small light sources, resulting in high variance, or noise. The naïve fix would be to simply increase the samples per pixel or the number of ray bounces. However, this leads to rapidly increasing render times for diminishing returns. The goal is to convert the current renderer to use Monte Carlo integration to find the total radiance at a given point.

The first step is to implement a variety of sampling methods and their associated probability density functions (PDFs). Below are some visualizations of different sampling methods. On the left is a uniform distribution on a hemisphere and on the right is a cosine-power distribution on a hemisphere. As the cosine-power value increases, the sample points trend towards the surface normal. This will be useful when defining the distribution of a Phong material, which can have a variety of roughness values.
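A sketch of cosine-power sampling and its matching PDF, using the standard inversion-method formulas (with exponent n = 0 this reduces to uniform hemisphere sampling):

```cpp
#include <cmath>

struct Dir { double x, y, z; };

// Sample a direction from the cosine-power (Phong-lobe) distribution about
// the local +z axis, given two uniform random numbers u1, u2 in [0, 1).
// Larger exponents n concentrate samples around +z.
Dir sampleCosinePower(double u1, double u2, double n) {
    const double pi = 3.14159265358979323846;
    double cosTheta = std::pow(u1, 1.0 / (n + 1.0));
    double sinTheta = std::sqrt(std::max(0.0, 1.0 - cosTheta * cosTheta));
    double phi = 2.0 * pi * u2;
    return {sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta};
}

// Matching PDF, needed to weight the sample in the Monte Carlo estimator.
double cosinePowerPdf(double cosTheta, double n) {
    const double pi = 3.14159265358979323846;
    return (n + 1.0) / (2.0 * pi) * std::pow(cosTheta, n);
}
```

In the renderer the sampled direction would still need to be rotated from this local frame into the frame of the surface normal (or the mirror direction, for Phong).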

Next, the material models need to be refactored to accommodate the new sampling methods. The Lambertian material applies uniform hemisphere sampling, since it is a diffuse material and light therefore has a uniform probability of scattering in any direction above the surface. A new material class called Phong is created that applies the cosine-power distribution. The Phong material models glossy reflections by approximating surface roughness: it “blurs” the reflection direction about the perfect mirror direction, and the amount of spread can be controlled by a parameter to model different degrees of surface roughness.

Each sphere has a Phong material applied. From left to right the roughness value increases.

The final step is to generalize the main render loop so that a variety of sampling methods can be applied. To test the new generalized render loop, two concrete integrator classes, normals and ambient occlusion (AO), are created. The normals integrator simply shows the normal direction of each primitive, while the AO integrator assumes that a (diffuse) surface receives uniform illumination from all directions (similar to the conditions inside a light box) and that visibility is the only effect that matters.

Lastly, there is the material sampling integrator, which encapsulates the path tracing algorithm used in all the previous assignments. In the next assignment, this encapsulation will allow us to experiment with other light sampling methods. As you can see in the image below, material sampling produces high variance. This is because the probability of any given light path reaching a light source decreases as the material roughness increases and the size of the light source decreases.

Material-based light sampling. Demonstrates the variance that arises with varying light source sizes and material roughness.

Report assignment 04:

Assignment 5 – Light sampling

Now, with the infrastructure to support Monte Carlo integration and a variety of integrators in place, it’s time to implement light sampling. As demonstrated in the previous assignment, material sampling is ineffective, and produces high variance, when scenes have small light sources and the probability of a light path intersecting a light source is low. This is where light sampling proves useful. Next Event Estimation (NEE) is such a method for direct light sampling: at every intersection along a path a ray is cast towards a light source, and the returned radiance is then weighted by the probability of a random ray being cast along that direct path. This ensures that radiance from an emitter is accounted for at every vertex along a light path, and it works especially well for diffuse materials.

In scenes where both diffuse and reflective materials are present, neither material sampling nor direct light sampling alone produces low variance. A method known as Multiple Importance Sampling (MIS) is able to capture the best of both. In my implementation, at each vertex along a light path either the material or the light emitters are sampled, and the sample is then weighted using a combination of both methods’ individual PDFs. In a sense, it produces a weighted average of the two strategies.
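As an illustration of that combination weight, here is the widely used power heuristic (exponent 2) from Veach's MIS formulation; a generic sketch, not my exact code:

```cpp
// Power heuristic for MIS: weight a sample drawn from strategy f against a
// competing strategy g, given each strategy's sample count and its PDF
// evaluated at the sampled direction. Weights from the two strategies for
// the same direction sum to 1, so contributions are not double counted.
double powerHeuristic(int nf, double fPdf, int ng, double gPdf) {
    double f = nf * fPdf, g = ng * gPdf;
    return (f * f) / (f * f + g * g);
}
```

When one strategy's PDF dominates for a given direction (say, a sharp Phong lobe), its weight approaches 1 and the noisier strategy is suppressed.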

In the scene rendered directly below, only direct lighting is accounted for. In the Jensen box scene, full path tracing, using the NEE and MIS integration methods, accounts for both direct and indirect light paths. The purpose of these scenes is to demonstrate the variance produced by the different methods described previously.

Report assignment 05:

Closing

Even with all the assignments done, there are numerous additional features that could be explored. One major topic I haven’t touched is photon mapping. There are also smaller topics, like exploring more advanced sampling methods and camera models. It’s likely I will be working on yet another ray tracing project in the near future.