Naively testing a ray against every object in the scene to find the closest intersection runs in O(n) linear time. That sounds fine, but once we consider that this test must be performed for every pixel in an image, it becomes incredibly slow even for the simplest of scenes.
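The naive closest-hit search can be sketched as below. The names (`Ray`, `Hit`, `Object::intersect`) are illustrative stand-ins, not the renderer's actual classes; the point is just that every object is tested for every ray.

```cpp
#include <cfloat>
#include <vector>

// Hypothetical minimal interfaces for illustration.
struct Ray { /* origin, direction */ };
struct Hit { float t; int objectIndex; };

struct Object {
    // Returns true on intersection and writes the ray parameter t.
    virtual bool intersect(const Ray& ray, float& t) const = 0;
    virtual ~Object() = default;
};

// O(n) closest-hit search: every object is tested, keeping the smallest t.
bool closestHit(const Ray& ray, const std::vector<Object*>& scene, Hit& hit) {
    hit.t = FLT_MAX;
    hit.objectIndex = -1;
    for (int i = 0; i < (int)scene.size(); ++i) {
        float t;
        if (scene[i]->intersect(ray, t) && t < hit.t) {
            hit.t = t;
            hit.objectIndex = i;
        }
    }
    return hit.objectIndex != -1;
}
```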
Consider an image 800 pixels wide and 600 pixels high and a scene with 10,000 geometric objects: that is 800 × 600 × 10,000 = 4.8 billion intersection tests in total.
To combat this we implement acceleration data structures, which enable us to quickly cull away any objects that cannot possibly intersect the ray.
A bounding volume hierarchy is an acceleration data structure that places successively smaller bounds on groups of objects. The idea is that if the ray does not hit the larger bounding box, it cannot possibly hit anything contained inside it. Eventually we reach the leaf nodes, which contain the actual objects that we run intersection tests against.
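A sketch of the idea, assuming a standard slab test for the boxes and a simple two-child node layout (the structs here are illustrative, not the renderer's actual code): if the ray misses a node's box, the whole subtree underneath it is culled without a single object test.

```cpp
#include <algorithm>
#include <cfloat>
#include <vector>

struct AABB {
    float lo[3], hi[3];
    // Slab test: does the ray origin + t*dir cross the box for some t in [0, tMax]?
    // invDir holds the precomputed reciprocals of the ray direction.
    bool hit(const float o[3], const float invDir[3], float tMax) const {
        float t0 = 0.0f, t1 = tMax;
        for (int a = 0; a < 3; ++a) {
            float tNear = (lo[a] - o[a]) * invDir[a];
            float tFar  = (hi[a] - o[a]) * invDir[a];
            if (tNear > tFar) std::swap(tNear, tFar);
            t0 = std::max(t0, tNear);
            t1 = std::min(t1, tFar);
            if (t0 > t1) return false;  // slab intervals don't overlap: miss
        }
        return true;
    }
};

struct BVHNode {
    AABB bounds;
    BVHNode* left = nullptr;   // interior node: two children
    BVHNode* right = nullptr;
    std::vector<int> objects;  // leaf node: indices into the scene's object list
};

// Collect the objects of every leaf whose box the ray hits; a missed box
// culls its entire subtree.
void traverse(const BVHNode* node, const float o[3], const float invDir[3],
              std::vector<int>& candidates) {
    if (!node || !node->bounds.hit(o, invDir, FLT_MAX)) return;
    if (!node->left && !node->right) {
        candidates.insert(candidates.end(),
                          node->objects.begin(), node->objects.end());
        return;
    }
    traverse(node->left, o, invDir, candidates);
    traverse(node->right, o, invDir, candidates);
}
```

Only the surviving candidates are then handed to the regular per-object intersection tests.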
I implemented a naive bounding volume hierarchy algorithm because it is simple to read and implement but still provides a significant speed-up. Later, when the need arises, a better implementation or another structure such as a kd-tree can be explored.
I ran some tests with the Utah teapot, approximately doubling the number of triangles each time and rendering with both methods.
(Render times in seconds: naive BVH vs. plain naive intersection testing.)
Extrapolating these values, we can see the difference more readily.
With multisampling we estimate the colour of a pixel by shooting multiple rays through that pixel and averaging the resulting colours. This provides a simple means of anti-aliasing.
I have implemented two sampling methods: random and stratified sampling.
With random sampling the points suffer from bunching, or areas of emptiness, within the unit square, which can give an inaccurate estimate of the average pixel colour.
Stratified sampling produces more uniform samples by evenly dividing the unit square into n by n sub-squares and generating each sample inside one of these sub-squares. The simplest form, called regular sampling, places a sample at the centre of each sub-square. With jittered sampling the samples are random, but still confined to their sub-squares.
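The jittered variant can be sketched as follows. `randomUnit` is a stand-in for whatever [0, 1) random-number source the renderer uses; the names are illustrative.

```cpp
#include <vector>

struct Sample2D { float x, y; };

// Jittered sampling: one random sample per cell of an n x n grid of
// sub-squares, so samples stay well distributed without being regular.
std::vector<Sample2D> jitteredSamples(int n, float (*randomUnit)()) {
    std::vector<Sample2D> samples;
    samples.reserve(n * n);
    float cell = 1.0f / n;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            samples.push_back({ (i + randomUnit()) * cell,
                                (j + randomUnit()) * cell });
    return samples;
}
```

Passing a generator that always returns 0.5 degenerates this into regular sampling, since every sample lands at its sub-square's centre.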
Below are the 16 samples within a unit square produced by each method.
I have implemented some commonly used textures. Textures can be combined to produce more complicated ones: for example, you can mask part of a texture and then have a gradient, bitmap, or any other texture applied to those parts.
At the moment all textures return a <Colour> value. I wonder if there is a way to include nodes that modify a texture, like a gamma curve, some sort of filter, etc., which could be added as an input to a texture.
Other things such as rotation and tiling in the U and V directions can be added later to the base class <Texture>; currently rotation has only been implemented for the striped texture.
Below is an example of combining different textures to produce a more complex texture.
The mix texture works like an add between the first two textures, each weighted (multiplied) by the mix texture's amount. An example mocked up in Photoshop shows red and green mixed with grey. Note that the final colour is not (0.5, 0.5, 0), since gamma correction is applied to textures.
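My reading of that weighted add is a linear interpolation driven by the third texture; a minimal sketch (the `Colour` struct and function name are illustrative, not the renderer's actual API):

```cpp
struct Colour { float r, g, b; };

// Mix as a linear interpolation: scale a by (1 - t) and b by t, then add.
// With a = red, b = green and t = 0.5 this gives (0.5, 0.5, 0) in linear
// space; any gamma correction would be applied on top of this afterwards.
Colour mix(const Colour& a, const Colour& b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}
```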
I worked on the OBJ loader to correctly import the normal vectors, texture coordinates, and vertices, so that I can start exporting models from 3ds Max to my renderer. Here are some comparative shots with a texture map applied. The UVW coordinates give us a way of mapping the two-dimensional coordinates u and v onto the surface of the three-dimensional model; with the w coordinate equal to zero, of course, only red and green are visible, with a mix (yellow/orange) across the surface. Lastly, we have the world-space normals, which in this case are the same as the object-space normals. A normal indicates the direction a polygon is facing and is perpendicular to the surface at the vertex. It is useful for lighting and for smoothing the appearance of an object.
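The two debug visualisations above boil down to very small colour mappings. A sketch, with illustrative names (the `Colour` struct and function names are assumptions, not the renderer's actual API):

```cpp
struct Colour { float r, g, b; };

// UVW debug view: u drives red, v drives green, w drives blue. With w = 0
// the surface blends from red through yellow/orange to green.
Colour uvwToColour(float u, float v, float w) {
    return { u, v, w };
}

// Normal debug view: remap each component of the unit normal from the
// range [-1, 1] into [0, 1] and interpret the result as RGB.
Colour normalToColour(float nx, float ny, float nz) {
    return { 0.5f * (nx + 1.0f),
             0.5f * (ny + 1.0f),
             0.5f * (nz + 1.0f) };
}
```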
I have worked on being able to load and save textures/images. Originally I used the *.ppm file format with magic number P3 (ASCII), but I have switched to the representation with magic number P6 (binary), since Photoshop can only save out to that one. I also had some headaches parsing this format: there were strange bugs I could not figure out, but it turned out to be something stupid, like forgetting to make sure all the whitespace was eaten up, since a sneaky newline character was produced when saving from Photoshop.
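A minimal P6 writer, sketching the format: an ASCII header of the magic number, dimensions, and maximum value, followed by raw RGB byte triplets (this is my own sketch, not the renderer's actual code).

```cpp
#include <cstdio>
#include <vector>

// Write a P6 (binary) PPM: header "P6\n<width> <height>\n255\n" followed
// by width*height raw RGB byte triplets. pixels must hold exactly
// width * height * 3 bytes.
bool writeP6(const char* path, int width, int height,
             const std::vector<unsigned char>& pixels) {
    if ((int)pixels.size() != width * height * 3) return false;
    std::FILE* f = std::fopen(path, "wb");  // binary mode matters on Windows
    if (!f) return false;
    std::fprintf(f, "P6\n%d %d\n255\n", width, height);
    std::fwrite(pixels.data(), 1, pixels.size(), f);
    std::fclose(f);
    return true;
}
```

When reading the format back, the fix for the bug mentioned above is to consume all whitespace (spaces, tabs, and newlines) between header tokens and before the pixel data, rather than assuming a single separator character.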
A camera system has been implemented, so it is now possible to specify different viewpoints of a scene. I have chosen to match the camera system of 3ds Max, since it is widely used in industry, has been around for the past 25 years, and will therefore serve as a ground truth to compare my renderings against.
The axes of the world coordinate system, however, differ from the ones I have defined in my system, as shown below. As can be seen, to convert to 3ds Max's world coordinate system you simply rotate 90 degrees around the x-axis, or equivalently negate the y-coordinate and then swap the y and z coordinates.
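The negate-then-swap rule as stated comes down to a one-line function (the `Vec3` struct and function name are illustrative, not my renderer's actual code):

```cpp
struct Vec3 { float x, y, z; };

// Convert a point into 3ds Max's world coordinate system using the rule
// above: negate y, then swap y and z, i.e. (x, y, z) -> (x, z, -y).
// This is the same as a 90-degree rotation around the x-axis.
Vec3 toMaxSpace(const Vec3& p) {
    return { p.x, p.z, -p.y };
}
```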