Naively testing a ray against every object in the scene to find the closest intersection runs in O(n) time per ray. That sounds fine on its own, but since we must do this for every pixel in the image, it becomes incredibly slow even for the simplest of scenes.

Consider an image 800 pixels wide and 600 pixels tall and a scene with 10,000 geometric objects: that is 4.8 billion intersection tests in total.
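The count above is just the product of the three numbers, ignoring shadow and secondary rays, which only make things worse:

```python
# Brute-force cost: one intersection test per object, per pixel
# (primary rays only; shadow/bounce rays would multiply this further).
width, height, objects = 800, 600, 10_000
total_tests = width * height * objects
print(total_tests)  # 4,800,000,000
```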

To combat this we implement acceleration data structures, which let us quickly cull away any objects that cannot possibly intersect the ray.

A bounding volume hierarchy (BVH) is an acceleration data structure that places successively smaller bounds around groups of objects. The idea is that if the ray does not hit a larger bounding box, it cannot possibly hit anything contained inside it. Eventually we reach the leaf nodes, which hold the actual objects we run exact intersection tests against.
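The culling idea can be sketched with the standard slab test for a ray against an axis-aligned bounding box; all names here are illustrative, not taken from my actual renderer:

```python
# Slab test: a ray origin + t*dir (t >= 0) hits an axis-aligned box iff
# the intervals where it is inside each pair of slabs all overlap.
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """inv_dir is the componentwise reciprocal of the ray direction."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

# If the ray misses a node's box, the whole subtree is culled at once:
def candidates(node, origin, inv_dir):
    if not ray_hits_aabb(origin, inv_dir, node.box_min, node.box_max):
        return []            # skip everything under this node
    if node.objects is not None:
        return node.objects  # leaf: these get exact intersection tests
    return (candidates(node.left, origin, inv_dir)
            + candidates(node.right, origin, inv_dir))
```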

I implemented a naive bounding volume hierarchy because it is simple to read and implement yet still provides a significant speedup. Later, when the need arises, a better build strategy or another structure such as a kd-tree can be explored.
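One common naive build, sketched below under my own assumptions (the post does not spell out the exact strategy), is top-down: compute the bounds of the group, sort along the longest axis, and split at the median until the groups are small enough to become leaves.

```python
# Naive top-down BVH build: median split along the longest axis.
# Illustrative sketch only; objects are (centroid, box_min, box_max) triples.
class BVHNode:
    def __init__(self, box_min, box_max, left=None, right=None, objects=None):
        self.box_min, self.box_max = box_min, box_max
        self.left, self.right = left, right
        self.objects = objects  # non-None only for leaf nodes

def build_bvh(objs, leaf_size=4):
    # Bounds enclosing every object in this group.
    box_min = tuple(min(o[1][a] for o in objs) for a in range(3))
    box_max = tuple(max(o[2][a] for o in objs) for a in range(3))
    if len(objs) <= leaf_size:
        return BVHNode(box_min, box_max, objects=objs)
    # Split at the median centroid along the longest axis of the bounds.
    axis = max(range(3), key=lambda a: box_max[a] - box_min[a])
    objs = sorted(objs, key=lambda o: o[0][axis])
    mid = len(objs) // 2
    return BVHNode(box_min, box_max,
                   left=build_bvh(objs[:mid], leaf_size),
                   right=build_bvh(objs[mid:], leaf_size))
```

Median splitting guarantees a balanced tree of O(log n) depth, though it ignores surface area, which is why smarter builders (e.g. SAH-based) usually trace faster.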

I ran some tests with the Utah teapot, approximately doubling the triangle count each time and rendering with both methods.

| Triangle count | Naive BVH (seconds) | No BVH (seconds) |
|---|---|---|
| 64 | 4.12 | 4.88 |
| 256 | 4.70 | 10.88 |
| 576 | 5.51 | 28.39 |
| 1024 | 5.07 | 41.22 |
| 2304 | 6.60 | 93.26 |
| 4096 | 6.55 | 160.20 |

Extrapolating these values makes the difference even clearer: the time without a BVH grows roughly linearly with triangle count, while the BVH time stays nearly flat.
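A rough extrapolation from the measured timings, assuming the no-BVH time is linear in triangle count (the 100,000-triangle target is hypothetical, not a measurement):

```python
# Two-point slope through the first and last measurements above.
tris = [64, 256, 576, 1024, 2304, 4096]
no_bvh = [4.88, 10.88, 28.39, 41.22, 93.26, 160.20]

slope = (no_bvh[-1] - no_bvh[0]) / (tris[-1] - tris[0])  # seconds per triangle
predicted = no_bvh[0] + slope * (100_000 - tris[0])
print(f"predicted no-BVH time at 100k triangles: ~{predicted:.0f} s")  # roughly an hour
```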