In a game scene, the number of objects can vary significantly depending on the type of game. Rendering all of these objects simultaneously can severely impact performance, so most games must cull invisible objects before they are passed to the rendering pipeline. Invisible objects typically fall into two categories: those outside the camera's field of view, and those occluded by other objects, such as walls or terrain.
Determining whether an object is within the camera's field of view is fairly straightforward, but determining whether it is occluded is much harder. Occluders can be irregularly shaped, and occlusion happens at the pixel level, so it cannot be resolved with the simple geometric intersection tests used for camera frustum culling. The standard solution resembles the depth test in rendering: objects are rasterized into screen space and compared pixel by pixel against the depth of what is already drawn.
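To make the contrast concrete, here is a minimal sketch of the "straightforward" side, frustum culling. It assumes (as an illustration, not any particular engine's API) that the frustum is represented as six inward-facing planes and each object is conservatively bounded by a sphere:

```cpp
#include <array>

// A plane in the form n.p + d = 0, with the normal n pointing into the frustum.
struct Plane { float nx, ny, nz, d; };

// A conservative bounding sphere around an object.
struct Sphere { float x, y, z, radius; };

// Signed distance from a point to a plane (positive on the inside).
float signedDistance(const Plane& pl, float x, float y, float z) {
    return pl.nx * x + pl.ny * y + pl.nz * z + pl.d;
}

// A sphere can be culled only if it lies entirely behind at least one plane.
bool sphereInFrustum(const std::array<Plane, 6>& frustum, const Sphere& s) {
    for (const Plane& pl : frustum) {
        if (signedDistance(pl, s.x, s.y, s.z) < -s.radius)
            return false; // fully outside this plane: cull the object
    }
    return true; // inside or intersecting all six planes: keep it
}
```

Six plane tests per object is all it takes; no such closed-form test exists for occlusion, which is why the pixel-level approaches below are needed.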
Presently, there are three primary methods to address this issue: hardware occlusion culling, software occlusion culling, and precomputation.
Hardware occlusion culling
Hardware Occlusion Queries
Graphics hardware APIs often feature Hardware Occlusion Queries, which can estimate the visibility of an object’s pixels during rendering. When activated, this…
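The core of what such a query reports can be illustrated on the CPU. The sketch below is not a real hardware query (those run on the GPU, e.g. OpenGL's `GL_SAMPLES_PASSED` query object); it only simulates the quantity a query returns, by rasterizing a screen-space rectangle at a fixed depth against a depth buffer and counting the samples that would pass the depth test:

```cpp
#include <vector>
#include <cstddef>

// A minimal depth buffer; smaller depth means closer to the camera,
// and 1.0f is the cleared "far plane" value.
struct DepthBuffer {
    int width, height;
    std::vector<float> depth;
    DepthBuffer(int w, int h) : width(w), height(h), depth(w * h, 1.0f) {}
};

// CPU analogue of an occlusion query: count how many samples of a
// screen-space rectangle [x0,x1) x [y0,y1) at depth z pass the depth test.
// A result of zero means the object is fully occluded and can be skipped.
std::size_t countVisibleSamples(const DepthBuffer& db,
                                int x0, int y0, int x1, int y1, float z) {
    std::size_t passed = 0;
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x)
            if (z < db.depth[y * db.width + x])
                ++passed;
    return passed;
}
```

In a real engine the rectangle would instead be the object's bounding volume rendered by the GPU with color and depth writes disabled, and the sample count (or a boolean "any samples passed") would be read back from the query object.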