
A Primer to Methods in Computer Gaming Graphics
by Mark Barry

Part II (see Part I)

Relief Mapping

An emerging technique that has not yet been implemented in computer games is relief mapping, spearheaded by Policarpo et al. [12]. Like normal mapping and parallax mapping, relief mapping aims to represent surface perturbations realistically on flat surfaces. The results are quite impressive, with several advantages over normal and parallax mapping: self-occlusion, interpenetration, and self-shadowing. Along with solving the parallax problem beautifully, the bumps on a relief map may occlude one another, which typically happens when the surface is viewed at steep angles. An interesting property of relief maps is that other objects may intersect the surface (which is still flat!) yet appear to intersect the bumps and perturbations correctly, properly occluding parts of both the surface and the other object. This is a unique object interaction that the aforementioned techniques do not exhibit. Relief mapping also has the ability, not seen in the earlier techniques, to cast shadows on itself: because the relief map models physical bumps on a surface, those bumps occlude light and produce shadows. Fast, accurate shadowing has long been a challenging problem in gaming graphics, so seeing relief maps render shadows accurately is very impressive and of course adds to the realism. But again, when viewed from steep angles, it becomes apparent that a relief map is still only a painting on a flat surface. The virtual bumps should be silhouetted against the screen background but are not; instead, a flat, straight horizon remains. The authors were able to render relief maps in real time, making them suitable for games. Still, because they used high-end hardware for a simple example, the question remains how prevalent relief maps will be in large gaming environments; as one can imagine, relief mapping every surface in a game would be quite computationally intensive.
With graphics accelerated hardware becoming more powerful each year, though, no doubt some form of relief mapping will be seen in games soon to come.
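The core of the technique is a ray march through a depth texture in tangent space: a coarse linear search steps along the view ray until a sample falls below the height field, and a binary search then refines the hit point. The following Python sketch illustrates the two searches; the 4x4 height field, the step counts, and the nearest-neighbour lookup are illustrative assumptions, not the paper's actual shader.

```python
# Hypothetical 4x4 depth map (0.0 = surface top, 1.0 = deepest point),
# standing in for the relief texture's depth channel.
HEIGHT = [
    [0.0, 0.2, 0.2, 0.0],
    [0.2, 0.8, 0.8, 0.2],
    [0.2, 0.8, 0.8, 0.2],
    [0.0, 0.2, 0.2, 0.0],
]

def depth_at(u, v):
    """Nearest-neighbour lookup into the depth map for texcoords in [0, 1]."""
    x = min(int(u * 4), 3)
    y = min(int(v * 4), 3)
    return HEIGHT[y][x]

def relief_march(u, v, view_dir, steps=32, refine=8):
    """Walk a view ray through the depth layer under texel (u, v).

    view_dir = (dx, dy, dz) in tangent space, with dz > 0 pointing into
    the surface.  A linear search finds the first sample below the height
    field; a binary search then refines the hit.  Returns the offset
    texture coordinates of the intersection.
    """
    dx, dy, dz = view_dir
    # Scale the step so the ray traverses the full [0, 1] depth range.
    su, sv, sd = dx / dz / steps, dy / dz / steps, 1.0 / steps
    cu, cv, cd = u, v, 0.0
    # Linear search: advance until the ray dips below the stored depth.
    for _ in range(steps):
        if cd >= depth_at(cu, cv):
            break
        cu, cv, cd = cu + su, cv + sv, cd + sd
    # Binary search: halve the step and bracket the intersection.
    for _ in range(refine):
        su, sv, sd = su / 2, sv / 2, sd / 2
        if cd >= depth_at(cu, cv):
            cu, cv, cd = cu - su, cv - sv, cd - sd
        else:
            cu, cv, cd = cu + su, cv + sv, cd + sd
    return cu, cv
```

A real implementation runs per fragment on the GPU and samples a filtered depth texture, but the structure of the two searches is the same.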

Figure 2: Bump mapping (left), Parallax mapping (center), Relief mapping (right). [12]

Environment Mapping

The various forms of “mapping” discussed thus far act as paint applied directly to a mesh surface. A problem arises when surfaces are modeled with reflectance, where the appearance of the surface depends not only on what is painted on it but also on the environment around it. What kind of texture map would one apply to a mirror? This is a tricky problem because the surface appearance depends not only on the appearance of objects in the surrounding environment, but also on the surface’s orientation and the viewer’s orientation. In games, as the player moves around a reflective surface, the “texture” should appear to change, and of course the change should be convincing given the environment. Fortunately, a fairly simple solution has been found to this problem. Blinn and Newell [2] presented a method of texture mapping now known as environment mapping. Texture mapping at the time parameterized a surface with simple Cartesian coordinates, but environment mapping parameterizes it with polar coordinates. When the reflective object is rendered, the Cartesian coordinates of a point are converted to polar coordinates consisting of an azimuthal angle and a polar angle that map into a 2D texture image. Usually the azimuthal angle is plotted as the abscissa and the polar angle as the ordinate.
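The Cartesian-to-polar conversion can be sketched in a few lines of Python. The axis conventions here (y up, azimuth measured around the vertical axis) are assumptions for illustration; any consistent choice works.

```python
import math

def latlong_uv(direction):
    """Map a 3D reflection direction to lat-long texture coordinates.

    In the spirit of Blinn and Newell's scheme, the azimuthal angle
    becomes the abscissa (u) and the polar angle the ordinate (v),
    both scaled into [0, 1].
    """
    x, y, z = direction
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    azimuth = math.atan2(z, x)   # angle around the vertical (y) axis
    polar = math.acos(y)         # angle down from the "north pole"
    u = (azimuth + math.pi) / (2.0 * math.pi)
    v = polar / math.pi
    return u, v
```

For example, a direction pointing straight up lands at v = 0 (the top row of the texture), and a horizontal direction lands at v = 0.5.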

Modern gaming graphics use a similar idea referred to as a cube map or skybox [11]. Instead of converting a 3D Cartesian coordinate to polar form, the coordinate is scaled to map onto the surface of a cube. The cube acts like a bounding box or “world” surrounding the reflective object and consists of six faces, each with its own texture. Special “3D texture” functions used in graphics programming take in a 3D Cartesian point (on the object’s surface) and return the corresponding texture pixel for that point. Cube mapping is supported on modern graphics accelerated hardware and is capable of real-time rendering. Games also commonly use cube/environment mapping to simulate the sky and distant objects: just as a cube map surrounds a reflective object, a cube map surrounds the player and other game geometry. In this use, it is known as a skybox. If game designers are not careful, the texture seams (edges) of the skybox cube map won’t line up well with each other, creating a visible discontinuity (in the clouds of the sky, for example).
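The face-selection logic behind those “3D texture” lookups can be sketched as follows. The axis with the largest magnitude picks the face, and the other two components are divided by it to give coordinates in [-1, 1]; the sign conventions here follow the OpenGL cube-map layout described in [11], and the face labels are just for illustration.

```python
def cube_face(direction):
    """Select which of six cube-map faces a direction vector hits,
    returning the face label and the (s, t) coordinates on that face."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        # X is the major axis.
        face = '+X' if x > 0 else '-X'
        s, t, m = (-z if x > 0 else z), -y, ax
    elif ay >= az:
        # Y is the major axis.
        face = '+Y' if y > 0 else '-Y'
        s, t, m = x, (z if y > 0 else -z), ay
    else:
        # Z is the major axis.
        face = '+Z' if z > 0 else '-Z'
        s, t, m = (x if z > 0 else -x), -y, az
    return face, s / m, t / m
```

Hardware performs this same selection and division per lookup, then samples the chosen face's 2D texture at (s, t).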

In games, the skybox cube map usually remains static, so if other objects in the game pass near a reflective object, they will not show up in the reflection; an object would have to be part of the skybox cube map texture to be visible on the reflective surface. To solve this problem, each face of the cube map must be updated based on the appearance of the entire scene, which means rendering the scene from six different angles corresponding to the six faces of the cube map. Once the cube map is updated in this way, the reflective surface will be environment mapped properly. Of course, rendering the scene six times to produce an accurate cube map, just to render the full scene once, all in one sixtieth of a second, is very computationally intensive and is not usually done in games at this time.

Figure 3: Example of environment mapping.

Alpha Blending

Another interesting problem is that of transparent textures. How would one go about texture mapping glass, for example? Consider modeling a window in a wall: the solid wall requires only a single polygon, but building geometry around a window opening would take many more. Porter and Duff [13] introduced a solution to this problem by creating a new “color channel” for images. While standard images consist of three color channels (red, green, and blue), they proposed a fourth “alpha” channel to represent transparency. The alpha value ranges from 0 to 1, where 0 represents a fully transparent color and 1 a fully opaque color. Though alpha compositing (or alpha blending) is a powerful general technique for combining and blending multiple images, it has a simple application to computer games. Instead of simple RGB textures, game textures may now be RGBA textures. When a textured pixel is rendered, the alpha value is checked, and if the texture is fully transparent there, the output pixel is left unchanged. This way, objects in the background that would otherwise have been occluded by the foreground object show through. Similarly, if an alpha value is somewhere between 0 and 1, the textured pixel is blended with the existing background pixel. Alpha blending is a good and fast approximation for rendering transparent textures such as glass. Transparency is also useful for other optimizations, such as modeling a chain-link fence where only the wire is meant to be visible; modeling the wire with a large number of polygons would be terribly inefficient. Alpha blending is also often used to simulate smoke or particles in the air because of their semitransparent appearance. Its limitation, as in the case of modeling glass, is that it does not model the underlying optics of how glass refracts light.
Overall, alpha blending is an excellent addition to texture mapping that can be done quickly in graphics hardware and yields desirable results.
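The blend described above is Porter and Duff's “over” operator. A minimal sketch, assuming float channels in [0, 1] and an already-opaque background:

```python
def blend_over(src, dst):
    """Composite a foreground RGBA pixel over an opaque background RGB
    pixel: each output channel is alpha * src + (1 - alpha) * dst."""
    r, g, b, a = src
    dr, dg, db = dst
    return (a * r + (1.0 - a) * dr,
            a * g + (1.0 - a) * dg,
            a * b + (1.0 - a) * db)
```

With alpha = 0 the background passes through untouched, with alpha = 1 the foreground fully replaces it, and intermediate values mix the two, which is exactly the per-pixel behavior described above.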


Mipmapping

For every frame of game animation, every textured pixel must be fetched from an image known as the texture map. Most of the time, the pixel resolution on screen does not match the pixel resolution of the texture map, so some sort of scaling must take place. For example, the screen area to be covered may be twice as large as the texture mapped there. In that case there is not enough texture information to “fill in the gaps”, so the color values of neighboring texture pixels are interpolated to generate approximate values in between. The result looks rather poor when small textures are used to cover large objects in games: the textures appear smeared and stretched. The opposite problem occurs when large textures are mapped into small screen areas. In this scenario, only certain pixels are chosen from the texture map, and as the textured screen area changes size, a new sampling of texture pixels must be chosen, different from what was previously displayed. These abrupt changes between rendered frames often cause visual discontinuities or “popping”. Within a single rendered frame, the texture may appear jagged or “pixelated”, a result of aliasing. Williams [17] devised a technique, today known as mipmapping, which solves this problem and also makes for faster rendering. Mipmapping takes a texture image and scales it down using a high-quality filtering algorithm; the sized-down image is then free of the aliasing artifacts. Successively smaller images are generated to create various levels of detail. Simply put, the smaller mipmaps are used to texture smaller regions on the screen. Besides making small areas look much better, this eliminates the computation needed to resize and filter in real time. The downside to using mipmaps is the extra memory required to store the scaled copies of texture images.
But because of mipmaps’ importance and their relative ease of use, the technique is integrated into popular graphics programming APIs such as OpenGL and DirectX.
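Generating the mip chain amounts to repeated filtered downsampling. A minimal sketch using a 2x2 box filter on a square grayscale image (real generators often use better filters, in line with the “high quality filtering” Williams describes):

```python
def downsample(image):
    """Halve a square image (a list of rows of gray values) by
    averaging each 2x2 block - a simple box filter."""
    n = len(image) // 2
    return [[(image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
              image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(n)] for y in range(n)]

def build_mipmaps(image):
    """Generate the full mip chain, from the original image down to a
    single pixel.  Level 0 is the original; each level halves both
    dimensions."""
    levels = [image]
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    return levels
```

The chain adds roughly one third extra storage in total, which is the memory cost mentioned above; the renderer then picks the level whose size best matches the screen area being textured.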

Figure 4: No mipmapping (left), mipmapping (right).

High Dynamic Range Rendering

A few of the most recent 3D games demonstrate high dynamic range (HDR) rendering. Because of the computational power required, only the latest graphics accelerated hardware supports it. A limitation in computer imaging is that colors are represented with a fixed number of bits, usually 8 bits per color channel, which translates to 256 possible shades per channel. This limits the brightness (or “dynamic range”) a color value can represent; in the real world, brightness levels go far beyond this limit. When a very bright light is rendered on screen, it appears washed out and is displayed as all white (a value of 255 for each color channel), a clipped version of what the luminance should actually be. In the process of rendering a full scene, the rays of this very bright light illuminate other surfaces. If the value of the light is capped at 255, then other surfaces can only be illuminated by that much, whereas in the real world the bright light would illuminate them with a much higher value. High dynamic range lighting in real life also has an interesting effect on the human eye. The best example is leaving a dark room to go outside on a sunny day: the eye has not yet adjusted to the outside brightness, so everything momentarily appears washed out; slowly the eye adjusts and the proper contrast is restored. This phenomenon is being successfully modeled in newer computer games to better match the way humans perceive the real world. [16]
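To display an HDR luminance on an 8-bit screen, a tone-mapping operator compresses the unbounded range into [0, 1) before quantization. A sketch using the simple L / (1 + L) curve, one common choice among the tone-reproduction operators surveyed in [16]:

```python
def tonemap(luminance):
    """Compress an unbounded HDR luminance (>= 0) into [0, 1) with the
    simple curve L / (1 + L): bright values are squeezed toward 1
    instead of clipping."""
    return luminance / (1.0 + luminance)

def to_8bit(luminance):
    """Tone-map, then quantize to an 8-bit display value."""
    return round(tonemap(luminance) * 255)
```

Unlike plain clipping at 255, this curve keeps very bright values distinguishable from one another; games can also animate an exposure factor over time to imitate the eye's slow adaptation described above.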

BSP Trees and Octrees

BSP trees and octrees are similar ideas used to speed up the rendering of frames in computer games. These tree data structures divide space into discrete regions that can be handled efficiently. For example, a single theoretical cube can be placed to encompass all the objects in a scene. In the case of octrees, this bounding cube is recursively subdivided until all the objects are separated into their own, smaller cubes. A tree data structure is generated that describes the location of an object, for example, as the child of the child of the parent cube. Representing objects in an octree makes location lookup very fast. Also, any subdivisions that contain only empty space and no objects may be condensed into a single, larger cube, so none of its children need be considered during a search. In games, BSP trees and octrees are used to cull unseen areas of the world; this is called view frustum culling. The player’s viewable area may include objects from only a few neighboring “cubes”, which are rendered, while those that are unseen may be collectively merged into one parent cube and discarded entirely. This concept is also very useful for collision detection – a major aspect of modern games. Objects near the player must be considered, but objects far away may be collectively discarded from consideration of possible collision. [6]
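The recursive subdivision can be sketched as a small point octree in Python. The capacity threshold and the point payload are illustrative assumptions; a game engine would store meshes and bounding volumes instead, but the subdivide-and-descend structure is the same.

```python
class Octree:
    """Minimal point octree: each cube either stores its points directly
    or splits into eight child cubes once it holds too many."""

    def __init__(self, center, half, capacity=2):
        self.center, self.half, self.capacity = center, half, capacity
        self.points, self.children = [], None

    def _child_index(self, p):
        # One bit per axis: which side of the center the point lies on.
        cx, cy, cz = self.center
        return int(p[0] >= cx) + 2 * int(p[1] >= cy) + 4 * int(p[2] >= cz)

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.capacity:
                self._split()
        else:
            self.children[self._child_index(p)].insert(p)

    def _split(self):
        # Create the eight child cubes and redistribute stored points.
        cx, cy, cz = self.center
        h = self.half / 2.0
        self.children = [
            Octree((cx + (h if i & 1 else -h),
                    cy + (h if i & 2 else -h),
                    cz + (h if i & 4 else -h)), h, self.capacity)
            for i in range(8)
        ]
        for p in self.points:
            self.children[self._child_index(p)].insert(p)
        self.points = []

    def query(self, p):
        """Descend to the leaf cube containing p.  Sibling cubes are
        never visited - the lookup speed-up described above."""
        node = self
        while node.children is not None:
            node = node.children[node._child_index(p)]
        return node.points
```

Frustum culling and collision queries work the same way: whole subtrees whose cubes fall outside the region of interest are skipped without examining their contents.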

Future Considerations

Gaming graphics are becoming so complex and realistic that they will soon no longer be the limiting factor in making game environments visually convincing. With high-resolution texture mapping, normal mapping, complex lighting effects, and other realistic models of the real world, the quality of gaming graphics is quite excellent. What becomes noticeably lacking is the realism of animation – the way objects and characters move. Animating humans and animals convincingly is extremely challenging. Character animation in games is difficult because it needs to be interactive: players want to interact with computer-driven characters on a level comparable with other humans, which generates movements and actions that cannot all be stored in a library of pre-animated motions. The emerging area of work and research essential to the future of games is that of high-quality character animation and interaction. [15]


Conclusion

Computer gaming has taken the clever tricks and efficient algorithms from years of graphics research and put them to very practical use. Modern games need to be interactive, fast, and realistic-looking, so the underlying algorithms and data structures must be streamlined. As the references show, many of the key graphics techniques were developed some thirty years ago. Though much of the research may seem old, these core concepts are inherent to 3D computer games and have been time-tested and proven. Many of the simple techniques developed since the dawn of computing will remain the fundamentals of computer graphics.



References

1. Blinn, J. F. 1978. Simulation of wrinkled surfaces. In Proceedings of the 5th Annual Conference on Computer Graphics and Interactive Techniques (August 23-25, 1978). SIGGRAPH '78. ACM Press, New York, NY, 286-292.

2. Blinn, J. F. and Newell, M. E. 1976. Texture and reflection in computer generated images. Commun. ACM 19, 10 (Oct. 1976), 542-547.

3. Catmull, E. 1974. A Subdivision Algorithm for Computer Display of Curved Surfaces. PhD thesis, University of Utah, Salt Lake City, Utah.

4. Eck, M., DeRose, T., Duchamp, T., Hoppe, H., Lounsbery, M., and Stuetzle, W. 1995. Multiresolution analysis of arbitrary meshes. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, S. G. Mair and R. Cook, Eds. SIGGRAPH '95. ACM Press, New York, NY, 173-182.

5. Fisher, S. S., Fraser, G., and Kim, A. J. 1998. Real-time interactive graphics in computer gaming. SIGGRAPH Comput. Graph. 32, 2 (May 1998), 15-19.

6. Fuchs, H., Kedem, Z. M., and Naylor, B. F. 1980. On visible surface generation by a priori tree structures. In Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques (Seattle, Washington, United States, July 14-18, 1980). SIGGRAPH '80. ACM Press, New York, NY, 124-133.

7. GeForce 7800. http://www.nvidia.com/page/geforce_7800.html

8. Hoppe, H. 1996. Progressive meshes. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '96. ACM Press, New York, NY, 99-108.

9. Kaneko, T., Takahei, T., Inami, M., Kawakami, N., Yanagida, Y., Maeda, T., and Tachi, S. 2001. Detailed shape representation with parallax mapping. In Proceedings of ICAT 2001, 205-208.

10. Lounsbery, M., DeRose, T. D., and Warren, J. 1997. Multiresolution analysis for surfaces of arbitrary topological type. ACM Trans. Graph. 16, 1 (Jan. 1997), 34-73.

11. OpenGL Cube Map Texturing. http://developer.nvidia.com/object/cube_map_ogl_tutorial.html

12. Policarpo, F., Oliveira, M. M., and Comba, J. L. 2005. Real-time relief mapping on arbitrary polygonal surfaces. In Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games (Washington, District of Columbia, April 3-6, 2005). SI3D '05. ACM Press, New York, NY, 155-162.

13. Porter, T. and Duff, T. 1984. Compositing digital images. In Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, H. Christiansen, Ed. SIGGRAPH '84. ACM Press, New York, NY, 253-259.

14. Sander, P. V., Snyder, J., Gortler, S. J., and Hoppe, H. 2001. Texture mapping progressive meshes. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '01. ACM Press, New York, NY, 409-416.

15. Tomlinson, B. 2005. From linear to interactive animation: how autonomous characters change the process and product of animating. Comput. Entertain. 3, 1 (Jan. 2005), 5-5.

16. Tumblin, J. and Rushmeier, H. E. 1993. Tone reproduction for realistic computer generated images. IEEE Computer Graphics & Applications 13, 6 (Nov. 1993), 42-48.

17. Williams, L. 1983. Pyramidal parametrics. In Proceedings of the 10th Annual Conference on Computer Graphics and Interactive Techniques (Detroit, Michigan, United States, July 25-29, 1983). P. P. Tanner, Ed. SIGGRAPH '83. ACM Press, New York, NY, 1-11.

2003-2013 AnimationTrip