Developed by Yoshihiro Mizutani and Kurt Reindel
Environment mapping is a technique that approximates the results of ray tracing. Because environment mapping is performed using texture mapping hardware, it can produce global reflection and lighting results in real time.
Environment mapping is essentially the process of pre-computing a texture map and then sampling texels from that texture while rendering a model. The texture map is a projection of 3D space onto 2D space. There are infinitely many ways to project a 3D surface onto a 2D surface, but we shall limit our discussion to the following three methods.
The intended use of environment mapping is to simulate reflections or lighting upon objects without going through expensive ray-tracing or lighting calculations. We accomplish this objective by generating scenes using a two-pass approach.
After the six images are loaded into texture memory, they can either be sampled to generate an environment map or sampled directly to texture map a model. The process of creating an environment map can be imagined as projecting the six sides of a cube onto a sphere, and then flattening the sphere into a 2D map. See Figure 1.
Fig. 1 Mapping cube onto a sphere
When applying an environment map to a model, texture coordinates are needed at each vertex. The UV coordinates must be calculated using the same 3D-to-2D mapping used to generate the environment map. The geometric positions of the model's vertices and the normal directions at those vertices are used to compute a reflection vector. The view vector usually runs from the origin of the eye coordinate system to a vertex of the model. If V is the view vector and N the normal at the vertex, both in eye coordinates, then the reflection vector R at the vertex is:
Eq. 1   R = V - 2(N·V)N

Fig. 2 Computing the reflection vector
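The reflection computation can be sketched in C as follows. This is a minimal illustration assuming N is unit length; the vec3 type and function names are ours, not part of any OpenGL API:

```c
#include <math.h>

/* Simple 3-component vector; the type and names are illustrative only. */
typedef struct { double x, y, z; } vec3;

static double dot3(vec3 a, vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Eq. 1: R = V - 2(N·V)N, with N a unit-length normal in eye coordinates. */
static vec3 reflect3(vec3 v, vec3 n) {
    double k = 2.0 * dot3(n, v);
    vec3 r = { v.x - k * n.x, v.y - k * n.y, v.z - k * n.z };
    return r;
}
```

For example, a view vector looking straight down the negative z axis at a surface whose normal is the positive z axis reflects straight back along positive z.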
All environment mapping techniques have their strengths and weaknesses. Due to differences in the way the maps are generated, the quality of the images generated using the maps varies significantly. The following is a brief comparison:
Eq. 2 OpenGL Spherical 3D to 2D projection:

  m = 2 * sqrt(Rx^2 + Ry^2 + (Rz + 1)^2)
  u = Rx / m + 1/2
  v = Ry / m + 1/2
where R is the reflection vector in eye coordinates. This formula maps the entire surface of the sphere within the circle inscribed in the 1.0x1.0 texture square. Figure 3 shows the cross section of a sphere onto which the environment is mapped. In this example, the eye point is placed at the right-hand side of the figure. The white points on the perimeter of the circle are mapped to the green points in the 2D environment map; each point on the sphere's cross section is connected to its point on the plane by a white line. Figure 4 shows an orthogonal view of the texture plane referred to in Figure 3. The inside circle represents the front half of the sphere, which is the right half of Figure 3; the outside circle represents the back half of the sphere, which is the left half of Figure 3.
Fig. 3 Fig. 4
An example reflective GL spherical map; teapot textured with GL specular and reflective maps
An example diffuse-lit teapot; diffusely lit teapot textured with specular and reflective GL maps
Here the sphere is mapped to a single latitude-longitude texture map. The map's U coordinate represents longitude (from 0 to 360 degrees) and its V coordinate represents latitude (from -90 to 90 degrees). The surface of the sphere is mapped from 3D to 2D with the following formula:
Eq. 3 Latitude-longitude 3D to 2D projection:

  longitude = atan2(Ry, Rx)    (0 to 360 degrees)
  latitude  = asin(Rz)         (-90 to 90 degrees)
  u = longitude / 360
  v = (latitude + 90) / 180

Fig. 5
Pros of Latitude Mapping
An example Reflective Latitude Map, teapot textured with reflective and specular Latitude maps
An example diffuse lit teapot, Latitude Specular Map, and diffusely lit teapot textured with reflective and specular Latitude maps
Cube Mapping is the technique of rendering a model from samples taken directly from the six source textures; no intermediate environment map is created or sampled. An imaginary cube envelops the model, each face textured with one of the source images. Each face of the imaginary cube is represented by an infinite plane whose normal passes through the face at UV coordinates (0.5, 0.5). UV coordinates are computed by scaling the reflection vector's tangential components by a constant over its dot product with each face normal; reflection vectors collinear with a face normal map to (0.5, 0.5).
This can be implemented using OpenGL with the wrapping mode set to GL_CLAMP. UV coordinates that fall outside the (0,0) to (1,1) range cause the triangle to pick up the GL_TEXTURE_BORDER_COLOR (0,0,0,0). The triangles are textured using up to three face textures and blended into the frame buffer with the blend function's source factor set to GL_SRC_ALPHA and its destination factor set to GL_ONE.
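The face selection and projection described above can be sketched as follows. This is a minimal illustration; the face ordering and in-face UV sign conventions are our assumptions, not taken from the article or from any OpenGL API:

```c
#include <math.h>

/* Pick the cube face whose normal is most aligned with the reflection
 * vector R (the axis with the largest absolute component), then project
 * R onto that face: divide the two tangential components by the dominant
 * one and offset by 0.5. A reflection vector collinear with a face
 * normal yields (0.5, 0.5). Face indices 0..5 and the UV sign
 * conventions here are illustrative assumptions. */
static int cube_map_uv(double rx, double ry, double rz,
                       double *u, double *v) {
    double ax = fabs(rx), ay = fabs(ry), az = fabs(rz);
    if (ax >= ay && ax >= az) {            /* +X face 0, -X face 1 */
        *u = ry / ax * 0.5 + 0.5;
        *v = rz / ax * 0.5 + 0.5;
        return rx >= 0.0 ? 0 : 1;
    } else if (ay >= az) {                 /* +Y face 2, -Y face 3 */
        *u = rx / ay * 0.5 + 0.5;
        *v = rz / ay * 0.5 + 0.5;
        return ry >= 0.0 ? 2 : 3;
    } else {                               /* +Z face 4, -Z face 5 */
        *u = rx / az * 0.5 + 0.5;
        *v = ry / az * 0.5 + 0.5;
        return rz >= 0.0 ? 4 : 5;
    }
}
```

Dividing by the dominant component keeps u and v within [0, 1] on the selected face; vectors near a cube edge land near the face boundary, which is where the GL_CLAMP border blending described above takes over.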
Pros of Cube Mapping
Cons of Cube Mapping
Six textures displayed as an unfolded cube, Cube Mapped teapot