Colors, Illumination, Shading
Light intensity
The color of a ray of light depends on its wavelength. The color of a beam
of light depends on its wavelength composition. An average human can see
electromagnetic waves with wavelengths from 400 to 700 nanometers (nm). Light
with a wavelength of 400 nm appears violet, and light with a wavelength of
700 nm appears red. The perception of color in general depends on the
wavelength distribution of the luminous energy. If all wavelengths are
present, we see white.
Color Models
Recall that the color of light depends on its wavelength distribution -- also
called its spectrum. The visual effect of a spectral distribution of light can
be described using three quantities:
- Dominant wavelength: each distribution is equivalent to one with energy
  e_2 at a single wavelength while all other wavelengths occur at energy
  e_1, with e_2 >= e_1.
- Purity of the spectrum: proportional to e_2 - e_1.
- Luminance: the total energy in the spectrum -- the area under the
  spectral curve.
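As a rough sketch of these definitions for the idealized equivalent spectrum
(a floor of energy e_1 at every wavelength plus a spike of energy e_2 at the
dominant wavelength; the function names, sample values, and the 400-700 nm
integration range are illustrative choices, not standard):

    # Sketch of two of the descriptors for the idealized equivalent spectrum.

    def purity(e1, e2):
        # Purity is proportional to the height of the spike above the floor.
        return e2 - e1

    def luminance(e1, e2, lo=400.0, hi=700.0):
        # Luminance is the area under the spectral curve; treating the spike
        # as negligibly narrow, it is a rectangle of height e1 over [lo, hi].
        return e1 * (hi - lo)

    print(purity(0.2, 0.9))     # 0.7 -- a fairly pure color
    print(luminance(0.2, 0.9))  # 60.0 -- total energy over 400-700 nm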
It is common to describe colored light using three parameters that are more
intuitive than the spectral description above. Red, Green, and Blue intensities
are the most popular. The RGB scheme matches the tri-stimulus theory of color
perception well. This theory says that there exist three types of color sensors
(called cones) on the human retina; the three types of cones respond primarily
to red, green, and blue light, respectively. Unfortunately, the RGB scheme
requires negative intensities to describe some colors. A better triple of
parameters is part of the CIE standard: X, Y, and Z.
Any light, C, can be described as (X, Y, Z),
where C = X·X' + Y·Y' + Z·Z' and X', Y', Z' are the three CIE primaries.
We often use a normalized description
(x, y, z), where x = X/(X+Y+Z), y = Y/(X+Y+Z), z = Z/(X+Y+Z).
Note that x + y + z = 1,
hence we do not need all three. However, x, y, and z alone cannot be used to
recover X, Y, and Z, since the normalization discards the total magnitude.
In practice, we represent light as (x, y, Y); keeping Y lets us recover X and Z.
There exists a transformation from RGB color space to XYZ color space. XYZ was
designed to be standard across all platforms. RGB, which is also typically
normalized so that each of the R, G, and B intensity values lies in the range
[0,1], is CRT dependent: the same triple of red, green, and blue intensities
can generate different colors on different monitors.
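As a concrete sketch: one commonly used RGB-to-XYZ matrix (the one for sRGB
primaries under a D65 white point; a different monitor needs a different
matrix), followed by the (x, y, Y) normalization described above:

    # Sketch: linear RGB -> XYZ with the sRGB/D65 matrix, then (x, y, Y).

    def rgb_to_xyz(r, g, b):
        X = 0.4124 * r + 0.3576 * g + 0.1805 * b
        Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
        return X, Y, Z

    def xyz_to_xyY(X, Y, Z):
        s = X + Y + Z
        # Keeping Y lets us recover X = x*Y/y and Z = (1 - x - y)*Y/y later.
        return X / s, Y / s, Y

    print(xyz_to_xyY(*rgb_to_xyz(1.0, 1.0, 1.0)))  # white: x and y near 0.33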
Other color models are used by many applications. Cyan, Magenta, and Yellow
(CMY) is a popular model for printing devices, where C = 1-R, M = 1-G, and
Y = 1-B. This is a subtractive model: if we deposit a cyan pigment on paper,
it absorbs the red component and reflects the green and blue components. If we
deposit cyan and magenta, only the blue component is reflected. If we deposit
all three, no component is reflected and we see black.
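A minimal sketch of the CMY conversion:

    # Sketch of the subtractive CMY model: each pigment subtracts its
    # complementary additive component from white.

    def rgb_to_cmy(r, g, b):
        return 1.0 - r, 1.0 - g, 1.0 - b

    print(rgb_to_cmy(1.0, 0.0, 0.0))  # red -> (0, 1, 1): magenta + yellow
    print(rgb_to_cmy(0.0, 0.0, 0.0))  # black -> (1, 1, 1): all three pigments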
For painting-like applications, the Hue, Saturation, Value (HSV) model is popular.
The HSV space can be visualized by looking at the RGB cube at the point (1,1,1)
along the direction (-1, -1, -1), i.e., down towards the corner (0,0,0). A
hexagon is visible. The color vector in HSV space can be described by
its projected direction on the hexagon (H), the length of the projection (S)
and the height of the vector (V).
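A sketch of the usual hexcone RGB-to-HSV conversion, in the common max/min
formulation (hue in degrees around the hexagon):

    # Sketch: V is the height along the cube diagonal, S the relative
    # distance from the gray axis, H the position around the hexagon.

    def rgb_to_hsv(r, g, b):
        v = max(r, g, b)
        c = v - min(r, g, b)               # chroma
        s = 0.0 if v == 0 else c / v
        if c == 0:
            h = 0.0                        # gray: hue undefined, pick 0
        elif v == r:
            h = 60.0 * (((g - b) / c) % 6)
        elif v == g:
            h = 60.0 * ((b - r) / c + 2)
        else:
            h = 60.0 * ((r - g) / c + 4)
        return h, s, v

    print(rgb_to_hsv(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0): pure red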
Illumination/Lighting models
An illumination model captures the behavior of light in an environment of
light sources and geometric objects. In practice, we use a combination of
several models. The simplest model is ambient light -- an all-pervasive light.
The intensity I_a of ambient light remains constant throughout the
environment. Each object is assigned material properties such as coefficients
of reflectivity. The ambient color of a point on an object with ambient
reflection coefficient k_a is the intensity of light reflected from it, which
is k_a I_a. (All equations in this section are described in
terms of a single intensity -- for colored lights, we perform the same operation
on each of the component intensities, e.g., Red, Green, and Blue.)
The diffuse reflectance model (also called Lambertian reflectance) models matte
surfaces. Light falling on a Lambertian surface reflects equally in all
directions. The incident light, of intensity I_d,
can be directional or may be emitted from a point source. In either
case, it has a direction at each point on a surface. The amount of light falling
on a surface depends on the surface's orientation with respect to the direction
of light. A piece of surface perpendicular to the light gets more light than
another piece of the same area that is oblique to the light. The amount of light
falling on a small area dA is given by I_d cos(θ), where θ is the angle
between N, the normal to dA, and L, the direction to the light from dA. The
intensity of reflected light is k_d I_d cos(θ) = k_d I_d (N·L), where k_d is
the diffuse reflection coefficient of dA. (We assume all vectors here are unit
vectors.) Note that the color of any point depends on the direction of light
but is independent of the position of the view-point.
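A minimal sketch of the diffuse term, with unit vectors as plain 3-tuples; the
clamp to zero is a standard guard for back-facing surfaces, not written in the
formula above:

    # Sketch of the diffuse term k_d * I_d * (N . L).

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def diffuse(kd, Id, N, L):
        return kd * Id * max(0.0, dot(N, L))

    print(diffuse(0.8, 1.0, (0, 1, 0), (0, 1, 0)))        # light overhead: 0.8
    print(diffuse(0.8, 1.0, (0, 1, 0), (0.866, 0.5, 0)))  # 60 degrees off: 0.4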
A third model is used for shiny surfaces: the specular reflectance model.
Shiny surfaces like mirrors reflect light in a very specific direction, R,
such that the angle of incidence of light equals the angle of reflection:
R = 2N(N·L) - L. Phong's illumination model assumes that the amount of light
is maximum in the direction R and falls off rapidly away from it. According to
this model, the intensity seen by the viewer is proportional to cos(β),
where β is the angle between R and V, V being the vector in the direction
of the viewer. Note that a single ray of light would be reflected off a perfect
mirror exactly in the direction R. However, if the reflecting surface is not
perfectly smooth, the reflected ray will lie in a direction close to R.
Statistically, the rays in a beam of light reflected off a small area dA
would be distributed about R, with most rays along R. The intensity of the
reflected beam falls off rapidly away from R; the shinier the surface, the
faster the fall-off.
According to Phong's model, the intensity of reflected light seen by the viewer
is k_s I_s (V·R)^n, where n is an arbitrary parameter
describing the shininess of the material, k_s is the specular coefficient of
reflectivity, and I_s is the intensity of incident light.
Note that k_s is, in fact, a
function of θ, as surfaces appear shinier at grazing angles. We sometimes
ignore that effect and use a constant k_s for a material type.
We can combine the three models to get a more comprehensive lighting function:
k_a I_a + k_d I_d (N·L) + k_s I_s (V·R)^n
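A sketch combining the three terms, assuming unit vectors; the max(0, ·)
clamps are standard guards added here, not part of the formula as written:

    # Sketch of ambient + diffuse + specular.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def reflect(N, L):
        d = dot(N, L)                      # R = 2N(N.L) - L
        return tuple(2 * d * n - l for n, l in zip(N, L))

    def phong(ka, Ia, kd, Id, ks, Is, n, N, L, V):
        R = reflect(N, L)
        return (ka * Ia
                + kd * Id * max(0.0, dot(N, L))
                + ks * Is * max(0.0, dot(V, R)) ** n)

    # Viewer along the mirror direction of an overhead light: full highlight.
    print(phong(0.1, 1.0, 0.6, 1.0, 0.3, 1.0, 50,
                (0, 1, 0), (0, 1, 0), (0, 1, 0)))  # 0.1 + 0.6 + 0.3 = 1.0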
Sometimes, we modify Phong's model for efficiency as follows. Consider H, the
vector midway between L and V (i.e., in the direction (L+V)/2). If the normal N
to the surface is in the same direction as H, then V lies in the same direction
as R, thus receiving maximum light. As N moves away from H, V moves away from R.
Note, however, that the angle between N and H is not always the same as that
between V and R. But using (N·H)^n instead of (V·R)^n achieves a similar effect
and is less expensive, as H remains almost constant throughout a surface for a
far-away light source and a far-away viewer.
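A sketch of this halfway-vector variant; the helper names are mine:

    # Sketch: (N.H)^n replaces (V.R)^n, with H the unit vector along L + V.

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def halfway(L, V):
        h = tuple(l + v for l, v in zip(L, V))
        m = math.sqrt(dot(h, h))
        return tuple(x / m for x in h)

    def specular_blinn(ks, Is, n, N, L, V):
        return ks * Is * max(0.0, dot(N, halfway(L, V))) ** n

    # For a directional light and a distant viewer, H is computed once and
    # reused across the whole surface.
    print(specular_blinn(0.3, 1.0, 50, (0, 1, 0), (0, 1, 0), (0, 1, 0)))  # 0.3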
To simulate spot light sources, we employ the same technique as Phong's. (This
model is due to Warn.) The intensity of the spot light source is deemed to fall
off as cos^s(γ), where γ is the angle between the light's (primary) direction
and the direction from the light to the illuminated point, and s controls the
spread of the spot light: the larger the s, the narrower the spot.
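A sketch of this falloff, assuming unit direction vectors (the clamp keeps
points behind the light dark; the names are illustrative):

    # Sketch of the Warn spot-light falloff cos^s(gamma): D is the light's
    # unit primary direction, P_dir the unit direction from the light to the
    # illuminated point.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def spot_intensity(I, D, P_dir, s):
        return I * max(0.0, dot(D, P_dir)) ** s

    # A point 30 degrees off-axis under a fairly tight spot (s = 8):
    print(spot_intensity(1.0, (0, -1, 0), (0.5, -0.866, 0), 8))  # about 0.32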
The intensity of light also falls off as it travels: for a point source it
decreases, physically, as the inverse square of d, the distance the light
travels. We can capture that effect by incorporating into our formula the
distance between the light source and the object (and, in principle, that
between the object and the view-point). An attenuation factor of

f(d) = min(1, 1/(a·d^2 + b·d + c))

for arbitrary constants a, b, and c works well in practice:

k_a I_a + f(d) [ k_d I_d (N·L) + k_s I_s (V·R)^n ]
For multiple light sources, we just add up the intensity contribution of each
light i:

k_a I_a + Σ_i f(d_i) [ k_d I_d,i (N·L_i) + k_s I_s,i (V·R_i)^n ]
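A sketch of this multi-light formula with attenuation; for simplicity, each
light here carries a single intensity used for both the diffuse and specular
terms, and the constants a, b, c are arbitrary tuning values:

    # Sketch: ambient term plus an attenuated sum over the lights.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def attenuation(d, a=0.02, b=0.01, c=1.0):
        # f(d) = min(1, 1/(a*d^2 + b*d + c))
        return min(1.0, 1.0 / (a * d * d + b * d + c))

    def shade(ka, Ia, kd, ks, n, N, V, lights):
        total = ka * Ia                     # ambient term is not attenuated
        for Ii, L, R, d in lights:          # per-light intensity, L, R, distance
            total += attenuation(d) * (kd * Ii * max(0.0, dot(N, L))
                                       + ks * Ii * max(0.0, dot(V, R)) ** n)
        return total

    lights = [(1.0, (0, 1, 0), (0, 1, 0), 5.0)]  # one overhead light at d = 5
    print(shade(0.1, 1.0, 0.6, 0.3, 50, (0, 1, 0), (0, 1, 0), lights))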
Shading / Interpolation
An accurate method of computing the color at each pixel would be to apply the
illumination model to each point on a surface. Of course, there are an infinite
number of such points, but we can compute the color of each point (or a point
on the small area) that projects onto a pixel. This means that at rasterization
time we need not only the camera-space position of points on the surface but
also the normal to the surface at each point. This normal may be different for
different points on a polygon if the polygon is an approximation of a part of
a smooth surface. A less expensive option is to evaluate the color at a few
points on the surface and perform interpolation to find the color at each
intermediate point. Two interpolation schemes (also called shading models)
are popular -- the Gouraud shading model and the Phong shading model (not to
be confused with the Phong illumination model).
In Gouraud's scheme we compute the color of each vertex of a polygon using
an illumination model. At rasterization time, these color (i.e., intensity)
values are linearly interpolated at each pixel position. The interpolation
between two colors c_1 and c_2 is (1-t)·c_1 + t·c_2, for
values of t varying from 0 to 1. We can compute the desired value of
t from the known interpolants, e.g., the x or y pixel coordinates.
In particular, the intensity I_4 at y = y_4 along the left edge (of a
span) from point (x_2, y_2) with intensity I_2
to point (x_1, y_1) with intensity I_1 is:

I_4 = I_1 (y_4 - y_2)/(y_1 - y_2) + I_2 (y_1 - y_4)/(y_1 - y_2)

Interpolation along a span can be done similarly, this time using x values
instead of y.
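A sketch of this edge interpolation (note that (y_1 - y_4)/(y_1 - y_2) equals
1 - t when t = (y_4 - y_2)/(y_1 - y_2)):

    # Sketch: intensity I4 at scanline y4 on the edge from (x2, y2) with
    # intensity I2 to (x1, y1) with intensity I1.

    def interp_edge(I1, y1, I2, y2, y4):
        t = (y4 - y2) / (y1 - y2)     # (y1 - y4)/(y1 - y2) is then 1 - t
        return I1 * t + I2 * (1 - t)

    # Halfway up the edge we get the average of the endpoint intensities:
    print(interp_edge(1.0, 10, 0.2, 0, 5))  # 0.6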
Phong's shading scheme interpolates normal vectors instead of colors. The
interpolation of normals is similar to the interpolation of colors: we just
apply the interpolation to each coordinate of the normal vectors. Once we have
a normal, we can apply an illumination model at each pixel. Note that the
interpolated normal need not have unit length; re-normalization
is necessary before using the formulae described above.
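A sketch of this per-pixel step:

    # Sketch of Phong shading's normal interpolation: interpolate the vertex
    # normals coordinate-wise, then re-normalize to unit length.

    import math

    def lerp_normal(N1, N2, t):
        n = tuple((1 - t) * a + t * b for a, b in zip(N1, N2))
        m = math.sqrt(sum(x * x for x in n))   # generally not 1
        return tuple(x / m for x in n)

    # Halfway between two normals 90 degrees apart:
    print(lerp_normal((1, 0, 0), (0, 1, 0), 0.5))  # ~(0.707, 0.707, 0)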
While Phong's scheme is expensive, it generates highlights more accurately. If
the highlight does not fall on a vertex, Gouraud shading can miss it completely.
Even if the highlight falls on a vertex, Gouraud shading smears it out. It gets worse:
for an animation using a sequence of (moving) view-points, highlights can appear
and disappear.
However, both techniques are less accurate than per-pixel application of the
illumination model. Interpolation suffers from perspective foreshortening and
dependence on the orientation of vertices on screen. Furthermore, polygons
that are adjacent to more than one (smaller) polygon create T-joints along
edges -- resulting in shading discontinuities.