Until about five years ago, hardware-accelerated interactive computer graphics and procedural shading were considered opposite ends of the speed/realism spectrum. Rendering hardware uses parallel computational resources to render tens of millions of triangles per second and write billions of pixels per second; its uses include computer-aided design, flight simulators, games, medical and scientific visualization, and virtual reality. Procedural shading lets a programmer or artist describe the color and shading of a surface as a function written in a special-purpose high-level “shading language”; its uses in software renderers have included commercials and film. Because procedural shaders are literally short programs, they are known for their detail and complexity, and for their ability to easily produce an infinite array of similar yet subtly differing surfaces.
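To make the idea concrete, here is a minimal sketch of a procedural shader, written in Python for illustration rather than in either system's actual shading language (real shading languages such as RenderMan SL have a similar flavor). The function name and parameters are hypothetical; the point is that the surface's appearance is computed by a short program, so changing a parameter yields a whole family of related surfaces.

```python
# Illustrative sketch only: a procedural "stripe" shader. A shading
# language would evaluate something like this at every visible point.

def stripe_shader(s, width=0.1,
                  color_a=(1.0, 0.0, 0.0),   # stripe color (red)
                  color_b=(1.0, 1.0, 1.0)):  # background color (white)
    """Return an RGB color for surface coordinate s: alternating bands."""
    in_band_a = (s / width) % 2.0 < 1.0  # which stripe band s falls in
    return color_a if in_band_a else color_b
```

Varying `width` or the two colors produces subtly different surfaces from the same few lines of code, which is exactly the appeal the abstract describes.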
Recent advances in graphics hardware have brought us to a point where simple procedural shaders can be rendered interactively (though we have yet to reach the often-stated goal of “interactive Toy Story”). This talk describes two interactive shading language systems. The first is PixelFlow, a large-scale graphics hardware project completed in 1997 at the University of North Carolina at Chapel Hill. PixelFlow was the first graphics hardware system with a high-level shading language; it compiled shading code into operations for a custom SIMD processor array. The second is OpenGL Shader, a software product first released in 2000 by SGI. OpenGL Shader compiles shading language code into multiple rendering passes on existing graphics hardware, using either standard blending operations or recent low-level hardware shading extensions to perform the basic operations of the shading code. The talk will include examples of shading code and results from both systems, as well as details of the graphics hardware architectures and compilers behind them.
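The multi-pass idea can be sketched in miniature. The snippet below (a hedged illustration in Python, not the product's actual API) treats the framebuffer as an accumulator: each arithmetic operation in a shading expression becomes one rendering pass, combined with the previous contents via a blend mode, much as OpenGL's `glBlendFunc` settings combine a new fragment with the destination pixel.

```python
# Illustrative sketch: simulating how a shading expression such as
#   lightcolor * texcolor + highlight
# could be decomposed into one rendering pass per operation, with the
# framebuffer blend mode doing the arithmetic.

def run_passes(passes):
    """Each pass is (blend_op, value); fold them into one framebuffer value."""
    fb = 0.0
    for op, value in passes:
        if op == "replace":      # first pass overwrites the framebuffer
            fb = value
        elif op == "multiply":   # cf. glBlendFunc(GL_DST_COLOR, GL_ZERO)
            fb = fb * value
        elif op == "add":        # cf. glBlendFunc(GL_ONE, GL_ONE)
            fb = fb + value
        else:
            raise ValueError(f"unknown blend op: {op}")
    return fb

# lightcolor * texcolor + highlight, rendered as three passes:
result = run_passes([("replace", 0.8), ("multiply", 0.5), ("add", 0.25)])
# arithmetic: 0.8 * 0.5 + 0.25 = 0.65
```

A real compiler must also schedule values that do not fit this strictly left-to-right form (e.g., by rendering subexpressions into texture memory), but the accumulate-per-pass pattern above is the core trick of mapping shading code onto fixed blending hardware.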