Geometry pipelines

Source: Wikipedia, the free encyclopedia.

Geometric manipulation of modelling primitives, such as that performed by a geometry pipeline, is the first stage in computer graphics systems that generate images from geometric models. Originally carried out in software, this stage became increasingly amenable to dedicated hardware with the advent of very-large-scale integration (VLSI) in the early 1980s. A device called the Geometry Engine, developed by Jim Clark and Marc Hannah at Stanford University in about 1981, was the watershed for what has since become an increasingly commoditized function in contemporary image-synthetic raster display systems.[1][2]

Geometric transformations are applied to the vertices of polygons, or of other modelling primitives, as part of this first stage. Related computations transform the primitives' surface normals, and then perform the lighting and shading computations used in their subsequent rendering.
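As a concrete illustration of this stage, the following sketch applies a modelling transformation to a triangle's vertices and to its surface normal. It is a minimal example in Python; all function and variable names are invented for illustration and do not come from the article or any particular graphics API.

```python
# Minimal sketch of the geometric stage of a rendering pipeline.
# All names are illustrative; none refer to a specific graphics API.
import math

def mat_vec_mul(m, v):
    """Multiply a 4x4 row-major matrix (list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def rotation_y(angle):
    """A typical modelling transformation: rotation about the Y axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[  c, 0.0,   s, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [ -s, 0.0,   c, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def transform_primitive(vertices, normal, model_view):
    """Transform a primitive's vertices (points, w = 1) and its surface
    normal (direction, w = 0).  Using the same matrix on the normal is
    only valid for rigid transforms such as this rotation; a general
    transform would need the inverse transpose of its upper 3x3 block."""
    new_vertices = [mat_vec_mul(model_view, [x, y, z, 1.0]) for x, y, z in vertices]
    nx, ny, nz, _ = mat_vec_mul(model_view, list(normal) + [0.0])
    return new_vertices, (nx, ny, nz)

# One triangle in the XY plane, facing +Z, rotated 30 degrees about Y.
triangle = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
verts, n = transform_primitive(triangle, (0.0, 0.0, 1.0), rotation_y(math.radians(30.0)))
print(verts)   # transformed positions, still in homogeneous form
print(n)       # rotated surface normal
```

Fixed-function geometry hardware performs essentially this arithmetic, but applied to many primitives in parallel and followed by lighting, clipping and projection.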

History

Hardware implementations of the geometry pipeline were introduced in the early graphics workstations of the 1980s, most notably those of Silicon Graphics (SGI), the company Jim Clark co-founded to commercialize the Geometry Engine. These early products performed only the viewing transformations, with all the lighting and shading handled by a separate hardware stage. In later, much higher-performance systems, such as SGI's RealityEngine, geometry processors began to be applied to part of the rendering support as well.
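The division of labour described above, with the viewing transformation in one stage and lighting in another, can be sketched as two independent passes over the same primitive. The Python below is a minimal, self-contained illustration under that assumption; the flat Lambertian model and all names are chosen for this example and are not taken from the article or from any specific hardware.

```python
# Illustrative two-stage split: a viewing-transformation stage followed by a
# separate lighting/shading stage.  All names are invented for this sketch.
import math

def viewing_stage(vertices, view):
    """Stage 1: apply a 4x4 row-major viewing transformation to each vertex
    (w = 1); no lighting is computed here."""
    return [[sum(view[r][c] * (list(v) + [1.0])[c] for c in range(4))
             for r in range(4)]
            for v in vertices]

def lighting_stage(normal, light_dir, base_color):
    """Stage 2: flat Lambertian (diffuse) shading for one primitive,
    evaluated independently of the viewing transformation."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    n_len = math.sqrt(sum(n * n for n in normal))
    l_len = math.sqrt(sum(l * l for l in light_dir))
    intensity = max(0.0, dot / (n_len * l_len))
    return tuple(c * intensity for c in base_color)

identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(viewing_stage(tri, identity))                                       # transformed vertices
print(lighting_stage((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.8, 0.2, 0.2)))  # shaded colour
```

Keeping the two stages independent in this way mirrors the early hardware arrangement noted above, in which a transform unit fed a separate lighting and shading unit.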

More recently, perhaps dating from the late 1990s, the hardware support required to manipulate and render quite complex scenes has become accessible to the consumer market. Companies such as Nvidia and ATI introduced consumer graphics processors with on-board transform and lighting, whereas earlier consumer hardware from 3Dfx, Matrox and others relied on the CPU for geometry processing.

This subject matter is part of the technical foundation of modern computer graphics, and is taught comprehensively at both the undergraduate and graduate levels as part of a computer science education.

References