Shader
In computer graphics, a shader is a computer program that calculates the appropriate levels of light, darkness, and color during the rendering of a 3D scene, a process known as shading.[1]
Traditional shaders calculate rendering effects on graphics hardware with a high degree of flexibility. Most shaders are coded for, and run on, a graphics processing unit (GPU). Shading languages are used to program the GPU's rendering pipeline, which has largely superseded the fixed-function pipeline of the past and allows customized effects to be used.
Shaders are used widely in cinema post-processing, computer-generated imagery, and video games to produce a range of effects, from basic lighting and shadowing to blur, bloom, bump mapping, cel shading, and edge detection.
History
This use of the term "shader" was introduced to the public by Pixar with version 3.0 of their RenderMan Interface Specification, originally published in May 1988.[2]
As graphics processing units evolved, major graphics software libraries such as OpenGL and Direct3D began to support shaders.[3]
Design
Shaders are simple programs that describe the traits of either a vertex or a pixel. Vertex shaders describe the attributes (position, texture coordinates, colors, etc.) of a vertex, while pixel shaders describe the traits (color, z-depth and alpha value) of a pixel. A vertex shader is called for each vertex in a primitive (possibly after tessellation); thus one vertex in, one (updated) vertex out. Each vertex is then rendered as a series of pixels onto a surface (block of memory) that will eventually be sent to the screen.
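As a minimal sketch in GLSL (attribute and variable names here are illustrative, not from any particular application), a vertex shader and a pixel (fragment) shader for drawing colored geometry might look like this:

```glsl
#version 330 core
// Vertex shader: one vertex in, one (updated) vertex out.
layout(location = 0) in vec3 position;  // per-vertex attribute
layout(location = 1) in vec3 color;     // per-vertex attribute
out vec3 vColor;                        // interpolated across the primitive

void main() {
    vColor = color;
    gl_Position = vec4(position, 1.0);  // clip-space position
}
```

```glsl
#version 330 core
// Fragment (pixel) shader: computes the color of one fragment.
in vec3 vColor;        // interpolated from the primitive's vertices
out vec4 fragColor;    // RGBA value written toward the frame buffer

void main() {
    fragColor = vec4(vColor, 1.0);  // alpha = 1.0 (opaque)
}
```

The vertex shader runs once per vertex; the fragment shader then runs once per covered fragment, receiving the interpolated `vColor`.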
Shaders replace a section of the graphics hardware typically called the Fixed Function Pipeline (FFP), so-called because it performs lighting and texture mapping in a hard-coded manner. Shaders provide a programmable alternative to this hard-coded approach.[4]
The basic graphics pipeline is as follows:
- The CPU sends instructions (compiled shading language programs) and geometry data to the graphics processing unit, located on the graphics card.
- Within the vertex shader, the geometry is transformed.
- If a geometry shader is present in the graphics processing unit and active, it can modify the geometry in the scene.
- If a tessellation shader is in the graphics processing unit and active, the geometries in the scene can be subdivided.
- The calculated geometry is triangulated (subdivided into triangles).
- Triangles are broken down into fragment quads (one fragment quad is a 2 × 2 fragment primitive).
- Fragment quads are modified according to the fragment shader.
- The depth test is performed; fragments that pass are written to the frame buffer and might be blended with the values already there.
The graphics pipeline uses these steps in order to transform three-dimensional (or two-dimensional) data into useful two-dimensional data for displaying. In general, this is a large pixel matrix or "frame buffer".
Types
There are three types of shaders in common use (pixel, vertex, and geometry shaders), with several more recently added. While older graphics cards utilize separate processing units for each shader type, newer cards feature unified shaders that can execute any type of shader, allowing the graphics card to make more efficient use of its processing power.
2D shaders
2D shaders act on digital images, also called textures in the field of computer graphics. They modify attributes of pixels. 2D shaders may take part in rendering 3D geometry. Currently the only type of 2D shader is a pixel shader.
Pixel shaders
Pixel shaders, also known as fragment shaders, compute color and other attributes of each "fragment": a unit of rendering work affecting at most a single output pixel.[5] Pixel shaders range from always outputting the same color, to applying a lighting value, to doing bump mapping, shadows, specular highlights, translucency, and other phenomena. They can also alter the depth of a fragment for Z-buffering, or output more than one color if multiple render targets are active.
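A common 2D use is full-screen post-processing. The following GLSL fragment shader sketch (uniform and varying names are illustrative) converts an already-rendered image to grayscale:

```glsl
#version 330 core
// Post-processing pixel shader: grayscale conversion of a rendered image.
in vec2 vTexCoord;                // interpolated texture coordinate
out vec4 fragColor;
uniform sampler2D screenTexture;  // the previously rendered frame

void main() {
    vec3 rgb = texture(screenTexture, vTexCoord).rgb;
    // Weighted sum approximating perceived brightness (Rec. 709 luma weights).
    float luma = dot(rgb, vec3(0.2126, 0.7152, 0.0722));
    fragColor = vec4(vec3(luma), 1.0);
}
```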
3D shaders
3D shaders act on 3D models or other geometry but may also access the colors and textures used to draw the model or mesh.
Vertex shaders
Vertex shaders are the most established and common kind of 3D shader and are run once for each vertex given to the graphics processor. The purpose is to transform each vertex's 3D position in virtual space to the 2D coordinate at which it appears on the screen, as well as a depth value for the Z-buffer.[6] Vertex shaders can manipulate properties such as position, color, and texture coordinates, but cannot create new vertices.
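The classic vertex-shader task described above, transforming a model-space position into screen (clip) space, can be sketched in GLSL as follows; the matrix uniform names are illustrative conventions, not mandated by the API:

```glsl
#version 330 core
// Vertex shader: the standard model-view-projection transform.
layout(location = 0) in vec3 position;  // model-space vertex position
uniform mat4 model;        // object -> world space
uniform mat4 view;         // world -> camera space
uniform mat4 projection;   // camera -> clip space

void main() {
    // The resulting clip-space position also carries the depth value
    // later used for Z-buffering.
    gl_Position = projection * view * model * vec4(position, 1.0);
}
```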
Geometry shaders
Geometry shaders were introduced in Direct3D 10 and OpenGL 3.2, and were previously available in OpenGL 2.0+ through extensions.[7]
Geometry shader programs are executed after vertex shaders. They take as input a whole primitive, possibly with adjacency information. For example, when operating on triangles, the three vertices are the geometry shader's input. The shader can then emit zero or more primitives, which are rasterized and their fragments ultimately passed to a pixel shader.[8]
Typical uses of a geometry shader include point sprite generation, geometry tessellation, shadow volume extrusion, and single-pass rendering to a cube map.
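The first of those uses, point sprite generation, can be sketched in GLSL: each input point is expanded into a screen-aligned quad, so one primitive in yields four vertices out (the `size` uniform is an illustrative name):

```glsl
#version 330 core
// Geometry shader: expand each point into a quad (a "point sprite").
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform float size;   // half-width of the sprite in clip space
out vec2 uv;          // texture coordinate for the fragment shader

void main() {
    vec4 c = gl_in[0].gl_Position;  // the single input point
    // Emit the four corners as a triangle strip.
    uv = vec2(0.0, 0.0); gl_Position = c + vec4(-size, -size, 0.0, 0.0); EmitVertex();
    uv = vec2(1.0, 0.0); gl_Position = c + vec4( size, -size, 0.0, 0.0); EmitVertex();
    uv = vec2(0.0, 1.0); gl_Position = c + vec4(-size,  size, 0.0, 0.0); EmitVertex();
    uv = vec2(1.0, 1.0); gl_Position = c + vec4( size,  size, 0.0, 0.0); EmitVertex();
    EndPrimitive();
}
```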
Tessellation shaders
As of OpenGL 4.0 and Direct3D 11, a new shader class called a tessellation shader has been added. It adds two new shader stages to the traditional model: tessellation control shaders (also known as hull shaders) and tessellation evaluation shaders (also known as Domain Shaders), which together allow for simpler meshes to be subdivided into finer meshes at run-time according to a mathematical function. The function can be related to a variety of variables, most notably the distance from the viewing camera to allow active level-of-detail scaling. This allows objects close to the camera to have fine detail, while further away ones can have more coarse meshes, yet seem comparable in quality. It also can drastically reduce required mesh bandwidth by allowing meshes to be refined once inside the shader units instead of downsampling very complex ones from memory. Some algorithms can upsample any arbitrary mesh, while others allow for "hinting" in meshes to dictate the most characteristic vertices and edges.
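The camera-distance-based level-of-detail scaling described above can be sketched as a GLSL tessellation control (hull) shader. This is one possible heuristic, not a standard algorithm; the uniform names and the use of the patch's first vertex as a distance proxy are simplifying assumptions:

```glsl
#version 400 core
// Tessellation control shader: choose a subdivision level per patch,
// finer near the camera and coarser far away.
layout(vertices = 3) out;
uniform vec3 cameraPos;   // camera position in world space
uniform mat4 model;       // object -> world transform

void main() {
    // Pass the patch's control points through unchanged.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // Tessellation levels are per-patch, so set them once (invocation 0).
    if (gl_InvocationID == 0) {
        vec3 p = (model * gl_in[0].gl_Position).xyz;  // first vertex as proxy
        float dist  = distance(cameraPos, p);
        float level = clamp(64.0 / dist, 1.0, 64.0);  // nearer -> more triangles
        gl_TessLevelInner[0] = level;
        gl_TessLevelOuter[0] = level;
        gl_TessLevelOuter[1] = level;
        gl_TessLevelOuter[2] = level;
    }
}
```

A matching tessellation evaluation (domain) shader would then position the newly generated vertices, e.g. by displacing them with a height map.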
Primitive and Mesh shaders
Circa 2017, the AMD Vega microarchitecture added support for a new shader stage, primitive shaders, somewhat akin to compute shaders with access to the data necessary to process geometry.[9][10]
Nvidia introduced mesh and task shaders, also modelled after compute shaders, with its Turing microarchitecture in 2018.[11][12] Turing was the first GPU microarchitecture to support mesh shading through the DirectX 12 Ultimate API, several months before the Ampere-based RTX 30 series was released.[13]
In 2020, AMD and Nvidia released the RDNA 2 and Ampere microarchitectures, which both support mesh shading through DirectX 12 Ultimate.[14] These mesh shaders allow the GPU to handle more complex algorithms, offloading more work from the CPU to the GPU, and in algorithm-intensive rendering can increase the frame rate or the number of triangles in a scene by an order of magnitude.[15] Intel announced that Intel Arc Alchemist GPUs shipping in Q1 2022 would support mesh shaders.[16]
Ray tracing shaders
Compute shaders
Compute shaders are not limited to graphics applications, but use the same execution resources for GPGPU. They may be used in graphics pipelines, e.g., for additional stages in animation or lighting algorithms (such as tiled forward rendering).
Parallel processing
Shaders are written to apply transformations to a large set of elements at a time, for example, to each pixel in an area of the screen, or for every vertex of a model. This is well suited to parallel processing, and most modern GPUs have multiple shader pipelines to facilitate this, vastly improving computation throughput.
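This data-parallel model is most explicit in a compute shader: the same small program runs once per element, with many invocations executing concurrently. A minimal GLSL sketch (buffer layout and uniform name are illustrative):

```glsl
#version 430 core
// Compute shader: one invocation per array element, run in parallel.
layout(local_size_x = 64) in;          // 64 invocations per work group
layout(std430, binding = 0) buffer Data {
    float values[];                    // runtime-sized storage buffer
};
uniform float scale;

void main() {
    uint i = gl_GlobalInvocationID.x;  // unique index of this invocation
    if (i >= values.length()) return;  // guard against padding invocations
    values[i] = values[i] * scale;     // same transformation, every element
}
```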
A programming model with shaders is similar to a higher-order function for rendering, taking the shaders as arguments and providing a specific dataflow between intermediate results, which enables both data parallelism (across pixels, vertices, etc.) and pipeline parallelism (between stages).
Programming
The language in which shaders are programmed depends on the target environment. The official OpenGL and OpenGL ES shading language is OpenGL Shading Language, also known as GLSL, and the official Direct3D shading language is High Level Shader Language, also known as HLSL. Apple's Metal framework uses its own Metal Shading Language, and shaders for Vulkan are typically supplied in the SPIR-V intermediate representation.
GUI shader editors
Modern video game development platforms such as Unity, Unreal Engine and Godot increasingly include node-based editors that can create shaders without the need for actual code; the user is instead presented with a directed graph of connected nodes that routes various textures, maps, and mathematical functions into output values like the diffuse color, the specular color and intensity, roughness/metalness, height, normal, and so on. Automatic compilation then turns the graph into an actual, compiled shader.
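To make the compilation step concrete, a trivial three-node graph (a texture node and a color-constant node feeding a multiply node) corresponds roughly to a fragment shader like the following GLSL sketch; the exact output of any given engine's node compiler will differ, and the uniform names here are illustrative:

```glsl
#version 330 core
// Roughly what a "texture × color constant -> output" node graph compiles to.
in vec2 vTexCoord;
out vec4 fragColor;
uniform sampler2D baseColorMap;  // the "texture" node
uniform vec4 tint;               // the "color constant" node

void main() {
    // The "multiply" node: combine the two inputs into the final output.
    fragColor = texture(baseColorMap, vTexCoord) * tint;
}
```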
See also
- GLSL
- SPIR-V
- HLSL
- Compute kernel
- Shading language
- GPGPU
- List of common shading algorithms
- Vector processor
References
- ^ "LearnOpenGL - Shaders". learnopengl.com. Retrieved November 12, 2019.
- ^ "The RenderMan Interface Specification".
- ^ Lilly, Paul (May 19, 2009). "From Voodoo to GeForce: The Awesome History of 3D Graphics". PC Gamer.
- ^ "ShaderWorks' update - DirectX Blog". August 13, 2003.
- ^ "GLSL Tutorial – Fragment Shader". June 9, 2011.
- ^ "GLSL Tutorial – Vertex Shader". June 9, 2011.
- ^ Geometry Shader - OpenGL. Retrieved on December 21, 2011.
- ^ "Pipeline Stages (Direct3D 10) (Windows)". msdn.microsoft.com. January 6, 2021.
- ^ "Radeon RX Vega Revealed: AMD promises 4K gaming performance for $499 - Trusted Reviews". July 31, 2017.
- ^ "The curtain comes up on AMD's Vega architecture". January 5, 2017.
- ^ "NVIDIA Turing Architecture In-Depth". September 14, 2018.
- ^ "Introduction to Turing Mesh Shaders". September 17, 2018.
- ^ "DirectX 12 Ultimate Game Ready Driver Released; Also Includes Support for 9 New G-SYNC Compatible Gaming Monitors".
- ^ "Announcing DirectX 12 Ultimate". DirectX Developer Blog. March 19, 2020. Retrieved May 25, 2021.
- ^ "Realistic Lighting in Justice with Mesh Shading". NVIDIA Developer Blog. May 21, 2021. Retrieved May 25, 2021.
- ^ Smith, Ryan. "Intel Architecture Day 2021: A Sneak Peek At The Xe-HPG GPU Architecture". www.anandtech.com.
- ^ "Vulkan Ray Tracing Final Specification Release". Blog. Khronos Group. November 23, 2020. Retrieved 2021-02-22.
Further reading
- ISBN 0-201-50868-0.
- ISBN 0-12-228730-4.
- ISBN 0-321-19496-9.
- ISBN 0-321-19789-5.
External links
- OpenGL geometry shader extension
- Riemer's DirectX & HLSL Tutorial: HLSL Tutorial using DirectX with much sample code
- Pipeline Stages (Direct3D 10)