The Blinn–Phong reflection model, also called the modified Phong reflection model, is a modification developed by Jim Blinn to the Phong reflection model.[1]
Blinn–Phong is a shading model used in OpenGL and Direct3D's fixed-function pipeline (before Direct3D 10 and OpenGL 3.1), and is carried out on each vertex as it passes down the graphics pipeline; pixel values between vertices are interpolated by Gouraud shading by default, rather than the more computationally expensive Phong shading.[2]
In Phong shading, one must continually recalculate the dot product $R \cdot V$ between a viewer ($V$) and the beam from a light source ($L$) reflected ($R$) off a surface.

If, instead, one calculates a halfway vector between the viewer and light-source vectors,

$$H = \frac{L + V}{\left\| L + V \right\|}$$

the dot product $R \cdot V$ can be replaced with $N \cdot H$, where $N$ is the normalized surface normal. In the above equation, $L$ and $V$ are both normalized unit vectors, and $H$ is the solution of the equation $V = P_H(-L)$, where $P_H$ is the Householder matrix that reflects a point in the hyperplane that contains the origin and has the normal $H$.
This dot product represents the cosine of an angle that is half of the angle represented by Phong's dot product if V, L, N and R all lie in the same plane. This relation between the angles remains approximately true when the vectors don't lie in the same plane, especially when the angles are small. The angle between N and H is therefore sometimes called the halfway angle.
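This half-angle relation can be checked numerically. The following Python sketch (not part of the article's shader samples; all vector values are hypothetical) builds Phong's reflection vector and Blinn's halfway vector for a coplanar configuration and compares the two angles:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Coplanar configuration (hypothetical values): the normal N, light
# direction L and view direction V all lie in the x-y plane.
N = (0.0, 1.0, 0.0)
L = normalize((1.0, 2.0, 0.0))
V = normalize((-1.0, 3.0, 0.0))

# Phong's reflection vector: R = 2(N.L)N - L
n_dot_l = dot(N, L)
R = normalize(tuple(2.0 * n_dot_l * n - l for n, l in zip(N, L)))

# Blinn's halfway vector: H = (L + V) / ||L + V||
H = normalize(tuple(l + v for l, v in zip(L, V)))

angle_RV = math.acos(dot(R, V))  # angle used by Phong
angle_NH = math.acos(dot(N, H))  # halfway angle used by Blinn-Phong

print(angle_RV, 2 * angle_NH)  # equal in the coplanar case
```

Doubling the halfway angle reproduces Phong's angle exactly here, illustrating why the relation only becomes approximate once the vectors leave a common plane.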
Considering that the angle between the halfway vector and the surface normal is likely to be smaller than the angle between $R$ and $V$ used in Phong's model (unless the surface is viewed from a very steep angle, for which it is likely to be larger), and since Phong's model uses $\left(R \cdot V\right)^{\alpha}$, an exponent $\alpha' > \alpha$ can be chosen such that $\left(N \cdot H\right)^{\alpha'}$ is closer to the former expression. For front-lit surfaces (specular reflections on surfaces facing the viewer), $\alpha' = 4\alpha$ will result in specular highlights that very closely match the corresponding Phong reflections.
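The effect of choosing $\alpha' = 4\alpha$ can be illustrated with a short Python sketch (the exponent and angles are hypothetical, and the coplanar relation that Phong's angle is twice the halfway angle is assumed):

```python
import math

alpha = 16.0  # hypothetical Phong exponent, for illustration only

# theta is the halfway angle between N and H; in the coplanar case the
# angle between R and V is 2*theta.
results = []
for theta_deg in (5.0, 10.0, 20.0):
    theta = math.radians(theta_deg)
    phong = math.cos(2.0 * theta) ** alpha    # (R.V)^alpha
    blinn = math.cos(theta) ** (4.0 * alpha)  # (N.H)^(4*alpha)
    results.append((phong, blinn))
    print(theta_deg, round(phong, 4), round(blinn, 4))
```

For small halfway angles the two falloff curves are nearly identical, which is why quadrupling the exponent makes front-lit Blinn–Phong highlights track Phong so closely.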
Additionally, while it can be seen as an approximation to the Phong model, it produces more accurate models of empirically determined bidirectional reflectance distribution functions than Phong for many types of surfaces.[3]
Blinn–Phong is faster than Phong when the viewer and light are treated as being very remote, such as approaching or at infinity. This is the case for directional lights and for orthographic or isometric cameras. In this case, the halfway vector is independent of surface position and curvature, because it depends only on the direction toward the viewer and the direction toward the light, both of which are constant at such a remote distance; the halfway vector can therefore be treated as constant.
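A minimal Python sketch of this optimization, assuming a hypothetical fixed light direction and an orthographic view direction (values are illustrative only):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

# Directional light and orthographic camera: the direction toward the
# light (L) and toward the viewer (V) are the same for every surface
# point, so the halfway vector H can be computed once per light.
L = normalize((0.3, 0.8, 0.5))  # hypothetical light direction
V = (0.0, 0.0, 1.0)             # orthographic view direction

H = normalize(tuple(a + b for a, b in zip(L, V)))
print(H)  # the same H is reused for every fragment in the frame
```

With a perspective camera or a point light, by contrast, L and V vary per fragment and H must be recomputed at every shaded point.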
This sample in High-Level Shading Language is a method of determining the diffuse and specular light from a point light. The light structure, the position in space of the surface, the view-direction vector and the normal of the surface are passed in, and a Lighting structure is returned.

Certain dot products are clamped (saturated) to zero to handle negative values. Without that, light heading away from the camera would be treated the same way as light heading towards it, and an incorrect "halo" of light glancing off the edges of an object away from the camera could appear as bright as light reflected directly towards the camera.

struct Lighting
{
    float3 Diffuse;
    float3 Specular;
};

struct PointLight
{
    float3 position;
    float3 diffuseColor;
    float  diffusePower;
    float3 specularColor;
    float  specularPower;
};

Lighting GetPointLight(PointLight light, float3 pos3D, float3 viewDir, float3 normal)
{
    Lighting OUT;
    OUT.Diffuse = float3(0, 0, 0);
    OUT.Specular = float3(0, 0, 0);
    if (light.diffusePower > 0)
    {
        float3 lightDir = light.position - pos3D; // vector from the surface to the light
        float distance = length(lightDir);
        lightDir = lightDir / distance; // = normalize(lightDir)
        distance = distance * distance; // inverse-square attenuation

        // Intensity of the diffuse light. Saturate to keep within the 0-1 range.
        float NdotL = dot(normal, lightDir);
        float intensity = saturate(NdotL);

        // Calculate the diffuse light, factoring in light color, power and attenuation.
        OUT.Diffuse = intensity * light.diffuseColor * light.diffusePower / distance;

        // Calculate the half vector between the light vector and the view vector.
        float3 H = normalize(lightDir + viewDir);

        // Intensity of the specular light.
        // specularHardness is assumed to be defined elsewhere (e.g. a shader constant).
        float NdotH = dot(normal, H);
        intensity = pow(saturate(NdotH), specularHardness);

        // Calculate the specular light, factoring in light color, power and attenuation.
        OUT.Specular = intensity * light.specularColor * light.specularPower / distance;
    }
    return OUT;
}
This sample in the OpenGL Shading Language consists of two code files, or shaders. The first one is a so-called vertex shader and implements Phong shading, which is used to interpolate the surface normal between vertices. The second shader is a so-called fragment shader and implements the Blinn–Phong shading model in order to determine the diffuse and specular light from a point light source.
This vertex shader implements Phong shading:
attribute vec3 inputPosition;
attribute vec3 inputNormal;

uniform mat4 projection, modelview, normalMat;

varying vec3 normalInterp;
varying vec3 vertPos;

void main() {
    gl_Position = projection * modelview * vec4(inputPosition, 1.0);
    vec4 vertPos4 = modelview * vec4(inputPosition, 1.0);
    vertPos = vec3(vertPos4) / vertPos4.w;
    normalInterp = vec3(normalMat * vec4(inputNormal, 0.0));
}
This fragment shader implements the Blinn–Phong shading model[4] and gamma correction:
precision mediump float;

in vec3 normalInterp;  // interpolated surface normal
in vec3 vertPos;       // interpolated vertex position

uniform int mode;  // rendering mode (2 selects Phong for comparison)

const vec3 lightPos = vec3(1.0, 1.0, 1.0);
const vec3 lightColor = vec3(1.0, 1.0, 1.0);
const float lightPower = 40.0;
const vec3 ambientColor = vec3(0.1, 0.0, 0.0);
const vec3 diffuseColor = vec3(0.5, 0.0, 0.0);
const vec3 specColor = vec3(1.0, 1.0, 1.0);
const float shininess = 16.0;
const float screenGamma = 2.2; // Assume the monitor is calibrated to the sRGB color space

void main() {
  vec3 normal = normalize(normalInterp);
  vec3 lightDir = lightPos - vertPos;
  float distance = length(lightDir);
  distance = distance * distance;
  lightDir = normalize(lightDir);

  float lambertian = max(dot(lightDir, normal), 0.0);
  float specular = 0.0;

  if (lambertian > 0.0) {
    vec3 viewDir = normalize(-vertPos);

    // this is Blinn-Phong
    vec3 halfDir = normalize(lightDir + viewDir);
    float specAngle = max(dot(halfDir, normal), 0.0);
    specular = pow(specAngle, shininess);

    // this is Phong (for comparison)
    if (mode == 2) {
      vec3 reflectDir = reflect(-lightDir, normal);
      specAngle = max(dot(reflectDir, viewDir), 0.0);
      // note that the exponent is different here
      specular = pow(specAngle, shininess / 4.0);
    }
  }

  vec3 colorLinear = ambientColor +
                     diffuseColor * lambertian * lightColor * lightPower / distance +
                     specColor * specular * lightColor * lightPower / distance;
  // apply gamma correction (assume ambientColor, diffuseColor and specColor
  // have been linearized, i.e. have no gamma correction in them)
  vec3 colorGammaCorrected = pow(colorLinear, vec3(1.0 / screenGamma));
  // use the gamma-corrected color in the frame
  gl_FragColor = vec4(colorGammaCorrected, 1.0);
}
The colors ambientColor, diffuseColor and specColor are not supposed to be gamma corrected. If they are colors obtained from gamma-corrected image files (JPEG, PNG, etc.), they need to be linearized before working with them, which is done by scaling the channel values to the range [0, 1] and raising them to the gamma value of the image, which for images in the sRGB color space can be assumed to be about 2.2 (even though, for this specific color space, a simple power relation is only an approximation of the actual transformation). Modern graphics APIs have the ability to perform this gamma correction automatically when sampling from a texture or writing to a framebuffer.[5]
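The linearization step can be sketched in Python using the approximate gamma of 2.2 described above (the helper names are illustrative, not part of any graphics API, and the exact sRGB transfer function is piecewise rather than a pure power law):

```python
# Linearize an 8-bit sRGB channel value before lighting math, and convert
# a linear value back for display, using the gamma-2.2 approximation.
def srgb_to_linear(byte_value, gamma=2.2):
    return (byte_value / 255.0) ** gamma

def linear_to_srgb(linear, gamma=2.2):
    return round((linear ** (1.0 / gamma)) * 255.0)

mid_gray = srgb_to_linear(128)
print(mid_gray)                   # roughly 0.22: sRGB mid-gray is much darker in linear light
print(linear_to_srgb(mid_gray))   # round-trips back to 128
```

The large gap between the stored value (0.5 of full scale) and its linear intensity (about 0.22) is exactly why lighting computed on non-linearized colors looks wrong.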