Normal Mapping in WebGL

While working on a mostly 2D, web-based game called Matt Fantasy, I wanted to use a computer graphics technique called “normal mapping”. For those unfamiliar with it, normal mapping is a process by which programmers add lighting detail to their games/projects by embedding geometric information in 2D textures called “normal maps.” The upshot is that in a 3D game, character models can be flatter, simpler, and cheaper to render while still appearing to have some depth. For more information on normal mapping, bump mapping, and displacement mapping, check out this article from PluralSight.

In a 2D game like Matt Fantasy, adding normal maps to sprites is a way to add depth and lighting effects to otherwise flat-looking pixel art.

The purpose of this blog post is to show how you might go about adding normal maps to sprites for a 2D game like I did for Matt Fantasy. To that end, I put together this live example that shows the difference between a normal mapped sprite on the left and a non-normal mapped sprite on the right. Check out how the image on the left looks as if it has more “depth” than the image on the right despite both textures being drawn on an equally flat plane. The code for this example is on GitHub here.

To get started with normal mapping in WebGL, you’ll first want to get a handle on how lighting works in WebGL in general. Fortunately, there are some great resources on the web for that, including this article from MDN, and this entire series from WebGL Fundamentals. Personally, I think getting to the point where you can light a plane or a cube at all is a pretty big accomplishment on its own!
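
If it helps to see where all of that is headed: basic diffuse lighting boils down to a single dot product between the surface normal and the direction toward the light. Here's a stripped-down fragment shader illustrating just that idea (the variable names are my own, not from the demo):

precision highp float;

varying vec3 vNormal;    // interpolated surface normal (world space)
uniform vec3 uLightDir;  // direction from the surface toward the light

void main(void) {
  // Lambertian ("N dot L") diffuse term, clamped so light behind the surface contributes nothing
  float diffuse = max(dot(normalize(vNormal), normalize(uLightDir)), 0.0);
  gl_FragColor = vec4(vec3(diffuse), 1.0);
}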

Once you’re able to light objects in WebGL in general, the next step is to figure out how to create normal maps from regular textures. For the example in this post, I used a commercial tool called SpriteIlluminator, but NVIDIA has a free tool you can use as well. After you’ve created your normal map, you’ll have two texture files: your original texture (e.g. texture.png) and your normal map (e.g. texture-n.png).

For my demo project, here is the texture image I’m using from OpenGameArt:

Brick texture

And here’s the corresponding normal map created in SpriteIlluminator:

Brick normal map texture

To get normal maps to work, the main requirement is to pull the information out of the normal map and into the lighting calculations in the fragment shader. At first, this seems like it may be as straightforward as extracting the normal data via a texture2D sampler the way you would extract color data from a regular texture. While this is indeed part of the process, you must first do a coordinate change of basis to convert the normals from the normal map’s “tangent space” into the world space that (at least in this example) the lighting calculations are performed in. This way, when the surface rotates, the normals, and therefore the lighting calculations, remain correct.

For a 3D game with complex models, my understanding is that the math for creating the TBN (tangent/bitangent/normal) matrix that performs the change of basis is handled by the modeling program that generated the asset (e.g. Blender) or by the graphics API you’re using. But since in this example the texture will only ever sit on a simple plane (we’re only lighting 2D sprites), creating the TBN matrix by hand is somewhat easier.

Even better, the good folks at Learn OpenGL have already done the math for us! I relied entirely on the TBN calculation in their normal mapping tutorial, translating it to TypeScript pretty literally. After reading the description a few thousand times, I can’t claim I fully understand how it works, but… well, it looks right to me… In this example, I encapsulated this code in the generateTbn function, which I pasted below just to get some code in this post.

import { vec2, vec3 } from "gl-matrix";

// https://learnopengl.com/Advanced-Lighting/Normal-Mapping
// QUAD_POSITIONS and textureCoords (defined elsewhere in the demo) hold the
// quad's vertex positions and UV coordinates.
function generateTbn() {
  // First triangle of the quad: three vertex positions (x, y, z)
  const pos1 = QUAD_POSITIONS.slice(0, 3) as vec3;
  const pos2 = QUAD_POSITIONS.slice(3, 6) as vec3;
  const pos3 = QUAD_POSITIONS.slice(6, 9) as vec3;

  const edge1 = vec3.create();
  const edge2 = vec3.create();

  vec3.subtract(edge1, pos2, pos1);
  vec3.subtract(edge2, pos3, pos1);

  // UV coordinates for the same three vertices
  const uv1 = textureCoords.slice(0, 2) as vec2;
  const uv2 = textureCoords.slice(2, 4) as vec2;
  const uv3 = textureCoords.slice(4, 6) as vec2;

  const deltaUv1 = vec2.create();
  const deltaUv2 = vec2.create();

  vec2.subtract(deltaUv1, uv2, uv1);
  vec2.subtract(deltaUv2, uv3, uv1);

  // Reciprocal of the determinant of the UV delta matrix (see the Learn OpenGL derivation)
  const f = 1 / (deltaUv1[0] * deltaUv2[1] - deltaUv2[0] * deltaUv1[1]);

  const tangent = vec3.create();
  vec3.set(
    tangent,
    deltaUv2[1] * edge1[0] - deltaUv1[1] * edge2[0],
    deltaUv2[1] * edge1[1] - deltaUv1[1] * edge2[1],
    deltaUv2[1] * edge1[2] - deltaUv1[1] * edge2[2]
  );

  vec3.scale(tangent, tangent, f);
  vec3.normalize(tangent, tangent);

  const bitangent = vec3.create();

  vec3.set(
    bitangent,
    -deltaUv2[0] * edge1[0] + deltaUv1[0] * edge2[0],
    -deltaUv2[0] * edge1[1] + deltaUv1[0] * edge2[1],
    -deltaUv2[0] * edge1[2] + deltaUv1[0] * edge2[2]
  );

  vec3.scale(bitangent, bitangent, f);
  vec3.normalize(bitangent, bitangent);

  // The quad lies flat, so its surface normal points straight along +Z
  const normal = vec3.create();
  vec3.set(normal, 0, 0, 1);

  return { normal, tangent, bitangent };
}
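
The tangent, bitangent, and normal returned here then have to reach the vertex shader as the aVertexTangent, aVertexBitangent, and aVertexNormal attributes it declares. Since the whole plane shares a single TBN, the same three vectors can simply be repeated for every vertex of the quad. The helper below is a minimal sketch of that wiring rather than the demo code verbatim (the function name and the six-vertex quad are my own assumptions):

import { vec3 } from "gl-matrix";

// Sketch: upload one TBN vector (tangent, bitangent, or normal) as a per-vertex
// attribute by repeating it for every vertex of the quad.
function uploadTbnAttribute(
  gl: WebGLRenderingContext,
  program: WebGLProgram,
  attributeName: string, // e.g. "aVertexTangent"
  vector: vec3
) {
  const VERTEX_COUNT = 6; // two triangles make up the quad

  // Repeat the same vector for every vertex -- the whole plane shares one TBN
  const data = new Float32Array(VERTEX_COUNT * 3);
  for (let i = 0; i < VERTEX_COUNT; i++) {
    data.set(vector, i * 3);
  }

  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);

  const location = gl.getAttribLocation(program, attributeName);
  gl.enableVertexAttribArray(location);
  gl.vertexAttribPointer(location, 3, gl.FLOAT, false, 0, 0);
}

// Usage:
// const { normal, tangent, bitangent } = generateTbn();
// uploadTbnAttribute(gl, program, "aVertexTangent", tangent);
// uploadTbnAttribute(gl, program, "aVertexBitangent", bitangent);
// uploadTbnAttribute(gl, program, "aVertexNormal", normal);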

As in the Learn OpenGL example, the TBN is multiplied by the model matrix in the vertex shader to account for any transformations and then passed to the fragment shader, where the lighting calculations are performed.

I’ll post the vertex and fragment shaders below, and they’re on GitHub as well:

Vertex shader:

precision highp float;

attribute vec3 aVertexPosition;
attribute vec3 aVertexTangent;
attribute vec3 aVertexBitangent;
attribute vec3 aVertexNormal;
attribute vec2 aTextureCoord;

uniform vec3 uLightPos;
uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;

varying vec2 vTextureCoord;
varying vec3 vVertexNormal;
varying mat3 vTbn;
varying vec4 vWorldPos;


void main(void) {
  vec4 worldPos = uModelMatrix * vec4(aVertexPosition, 1.0);
  gl_Position = uProjectionMatrix * uViewMatrix * worldPos;

  vec3 t = normalize(vec3(uModelMatrix * vec4(aVertexTangent, 0.0)));
  vec3 b = normalize(vec3(uModelMatrix * vec4(aVertexBitangent, 0.0)));
  vec3 n = normalize(vec3(uModelMatrix * vec4(aVertexNormal, 0.0)));
  mat3 tbn = mat3(t, b, n);

  vTextureCoord = aTextureCoord;
  vVertexNormal = aVertexNormal;
  vTbn = tbn;
  vWorldPos = worldPos;
}

Fragment shader:

precision highp float;

varying mat3 vTbn;
varying vec2 vTextureCoord;
varying vec3 vVertexNormal;
varying vec4 vWorldPos;

uniform sampler2D uSpriteSampler;
uniform sampler2D uNormalSampler;
uniform vec3 uLightPos;
uniform bool uNormalMapOn;

void main(void) {
  vec3 lightColor = vec3(1.0, 1.0, 1.0);
  vec4 color = texture2D(uSpriteSampler, vTextureCoord);
  // Remap the sampled normal from [0, 1] color values to [-1, 1] vector components
  vec3 normalMap = texture2D(uNormalSampler, vTextureCoord.st).xyz * 2.0 - 1.0;
  // Rotate the tangent-space normal into world space
  normalMap = normalize(vTbn * normalMap);

  float light = dot(normalize(normalMap), normalize(uLightPos));

  // http://learnwebgl.brown37.net/09_lights/lights_attenuation.html
  float d = length(uLightPos - vWorldPos.xyz);
  float attenuation = clamp(.5 / d + .5 / pow(d, 20.0), 0.0, 1.0);
  light = attenuation * light;

  gl_FragColor = color;

  if (uNormalMapOn) {
    gl_FragColor.rgb += light * lightColor;
  } else {
    gl_FragColor.rgb += .5 * attenuation * lightColor;
  }
}
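
For the fragment shader to sample both images, uSpriteSampler and uNormalSampler need to point at different texture units on the JavaScript side. That's ordinary WebGL texture plumbing rather than anything normal-map-specific, but for completeness, here's a minimal sketch (it assumes the two WebGLTextures have already been created and filled, and that the program is currently in use):

// Sketch: bind the sprite texture and its normal map to separate texture units
// so the two samplers in the fragment shader read from different images.
function bindSpriteAndNormalMap(
  gl: WebGLRenderingContext,
  program: WebGLProgram,
  spriteTexture: WebGLTexture,
  normalTexture: WebGLTexture
) {
  // Sprite color texture on texture unit 0
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, spriteTexture);
  gl.uniform1i(gl.getUniformLocation(program, "uSpriteSampler"), 0);

  // Normal map on texture unit 1
  gl.activeTexture(gl.TEXTURE1);
  gl.bindTexture(gl.TEXTURE_2D, normalTexture);
  gl.uniform1i(gl.getUniformLocation(program, "uNormalSampler"), 1);
}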

Finally, you can extract the normal data from the normal map, and use that data to perform your lighting calculations. Voila! A flat plane looks as if it has some depth!

I haven’t run this code through all its paces, so there’s a good chance I messed something up. Visually, it looks about right to me though. Please take a look at the demo, the code, and let me know if you have any thoughts!