A visual, interactive guide to 3D Look-Up Tables.
From first principles to HLSL shaders — learn by playing.
Every pixel on your screen is described by three numbers: how much red, green, and blue light to emit. Each ranges from 0 to 255. That gives us roughly 16.7 million possible colors.
But here's the leap: if a color is three numbers, it's a coordinate. Red is the X-axis. Green is Y. Blue is Z. Every color is a point in a three-dimensional cube.
The black corner — (0, 0, 0) — is where all lights are off.
The white corner — (255, 255, 255) — is full blast.
The glowing marker is the color you picked. Drag it around. You're not choosing a color from a list — you're navigating a space.
What if you want to change the feel of an image? Make it warmer, punchier, moodier? The most basic tool is a 1D Look-Up Table: three independent curves, one per channel.
Drag the control points on the curves below. The X-axis is the input value (what the pixel had), and the Y-axis is the output (what it becomes). A straight diagonal line means "no change." Bend it, and you're remapping color.
This is exactly what curves in Photoshop, Lightroom, or DaVinci Resolve do. Under the hood, each curve is just a 256-entry array — for each possible input value (0–255), it stores an output value. That's a 1D LUT: one dimension, one lookup per channel.
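As an illustration, that array-of-256 idea can be sketched in a few lines of Python. The gamma curve here is an arbitrary stand-in for whatever shape you drag the control points into:

```python
# Build a 1D LUT: one precomputed output for each of the 256 possible inputs.
# The curve here is a simple gamma lift, chosen just for illustration.
def build_1d_lut(gamma=0.8):
    return [round(255 * (i / 255) ** gamma) for i in range(256)]

def apply_1d_luts(pixel, lut_r, lut_g, lut_b):
    r, g, b = pixel
    # Each channel is looked up independently -- the channels never interact.
    return (lut_r[r], lut_g[g], lut_b[b])

lut = build_1d_lut()
print(apply_1d_luts((0, 128, 255), lut, lut, lut))  # lifts the midtone, keeps 0 and 255 fixed
```

Note that applying the LUT is a pure array index, not a function call: all the curve math happened once, up front.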
Try to do this: make the dark parts of the image warm (orange), and the bright parts cool (blue). You'll find you can't — not precisely. To push warmth into the shadows you'd lift the red curve in the low end, but that affects all dark pixels, including ones that should stay neutral.
The problem? Each channel is remapped in isolation. The red curve can't ask "what's the blue value of this pixel?" Channels can't communicate. To make decisions based on the full color — on where a point sits in the cube — we need all three dimensions at once.
A 3D LUT doesn't remap channels independently. It takes the entire color — the full (R, G, B) coordinate — and maps it to a new color. Every point in the color cube can move to any other point. It's a total deformation of color space.
On the left is the identity LUT: input equals output. It's a perfect, regular grid — no transformation. On the right is a transformed LUT — same points, but shifted to new positions. Toggle the presets and watch the cube warp.
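The identity lattice is easy to generate yourself. A sketch, assuming normalized 0–1 coordinates and R varying fastest (the same ordering .cube files use):

```python
def identity_lut(size):
    """Identity lattice: each grid point's color equals its own position."""
    step = 1.0 / (size - 1)
    return [
        (r * step, g * step, b * step)
        for b in range(size)
        for g in range(size)
        for r in range(size)  # R varies fastest
    ]

lattice = identity_lut(17)  # 17 x 17 x 17 = 4913 points
```

Feeding a color through this lattice returns the color unchanged, which is why the identity cube renders as a perfectly regular grid.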
The visual metaphor is everything here: applying a 3D LUT is like grabbing the color cube and squishing, stretching, and twisting it. Warm film tones pull blues toward amber. Cross-processing twists the cube into wild new shapes. An inversion flips it inside out.
A full 3D LUT mapping every possible 8-bit color would need 256×256×256×3 bytes ≈ 50 MB. That's impractical. So instead, we sample a sparse grid — typically 17×17×17, 33×33×33, or 65×65×65 points — and interpolate for everything in between. Drag the grid size slider above and watch the lattice change. Even a crude 8³ grid captures the broad shape of the transform.
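The storage arithmetic is worth checking for yourself. Back-of-envelope figures, assuming 4-byte floats for the sparse lattice:

```python
dense = 256 ** 3 * 3        # one byte per channel, every possible 8-bit color
sparse = 33 ** 3 * 3 * 4    # 33^3 lattice points, three 4-byte floats each

print(dense)    # 50_331_648 bytes, ~50 MB
print(sparse)   # 431_244 bytes, under half a megabyte
```

A roughly hundredfold saving, paid for with interpolation between lattice points.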
Most input colors won't land exactly on a grid point. They'll fall between points — inside one of the little cells of the lattice. So we need to interpolate: blend the 8 corner colors based on how close the input is to each one.
This process is called trilinear interpolation — "tri" because it's three linear interpolations in sequence. Let's walk through it step by step.
Our point sits inside a cell defined by 8 known colors at the corners. Each corner's color was stored in the LUT.
Pair up corners that differ only in X. Blend each pair using the fractional X position. 8 corners → 4 points.

    mix(c000, c100, frac.x)

Take the 4 results and pair them along Y. Blend each pair. 4 → 2.

    mix(c_x0, c_x1, frac.y)

Final pair, blended along Z. 2 → 1. This is our output color.

    mix(c_xy0, c_xy1, frac.z)

That's the entire algorithm. Three rounds of linear blending. On a GPU, this is even simpler — the hardware's texture unit does trilinear interpolation for free when you sample a 3D texture. The entire 3D LUT lookup becomes a single texture fetch.
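The three rounds translate directly to code. A CPU sketch in Python, where mix() is the same per-channel lerp the GPU hardware performs:

```python
def mix(a, b, t):
    # Per-channel linear interpolation -- the same lerp as GLSL's mix().
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def trilinear(corners, frac):
    """corners[b][g][r] holds the 8 cell-corner colors; frac is the
    fractional (x, y, z) position of the sample inside the cell."""
    c = corners
    # Round 1: blend along X. 8 corners -> 4 points.
    c00 = mix(c[0][0][0], c[0][0][1], frac[0])
    c01 = mix(c[0][1][0], c[0][1][1], frac[0])
    c10 = mix(c[1][0][0], c[1][0][1], frac[0])
    c11 = mix(c[1][1][0], c[1][1][1], frac[0])
    # Round 2: blend along Y. 4 -> 2.
    c0 = mix(c00, c01, frac[1])
    c1 = mix(c10, c11, frac[1])
    # Round 3: blend along Z. 2 -> 1.
    return mix(c0, c1, frac[2])

# Identity cell: each corner's color equals its own coordinates,
# so the interpolated result should equal the sample position itself.
corners = [[[(float(r), float(g), float(b)) for r in (0, 1)]
            for g in (0, 1)] for b in (0, 1)]
print(trilinear(corners, (0.25, 0.5, 0.75)))  # (0.25, 0.5, 0.75)
```

With an identity cell the output reproduces the input exactly, which is a handy self-test when implementing this by hand.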
This is a subtlety that trips people up. A GPU texel stores its color at its center, not at its edge. If your LUT has 33 texels along one axis, the first texel center sits at 0.5/33 and the last at 32.5/33 in normalized coordinates. Without the scale and offset, input 0.0 samples the very edge of the texture rather than the first texel's center, and every input lands slightly off its intended lattice position — the endpoints of the LUT are never hit exactly. The scale-and-offset formula remaps the 0–1 input range to run precisely from the first texel center to the last.
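The mapping can be sanity-checked outside the shader. A Python sketch, assuming a 33-texel axis and normalized texture coordinates:

```python
def lut_coord(x, n):
    # Remap input 0..1 so it spans texel CENTERS, not the texture edges.
    scale = (n - 1) / n
    offset = 0.5 / n
    return x * scale + offset

n = 33
print(lut_coord(0.0, n) * n)  # ~0.5  -> center of the first texel
print(lut_coord(1.0, n) * n)  # ~32.5 -> center of the last texel
```

Multiplying back by n converts to texel units, making it easy to see both endpoints landing dead-center.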
Time to see the whole pipeline in action. On the left is an image being processed through a 3D LUT. On the right is the color cube showing the transformation. Choose a preset or tweak the parameters to create your own look.
After all this theory, what does a 3D LUT actually look like on disk? The most common format is
Adobe's .cube file — and it's remarkably simple. It's just a text file:
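Here is a minimal example of the format — a hypothetical identity LUT at the smallest useful grid size of 2, so all eight lattice points are the cube's corners:

```
LUT_3D_SIZE 2
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
1.0 1.0 0.0
0.0 0.0 1.0
1.0 0.0 1.0
0.0 1.0 1.0
1.0 1.0 1.0
```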
That's it. A header declaring the grid size, then rows of RGB float triplets — one for each lattice point. The points are listed in R-fastest order: R increments first, then G, then B. For a 33³ LUT, that's 35,937 lines of data — typically around 1 MB as text.
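Reading such a file back takes very little code. A minimal Python sketch that handles only the LUT_3D_SIZE keyword and the data rows (real files may also carry TITLE, DOMAIN_MIN/DOMAIN_MAX, and # comments, which this sketch skips or ignores):

```python
def parse_cube(text):
    """Parse a .cube file into (size, points); points are in R-fastest order."""
    size, points = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                        # blank lines and comments
        if line.startswith("LUT_3D_SIZE"):
            size = int(line.split()[1])
        elif line[0].isdigit() or line[0] == "-":
            r, g, b = map(float, line.split())
            points.append((r, g, b))
        # Other keywords (TITLE, DOMAIN_MIN, ...) are ignored in this sketch.
    assert size is not None and len(points) == size ** 3
    return size, points

# Round-trip check against a generated identity LUT, R incrementing fastest.
cube_text = "LUT_3D_SIZE 2\n" + "\n".join(
    f"{r:.1f} {g:.1f} {b:.1f}"
    for b in range(2) for g in range(2) for r in range(2)
)
size, pts = parse_cube(cube_text)
print(size, len(pts))  # 2 8
```

The assert doubles as a format check: a well-formed file must contain exactly size³ data rows.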
The entire pipeline is now clear: a colorist crafts a look by grading an image or
adjusting color volumes. Software records the before-and-after mapping at every lattice point
and writes out a .cube file. That file can then be loaded by any application —
DaVinci Resolve, Premiere, a game engine, a hardware monitor — and applied in real-time
as a 3D texture lookup. One file. Universal color transform.
Built as an interactive explainer. All demos run live in your browser using Three.js and Canvas 2D.
Inspired by Nicky Case, 3Blue1Brown, and Freya Holmér.