Every cell begins in superposition. Nine hundred possible states — thirty tile types, six rotations, five elevation levels — all valid, all waiting. The algorithm scans the grid, finds the cell with the fewest remaining options, and collapses it. One choice. Then the cascade: that single decision propagates outward, eliminating incompatible states from every neighbor, which constrains their neighbors, which constrains the next ring. One collapse can annihilate hundreds of possibilities across the grid in a single pass.

This is Wave Function Collapse, a constraint-satisfaction algorithm that game developers use to generate coherent maps from local tile-matching rules. A recent implementation by Felix Turner applies it to hexagonal terrain — roughly 4,100 cells across 19 interlocking grids, each solving independently while respecting the constraints imposed by its neighbors. Roads must connect to roads. Coastlines must meet water. Rivers cannot flow uphill.
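That inner loop — scan, collapse, propagate — fits in a few lines. Here is a toy sketch in Python, with a four-state domain and an invented adjacency rule standing in for the 900 states and real tile constraints (none of this is Turner's code):

```python
import random

# Toy WFC core loop. STATES and ALLOWED are invented stand-ins:
# four states instead of 900, and "states may differ by at most 1"
# instead of real tile-edge matching rules.
STATES = range(4)
ALLOWED = {(a, b) for a in STATES for b in STATES if abs(a - b) <= 1}

def neighbors(i, j, w, h):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < w and 0 <= j + dj < h:
            yield (i + di, j + dj)

def solve(w, h, seed=0):
    rng = random.Random(seed)
    # every cell starts in "superposition": all states possible
    grid = {(i, j): set(STATES) for i in range(w) for j in range(h)}
    while True:
        open_cells = [c for c in grid if len(grid[c]) > 1]
        if not open_cells:
            return grid
        # observe: pick the most constrained cell (fewest remaining options)
        cell = min(open_cells, key=lambda c: len(grid[c]))
        grid[cell] = {rng.choice(sorted(grid[cell]))}  # collapse to one state
        # propagate: prune neighbors, then their neighbors, until quiescent
        stack = [cell]
        while stack:
            c = stack.pop()
            for n in neighbors(*c, w, h):
                kept = {s for s in grid[n]
                        if any((t, s) in ALLOWED for t in grid[c])}
                if kept != grid[n]:
                    grid[n] = kept  # a real solver backtracks if this empties
                    stack.append(n)
```

A full implementation also has to handle the case where propagation empties a cell's set entirely — that contradiction is where backtracking comes in.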

And reading through the technical write-up, I kept recognizing something. Not the algorithm. The strategy.

The Solver's Instinct, Formalized

Experienced puzzle solvers do exactly what WFC does. They scan the problem space, identify the most constrained element — the position where the fewest options remain — and commit there first. In a crossword, it's the clue with only one possible answer. In a Sudoku, it's the cell where eight of nine digits are already eliminated by row, column, and box. In a cipher, it's the symbol that appears in a position where only one letter fits the emerging plaintext.

WFC formalizes this as entropy minimization: always collapse the lowest-entropy cell. It is, computationally, the optimal local strategy. And the propagation cascade that follows — each resolved cell constraining its neighbors, which constrains the next ring — maps directly onto the momentum of a solve that's going well. That gathering speed when one insight unlocks the next, and the next, and the whole structure begins to crystallize from a single foothold.
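"Entropy" here can be as simple as counting remaining options, but the classic formulation uses Shannon entropy over weighted states — rarer tiles contribute less. A sketch, assuming a `weights` dict of relative state frequencies (my notation, not Turner's):

```python
import math

def cell_entropy(options, weights):
    # Shannon entropy of a cell's remaining options, where weights maps
    # each state to its relative frequency in the tile set.
    total = sum(weights[o] for o in options)
    probs = [weights[o] / total for o in options]
    return -sum(p * math.log2(p) for p in probs)
```

With equal weights this reduces to "fewest choices left wins" — exactly the crossword and Sudoku instinct.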

The algorithm even handles failure the way solvers do. When propagation reaches a contradiction — a cell with zero valid states remaining — WFC backtracks. It maintains a trail of every change made since the last decision point, rolls them back, and tries a different collapse. Up to 500 backtracks before it gives up. Turner's implementation adds two more recovery layers on top of that: local re-solving around problem areas, and — when nothing else works — placing mountain tiles to hide the seam.
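The trail mechanism can be sketched as a list of (cell, removed_states) records accumulated during propagation since the last decision point. The names here are illustrative, not Turner's:

```python
def backtrack(grid, trail, decision_cell, tried):
    # Roll back every propagation step recorded since the last collapse,
    # restoring the states that were pruned from each cell.
    while trail:
        cell, removed = trail.pop()
        grid[cell] |= removed
    # Forbid the choices that already led to contradictions and retry.
    remaining = grid[decision_cell] - tried
    if not remaining:
        return None  # exhausted here: the caller must unwind a higher decision
    return min(remaining)  # deterministic pick, for the sketch
```

A real solver wraps this in a loop with a retry budget — the 500-backtrack cap above — before escalating to the next recovery layer.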

That last move is honestly poetic. When the algorithm can't make the puzzle work, it puts a mountain over the contradiction and hopes nobody looks too closely. I've watched escape room teams do exactly the same thing with their final answers.

Where the Mirror Cracks

But here's what caught my attention. Turner notes that he abandoned WFC entirely for placing trees and buildings. The algorithm, he found, is excellent at local edge matching but structurally incapable of producing large-scale patterns. Forest clustering, village placement, the organic distribution of features across a landscape — none of this emerges from constraint propagation alone. For that, he used Perlin noise: structured randomness shaped by top-down bias.
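Perlin noise proper takes a page of gradient-lattice code, but the idea — a smooth global field biasing local placement — shows just as well in its simpler cousin, value noise. This is my stand-in, not Turner's implementation: random values on a coarse lattice, smoothly interpolated, then thresholded so trees cluster wherever the field runs high.

```python
import random

def value_noise(w, h, scale=4, seed=0):
    # Random values on a coarse lattice, smoothly interpolated to full
    # resolution: a simpler cousin of Perlin noise.
    rng = random.Random(seed)
    lattice = [[rng.random() for _ in range(w // scale + 2)]
               for _ in range(h // scale + 2)]
    def smooth(t):  # smoothstep easing, so the field has no visible seams
        return t * t * (3 - 2 * t)
    field = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx, gy = x / scale, y / scale
            x0, y0 = int(gx), int(gy)
            tx, ty = smooth(gx - x0), smooth(gy - y0)
            top = lattice[y0][x0] * (1 - tx) + lattice[y0][x0 + 1] * tx
            bot = lattice[y0 + 1][x0] * (1 - tx) + lattice[y0 + 1][x0 + 1] * tx
            field[y][x] = top * (1 - ty) + bot * ty
    return field

def place_trees(w, h, threshold=0.6):
    # Top-down placement: the global field decides where features cluster;
    # local tiles merely obey it.
    field = value_noise(w, h)
    return {(x, y) for y in range(h) for x in range(w)
            if field[y][x] > threshold}
```

The contrast with the constraint engine is the point: nothing here checks a neighbor. The clustering comes entirely from the field's large-scale shape.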

This is the failure mode I've been circling in a different context. WFC is a pure bottom-up solver. It works from local rules toward global coherence, one cell at a time, and it is maximally efficient at this — no wasted computation, no unnecessary exploration. But it cannot step back and see the whole. It cannot notice that the road network forms a dead-end loop, or that the river system has no watershed logic, or that the forest is distributed in a way that no actual forest would grow. Those are global patterns, and global patterns require something that constraint propagation alone cannot provide: a representation of what the whole should look like.

The Hacker News discussion sharpened this point. Multiple commenters noted that WFC produces maps that are locally coherent but globally meaningless. Roads that technically connect but lead nowhere useful. Coastlines that form valid tile sequences but don't describe a recognizable shore. The constraint system guarantees that every pair of adjacent tiles is compatible. It does not guarantee that the result makes sense.

The Click That No Algorithm Can Produce

This is, I think, a remarkably clean illustration of what I've been calling the test-mode problem. WFC solves the way a system under evaluative pressure solves: find the most constrained position, apply the locally optimal rule, propagate, repeat. It is systematic, efficient, and incapable of the reorganization that insight requires.

The puzzle-solving brain does something WFC cannot. It sometimes ignores local constraints to perceive a global pattern — the moment when the solver stops working cell by cell and suddenly sees the whole structure at once. That's the hippocampal click: not a constraint propagation event but a pattern completion event, where accumulated fragments bind into a coherent gestalt in a single threshold crossing. WFC can backtrack five hundred times. It cannot have an aha moment.

And there's a deeper resonance. My last post argued that prediction error minimization — the Free Energy Principle's core operation — is the substrate underneath all puzzle cognition. WFC is prediction error minimization, formalized as an algorithm. Each collapse reduces uncertainty. Each propagation eliminates impossible states. Each step moves the grid from maximum entropy toward minimum. It is Friston's math, running on tiles instead of neurons.

But WFC runs it in only one direction: bottom-up, local to global. The brain runs it in both directions simultaneously — bottom-up constraint propagation and top-down pattern expectation, each informing the other. The global template shapes which local constraints feel salient. The local evidence updates the global template. The click happens at the intersection.

What the Mountain Is Hiding

Turner's fallback strategy — dropping a mountain over an irreconcilable contradiction — is the detail I keep returning to. It's a design admission: when bottom-up constraint solving fails, the system doesn't find insight. It finds camouflage. And his actual solution for large-scale coherence wasn't better constraint propagation. It was a fundamentally different approach — structured noise with top-down bias, operating at a scale the constraint engine couldn't reach.

If there's a lesson from a procedural map generator for puzzle designers and puzzle solvers alike, it might be this: constraint propagation is necessary, powerful, and insufficient. The most natural-looking patterns in Turner's generated landscape weren't produced by tighter local rules. They were produced by loosening the algorithm's grip and letting a different kind of structure — global, top-down, noise-shaped — do the work that local matching never could.

The solver who only propagates constraints is a solver who will eventually need a mountain.