Forcing Functions Don’t Make Seams — They Make Them Visible
A team is evolving their framework. They add a new constraint — a regulatory requirement, a new threat class, a new product surface, a new dimension of evaluation. The integration prompts a design conversation. We need a refactor.
Someone says: the new requirement is the forcing function. We have to do this work now.
The phrase forcing function is doing something specific in that sentence. It’s saying: this new thing creates the need for the new architecture. Without it, we wouldn’t be doing this refactor. The integration is what makes the refactor necessary.
This reading is intuitive. It is also wrong in a way that regularly costs engineering teams a quarter or two of unnecessary architectural work.
The accurate reading: a forcing function does not create the need for new design. It reveals a design that was already implicit in the system, by making the existing implicit structure impossible to keep ignoring. The refactor that follows is mostly naming what was already there — not building it.
The distinction matters because the two readings produce very different engineering decisions.
Two Readings of “Forcing Function”
Reading 1 — Creation. The forcing function creates the need for new design. The integration is the cause; the refactor is the response. Without the integration, the architecture would have been fine. Engineering work expands to meet the new requirement.
Reading 2 — Revelation. The forcing function reveals an architectural seam that was already there, implicit, unnamed. The integration is the catalyst that makes the seam impossible to keep ignoring. The refactor is mostly naming and extending what already exists. Engineering work shrinks because what looked like new scope is the existing scope, surfaced.
Both readings use the same words — “forcing function,” “necessary refactor,” “the integration drove this.” The behavioural difference is in what the team does next. Reading 1 builds new architecture. Reading 2 names existing architecture.
When a team commits to Reading 1 by default, they over-architect. They build infrastructure to handle the new dimension, and only later discover (sometimes years later) that the infrastructure they needed was already there, with imperfect naming. The new infrastructure runs parallel to the old, the old isn’t decommissioned, and the system carries the redundancy as a permanent cost.
When a team commits to Reading 2 by default, they sometimes miss real new structure that the integration genuinely requires. Then the integration fails to land cleanly because the team tried to absorb it into existing structure that wasn’t actually a fit.
Neither reading is right by default. The diagnostic question is what makes the difference.
The Diagnostic
When a new integration prompts the “we need a refactor” conversation, ask the diagnostic question:
If I look at the codebase as it is today, is the structure the refactor would build already implicit somewhere — decoupled by partial discipline, scattered across modules, named imprecisely?
Concrete sub-questions:
- Is there a place in the codebase where the new layer would land, but the existing code already has the responsibility split in some form?
- Is the typed contract between the new components already being passed around, just without a name?
- If I imagine writing the refactor as a pure renaming pass (rename module X to PDP, the policy decision point; rename module Y to PEP, the policy enforcement point; formalise the interface between them), how much of the actual refactor is covered by that?
If most of the refactor is renaming and minor extension, you are looking at Reading 2. The forcing function is revealing existing structure. Engineering scope is smaller than it looks.
If the renaming pass leaves significant new code paths, new modules, new responsibilities to build, you are looking at Reading 1. The forcing function is creating new structure. Engineering scope is real.
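The renaming-pass thought experiment can be made concrete. Here is a minimal sketch in Python, with hypothetical module and class names (nothing here comes from a real codebase): the “before” code already splits decision from enforcement, communicating through an ad-hoc dict, which is the unnamed contract. The “after” is almost entirely a renaming pass; the only genuinely new construct is the typed contract.

```python
from dataclasses import dataclass

# BEFORE: the structure is already here, just unnamed.
# One function decides, its caller enforces, and they communicate
# through an ad-hoc dict -- the implicit, unnamed contract.

def check_rules(request: dict) -> dict:          # implicit decision point
    allowed = request.get("role") == "admin"
    return {"allowed": allowed, "reason": "role check"}

def handle(request: dict) -> str:                # implicit enforcement point
    result = check_rules(request)
    return "200 OK" if result["allowed"] else "403 Forbidden"

# AFTER: mostly renaming. The only new code is the named contract.

@dataclass(frozen=True)
class Decision:                                  # the dict, given a name
    allowed: bool
    reason: str

class PolicyDecisionPoint:                       # check_rules, given a name
    def decide(self, request: dict) -> Decision:
        allowed = request.get("role") == "admin"
        return Decision(allowed=allowed, reason="role check")

class PolicyEnforcementPoint:                    # handle, given a name
    def __init__(self, pdp: PolicyDecisionPoint):
        self.pdp = pdp

    def handle(self, request: dict) -> str:
        decision = self.pdp.decide(request)
        return "200 OK" if decision.allowed else "403 Forbidden"
```

Diffing the two halves is the diagnostic in miniature: behaviour is identical, the responsibility split already existed, and the renaming pass covers everything except the `Decision` type. On that evidence, you are in Reading 2.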
Most cases, in mature codebases that have evolved organically, are Reading 2. The system has accreted structure faster than it has acquired explicit names for that structure. New integrations expose the unnamed structure. The work is naming.
Why Engineering Scope Shrinks
When the diagnostic confirms Reading 2, several things change:
1. No new top-level workstream. The work absorbs into existing roadmap items. The new dimension lands at the implicit-but-unnamed seam, which becomes a properly-named seam through the renaming-and-extension pass. The team doesn’t add a new technical-debt item titled “build the new architecture”; they add (or already have) items titled “extend the existing surfaces in the way the integration requires.”
2. Migration cost is lower. Renaming is cheaper than rebuilding. Extending an existing module is cheaper than wiring a new one in. The migration plan compresses by an order of magnitude.
3. Risk of regression is lower. The behaviour you depend on is already in production code. Naming and extending preserves the behaviour. Building new architecture is more likely to introduce subtle behavioural drift.
4. Documentation and team alignment land faster. The new naming, once landed, is self-explanatory because it accurately describes what the system does. There is no gap between “the architecture as documented” and “the architecture as it actually behaves.”
The reframe-subtracts-work outcome is the quality signal. When you reframe a refactor and the work-list shrinks, the reframe is right. When you reframe and the work-list grows, the reframe is wrong (or it’s actually Reading 1 and you should commit to the bigger work).
The Anti-Pattern
The anti-pattern: a team locks into Reading 1 because the new architecture sounds more impressive than naming what’s there.
This is real. “We built the PDP/PEP architecture” sounds like substantial engineering. “We named the seam that was already in the rule pipeline” sounds like documentation work. The team gets credit, internally and externally, for the first framing. The second framing makes the work look small.
The framing is wrong. The naming work is the higher-leverage engineering, almost every time. Naming compounds — every future refactor lands more cleanly on top of a correctly-named substrate. Building parallel infrastructure dilutes — every future refactor has to choose between the new and the old, and the system carries the cost of the dilution.
A team that internalises the diagnostic gets to do the higher-leverage work and resist the impulse toward visible-but-low-leverage building. The work shows up smaller in the sprint review. It is structurally better.
When Reading 1 Is Actually Correct
The diagnostic isn’t a presumption against new architecture. Sometimes a new integration genuinely requires new structure that wasn’t implicit before:
- A new substrate (e.g., adding a new platform that the framework hasn’t run on)
- A new actor class (e.g., introducing peer-to-peer agent interaction where only client-server existed before)
- A new evidence channel (e.g., adding a verification source that has no analogue in the existing pipeline)
In these cases, the renaming-and-extension pass leaves significant work undone, and the team should commit to building the new architecture. The diagnostic isn’t an argument against new construction; it’s an argument for not assuming new construction is necessary until the renaming pass has been tried.
The decision rule: try the renaming pass first. Imagine the refactor as pure naming + minor extension. If that covers most of the work, the architecture was already there. If it leaves significant gaps, the architecture genuinely needs new construction.
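One way to run the decision rule is to tag each task in the imagined refactor plan and measure how much a pure naming-and-extension pass covers. A hedged sketch: the tags, the example plan, and the 0.7 threshold are illustrative choices, not figures from the text.

```python
# Classify an imagined refactor plan by renaming-pass coverage.
# Each task is (description, kind), kind in {"rename", "extend", "new"}.
# The 0.7 cutoff is an arbitrary illustrative threshold.

def classify(tasks: list[tuple[str, str]]) -> str:
    covered = sum(1 for _, kind in tasks if kind in ("rename", "extend"))
    coverage = covered / len(tasks)
    if coverage >= 0.7:
        return "Reading 2: name existing structure"
    return "Reading 1: build new structure"

plan = [
    ("rename rule_pipeline to PolicyDecisionPoint", "rename"),
    ("name the result dict as a Decision type", "rename"),
    ("extend Decision with an audit field", "extend"),
    ("build a new peer-to-peer negotiation module", "new"),
]
print(classify(plan))  # prints "Reading 2: name existing structure"
```

The point is not the arithmetic but the discipline: writing the plan down as tagged tasks forces the team to say, item by item, whether each piece of work is naming, extension, or genuinely new construction, before committing to the bigger reading.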
The Disposition
The disposition is epistemic humility about the codebase you have.
Mature codebases accumulate structure faster than they accumulate names for that structure. By the time a forcing function arrives, the structure the integration appears to require has often already been built — partially, imperfectly, by previous engineers responding to other forcing functions, often before anyone formalised what they were doing.
The skilled architectural move, in that situation, is to find the existing structure rather than build new structure on top of it. Find it; name it; extend it. The result is a system whose architecture is legible, whose evolution is cumulative, and whose engineering scope at each step is smaller than the impressive-sounding alternative.
A team that does this for several years has a codebase that compounds. A team that builds new architecture for every forcing function has a codebase that drifts.
The forcing function is a diagnostic moment. The good move is the one that makes the system more legible, not the one that makes the work more visible. Most of the time, those are different choices.