App Router layout persistence: Route Manifest and Navigation Planner architecture
What This Issue Is Really About
The flat keyed AppElements payload solved the first layout-persistence problem: the browser can merge layout, page, template, slot, and route entries instead of replacing the whole App Router tree.
That bridge should stay.
But the flat map must not become the router's brain.
Vinext now needs to move route meaning out of:
wire keys
missing payload entries
cache-key suffixes
mounted-slot headers
local activeNavigationId checks
runtime cache hits
hint-store reads
artifact-store object existence
The target architecture is not a giant rewrite. It is a controlled migration from implicit meaning to owned decisions:
flat payload transport
+ visible navigation lifecycle authority
+ minimal Route Manifest data
+ small pure NavigationPlanner
+ event-specific commit gate
+ semantic cache coherence, introduced narrowly
+ runtime profiles that execute approved work
+ React projection at the end
The discipline is:
No proof, no reuse.
No proof, no skip.
No proof, no visible commit.
No proof, use the safest event-specific fallback.
But that law has an important qualifier:
Safety fallback must be event-specific and measured.
It must not turn routine dynamic observations into universal cache misses.
Correctness is mandatory. Performance comes from narrow, proven reuse, not optimism.
Architecture Overview
flowchart TD
subgraph Legend["Who owns what (one job per box)"]
direction TB
LG["<table cellpadding='8' cellspacing='0' style='border-collapse:collapse;width:640px'><tr><td align='left' style='width:240px;white-space:nowrap'><b>Route Manifest builder</b></td><td align='left' style='white-space:nowrap'>what can exist</td></tr><tr><td align='left' style='white-space:nowrap'><b>Visible route state + snapshot</b></td><td align='left' style='white-space:nowrap'>what is currently visible</td></tr><tr><td align='left' style='white-space:nowrap'><b>Event</b></td><td align='left' style='white-space:nowrap'>what happened</td></tr><tr><td align='left' style='white-space:nowrap'><b>NavigationPlanner</b></td><td align='left' style='white-space:nowrap'>what event means</td></tr><tr><td align='left' style='white-space:nowrap'><b>Navigation commit gate</b></td><td align='left' style='white-space:nowrap'>may become visible?</td></tr><tr><td align='left' style='white-space:nowrap'><b>Approved commit</b></td><td align='left' style='white-space:nowrap'>mutates browser / store</td></tr><tr><td align='left' style='white-space:nowrap'><b>Cache/reuse coordinator</b></td><td align='left' style='white-space:nowrap'>proves reuse</td></tr><tr><td align='left' style='white-space:nowrap'><b>Encode flat payload</b></td><td align='left' style='white-space:nowrap'>transport only</td></tr><tr><td align='left' style='white-space:nowrap'><b>Build React tree</b></td><td align='left' style='white-space:nowrap'>final projection only</td></tr></table>"]
end
subgraph Build["[1] Build time (once per deploy)"]
direction TB
A["app/ filesystem"] --> B["Route Manifest builder"]
B --> C[("RouteManifest<br/>+ compiled route graph")]
B --> G[("Resource dependency map")]
end
subgraph Decision["[2] Per-event decision (pure)"]
direction TB
EV["Event<br/>soft navigation · refresh ·<br/>back/forward · prefetch ·<br/>server action · render result"]
S[("Visible route state<br/>+ route snapshot<br/>+ commit version")]
K{{"NavigationPlanner"}}
R["Candidate route decision<br/>(navigation kind · requested work ·<br/>visible proposal · trace)"]
EV --> K
S --> K
K --> R
end
L{{"Navigation<br/>commit gate"}}
Reject(["reject /<br/>cache-seed only"])
Hard(["hard navigate"])
subgraph Exec["[3] Approved execution"]
direction TB
AC["Approved commit"]
BD["Update browser state<br/>URL · history · scroll · focus"]
ES[("Payload store")]
P["Build React tree"]
UI(["Visible UI"])
IO["Ask server/cache<br/>for missing payloads"]
AC --> BD --> UI
AC --> ES --> P --> UI
AC --> IO
end
subgraph Server["[4] Server/cache work"]
direction TB
SR["Render or materialize<br/>RSC payloads"]
WE["Encode flat payload"]
SR --> WE
end
subgraph Runtime["[5] Runtime profile boundary"]
direction TB
CC["Cache/reuse coordinator"]
RC[("Runtime cache<br/>hot local layer")]
AS[("Artifact store<br/>immutable payloads")]
CO[("Coherence coordinator<br/>epochs / invalidation floors")]
BJ["Background jobs<br/>revalidate / cleanup"]
HS[("Hints/config store<br/>not authority")]
CC --> RC & AS & CO & BJ
CC -.->|read-only hints| HS
end
C --> K
G --> K
G --> CC
G --> L
R --> L
L -- approve --> AC
L -- reject --> Reject
L -- cross-root / incompatible graph --> Hard
IO --> CC
CC -- cache miss / render needed --> SR
CC -- compatible cached payload --> WE
WE ===|"network<br/>(flat payload boundary)"| ES
classDef pure fill:#7eb3ec,stroke:#1e5fc4,stroke-width:2px,color:#0a2540
classDef gate fill:#f5b876,stroke:#b8530a,stroke-width:3px,color:#3d1f00
classDef visible fill:#7ed197,stroke:#1ea344,stroke-width:2px,color:#0a3d1c
classDef storage fill:#e8d5f0,stroke:#7b3aa3,stroke-width:2px,color:#2d1040
classDef terminal fill:#f0d4d4,stroke:#a33a3a,stroke-width:2px,color:#400a0a
classDef legend fill:#fafafa,stroke:#888,stroke-width:1px,color:#222,text-align:left
class K,R pure
class L gate
class AC,BD,P,UI,IO visible
class C,G,S,ES,RC,AS,CO,HS storage
class Reject,Hard terminal
class LG legend
Current Status
The first layout-persistence milestone has landed.
The first implementation step is not to replace this system. It is to make these existing ownership seams explicit, typed, and testable.
Main Architectural Concerns From Adversarial Review
1. Scope Risk
This issue must not be treated as a single implementation epic that lands compiler facts, lifecycle, cache coherence, runtime storage, skip transport, streaming, and Activity preservation together.
The sane path is:
Build the thinnest end-to-end router spine that preserves today's behavior.
Then promote one semantic decision at a time through the full path.
Avoid PRs that only add large unused type systems. Early PRs must either:
freeze current behavior
introduce an enforceable boundary
move one existing behavior through the new ownership path
or delete an old semantic writer
2. NavigationPlanner Must Stay Small
navigationPlanner.plan() should be a small semantic planner, not the whole router.
It can decide:
navigation kind
root-layout transition intent
route/slot/topology intent from compiled facts
which work must be requested
which visible proposal is being considered
how an observed async result should be interpreted
why the decision was made
It must not do:
fetches
React state updates
URL/history mutation
cache reads/writes
server-action execution
runtime binding access
transition promise settlement
final stale-commit approval
A pure reducer is only realistic if async facts re-enter as explicit events.
The model is two phase:
Phase 1: user/runtime intent event
navigate / refresh / traverse / prefetch / serverActionSubmitted
-> planner proposes operation and requested work
Phase 2: observed result event
flightResponseArrived / renderOutcomeObserved / actionReturned / streamChunkArrived
-> planner interprets the result as commit proposal, noCommit, hardNavigate, cache-seed only, or terminal outcome
-> lifecycle gate decides whether it is still allowed to become visible
The planner must not pretend to know render-time facts before render. Redirects, notFound, boundary errors, dynamic request API reads, cacheability downgrades, stream failures, and server-action revalidation effects are observed outcomes that feed back into the planner as events.
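The two-phase model above can be sketched as a pure planner contract. This is an illustrative sketch only: the event names, `Decision` variants, and field names (`operationId`, `baseCommitVersion`, `newestOperationId`) are assumptions, not the real vinext API.

```typescript
// Hypothetical two-phase planner: intent events in, decisions out. Pure:
// no fetch, no cache, no React state, no mutation.

type IntentEvent =
  | { kind: "navigate"; url: string }
  | { kind: "refresh" }
  | { kind: "traverse"; historyIndex: number }
  | { kind: "prefetch"; url: string }
  | { kind: "serverActionSubmitted"; actionId: string };

type ResultEvent =
  | { kind: "flightResponseArrived"; operationId: number; baseCommitVersion: number }
  | { kind: "renderOutcomeObserved"; operationId: number; outcome: "ok" | "redirect" | "notFound" }
  | { kind: "actionReturned"; operationId: number; baseCommitVersion: number };

interface VisibleState {
  commitVersion: number;     // visibleCommitVersion at planning time
  newestOperationId: number; // most recent operation in the visible lane
}

type Decision =
  | { kind: "requestWork"; reason: string }
  | { kind: "proposeCommit"; operationId: number; reason: string }
  | { kind: "noCommit"; reason: string }
  | { kind: "hardNavigate"; reason: string };

// Phase 1: user/runtime intent -> requested work.
function planIntent(ev: IntentEvent, _state: VisibleState): Decision {
  if (ev.kind === "prefetch") {
    return { kind: "requestWork", reason: "prefetch:cache-seed-only" };
  }
  return { kind: "requestWork", reason: `${ev.kind}:fetch-payload` };
}

// Phase 2: observed async result -> commit proposal or rejection.
// The planner only proposes; the lifecycle gate still decides visibility.
function planResult(ev: ResultEvent, state: VisibleState): Decision {
  if (ev.operationId < state.newestOperationId) {
    return { kind: "noCommit", reason: "superseded-by-newer-operation" };
  }
  if ("baseCommitVersion" in ev && ev.baseCommitVersion < state.commitVersion) {
    return { kind: "noCommit", reason: "stale-base-commit-version" };
  }
  return { kind: "proposeCommit", operationId: ev.operationId, reason: "result-current" };
}
```

Because both phases are pure functions over `(event, state)`, hostile-timeline tests reduce to calling them with out-of-order events and asserting on the returned decisions.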
3. Lifecycle Authority Comes First
The most concrete current pressure is stale async work committing visible state.
activeNavigationId is not strong enough for same-URL refresh and server-action races. Same URL does not mean same visible world. Vinext needs a visibleCommitVersion and one lifecycle owner before broad compiler/cache work.
The lifecycle controller should wrap the existing candidate-commit seam first: the first controller consolidates existing behavior rather than rewriting all router semantics.
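A minimal sketch of that seam, under assumed names (`LifecycleController`, `CandidateCommit`, `approveCommit`): visibleCommitVersion increments in exactly one method, and stale candidates are rejected there.

```typescript
// Illustrative single-owner lifecycle gate. The only code path that may
// advance visibleCommitVersion is approveCommit.

interface CandidateCommit {
  operationId: number;       // which operation produced this candidate
  baseCommitVersion: number; // visible version the candidate was planned against
}

class LifecycleController {
  private visibleCommitVersion = 0;
  private newestOperationId = 0;

  // Every navigation/refresh/action starts an operation and gets a token.
  startOperation(): number {
    return ++this.newestOperationId;
  }

  get version(): number {
    return this.visibleCommitVersion;
  }

  approveCommit(c: CandidateCommit): "committed" | "rejected-stale" {
    if (c.operationId < this.newestOperationId) return "rejected-stale";
    if (c.baseCommitVersion !== this.visibleCommitVersion) return "rejected-stale";
    this.visibleCommitVersion++; // exactly one increment site
    return "committed";
  }
}
```

Same-URL refresh and server-action races fall out naturally: two operations against the same URL get different operation IDs, so the older one is rejected regardless of URL equality.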
4. Safe Fallbacks Can Become A Performance Trap
The law says uncertainty must not degrade to reuse. That is correct.
But a naive implementation can collapse cache hit rate by treating routine runtime observations as global uncertainty. For example, a route that reads a generic header or cookie should not automatically poison every cache class unless that input actually affects the output and can be modeled safely.
Rules:
runtime observations downgrade only the output they affect
public cache dimensions must be allowlisted, canonical, bounded, and redacted
unknown private/auth/draft/session inputs degrade to private, uncacheable, or fresh render
fresh render is one fallback, not the universal fallback
cache-hit rate and variant cardinality must be measured before broad rollout
Do not optimize with probabilistic reuse. Do optimize by keeping proof scopes narrow.
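A sketch of scoped downgrade, with assumed names (`RenderObservation`, `classifyOutput`) and an assumed header allowlist: an observation only downgrades the output it actually affects, and unknown inputs degrade to uncacheable rather than poisoning every cache class.

```typescript
// Illustrative scoped-fallback classifier for one rendered output.

type CacheClass = "public" | "private" | "uncacheable";

interface RenderObservation {
  readHeaders: string[]; // headers this render actually read
  readCookies: boolean;  // any cookie read is treated as session-shaped
  affectsOutput: boolean; // did those reads change the rendered output?
}

function classifyOutput(obs: RenderObservation, allowlisted: Set<string>): CacheClass {
  if (obs.readCookies) return "private";        // never publishable as public
  if (!obs.affectsOutput) return "public";      // observed but provably irrelevant
  const unknown = obs.readHeaders.filter((h) => !allowlisted.has(h.toLowerCase()));
  if (unknown.length > 0) return "uncacheable"; // unmodeled input: fresh render for this output only
  return "public";                              // allowlisted dimensions can key a public variant
}
```

Note the scope: the verdict applies to one output, not to the route's whole cache class, which is what keeps routine header reads from becoming universal cache misses.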
5. Skip Transport Can Backfire
ClientReuseManifest is an untrusted hint. It must not become a server CPU or storage-IO amplifier.
Skip transport must start with the cheapest proven class, likely static layout entries:
same graph/deployment compatibility
same root boundary
same route/topology identity
compatible params/search/interception/mounted-slot context
prior render observation says no dynamic request API usage
no incompatible boundary outcome
local metadata is enough to verify
Hard rule:
If verifying skip requires more work than rendering and sending the entry, do not skip.
Abuse limits are protocol requirements:
maximum manifest byte size
maximum entry count
canonical ordering
bounded hash algorithm
replay window / visibleCommitVersion compatibility
private-entry rejection
unknown-entry ignore path
trace reason for every rejected manifest entry
Skip transport is a later optimization, not a prerequisite for the router spine.
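The abuse limits above can be enforced before any skip verification runs. This sketch assumes a wire shape (`ReuseManifest`, `entries`, `byteSize`) and budgets that are placeholders, not the real protocol.

```typescript
// Illustrative protocol-level guard for an untrusted ClientReuseManifest.
// Accepted entries are still only hints; semantic verification happens later.

interface ManifestEntry { key: string; hash: string }
interface ReuseManifest { entries: ManifestEntry[]; byteSize: number }

const MAX_BYTES = 16 * 1024;      // assumed budget
const MAX_ENTRIES = 64;           // assumed budget
const HASH_RE = /^[0-9a-f]{16}$/; // bounded hash representation

type Verdict = {
  accepted: ManifestEntry[];
  rejected: { key: string; reason: string }[]; // trace reason per rejection
};

function validateManifest(m: ReuseManifest): Verdict {
  if (m.byteSize > MAX_BYTES || m.entries.length > MAX_ENTRIES) {
    // Over-budget manifests are discarded wholesale: render everything.
    return { accepted: [], rejected: [{ key: "*", reason: "manifest-over-budget" }] };
  }
  const accepted: ManifestEntry[] = [];
  const rejected: Verdict["rejected"] = [];
  for (const e of m.entries) {
    if (!HASH_RE.test(e.hash)) {
      rejected.push({ key: e.key, reason: "bad-hash-format" });
      continue;
    }
    accepted.push(e);
  }
  // Canonical ordering keeps downstream verification deterministic.
  accepted.sort((a, b) => (a.key < b.key ? -1 : a.key > b.key ? 1 : 0));
  return { accepted, rejected };
}
```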
6. Deployment Compatibility Must Be Algebra, Not Strict Equality Everywhere
Strict graphVersion and deploymentVersion equality is too brittle for rolling deploys, multi-region edges, previews, canaries, and rollback windows.
The architecture needs a deployment compatibility protocol before hard-navigation decisions depend on version mismatches.
Required concepts:
graph version
asset/deployment version
compatibility map generated at build time
client/server handshake
old-client/new-server fallback
new-client/old-server fallback
hard-navigation loop prevention
asset pinning for in-flight RSC payloads
controlled cache miss or render-fresh path when compatibility is unknown
A version mismatch may require a hard navigation in some cases. It must not produce reload loops or break SPA behavior during normal rolling deploys.
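The algebra can be sketched as an outcome function rather than an equality check. Everything here is assumed shape: `compatibleSets` stands in for the build-time compatibility map, `payloadSchemaOk` for a client-side parse check, and `hardNavAttempts` for a loop-prevention counter.

```typescript
// Illustrative deployment-compatibility outcome decision.

type CompatOutcome = "reuseOk" | "freshRender" | "hardNavigate";

interface CompatInput {
  clientGraph: string;
  serverGraph: string;
  compatibleSets: Record<string, string[]>; // serverGraph -> graphs declared compatible
  payloadSchemaOk: boolean;                 // can the client still parse this payload?
  hardNavAttempts: number;                  // hard navigations already taken this mismatch
}

function decideCompatibility(c: CompatInput): CompatOutcome {
  const declared = (c.compatibleSets[c.serverGraph] ?? []).includes(c.clientGraph);
  if (c.clientGraph === c.serverGraph || declared) return "reuseOk";
  // Unknown compatibility never implies reuse. Prefer a fresh SPA render while the
  // payload still parses; hard navigate otherwise, at most once, so a deploy
  // mismatch cannot produce a reload loop.
  if (c.payloadSchemaOk) return "freshRender";
  return c.hardNavAttempts === 0 ? "hardNavigate" : "freshRender";
}
```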
7. Runtime Profile Must Not Own Semantics
Cloudflare Workers are the primary production target for vinext.
That matters. The semantic core must be binding-free. Cloudflare-specific storage, Durable Objects, KV, Cache API, Queues, or R2 integration belongs behind a runtime profile boundary.
Runtime storage hits are not semantic proof by themselves.
8. Streaming And Activity Are Follow-Up Specs
Streaming chunks and Activity/hidden-route preservation are real future needs, but they should not block the lifecycle spine.
Before streaming reveal is implemented, the architecture must specify:
chunk terminal settlement
stale chunk recovery
boundary ownership generation
cache-seed-only eligibility
no permanently hung Suspense boundary when a chunk is discarded
Activity preservation likewise requires its own specification before it is implemented.
Until then, streaming and Activity remain out of scope for the first migration stage.
9. Correctness Oracle Without Process Theater
Every semantic PR must name its oracle:
Next public semantics conformance
Vinext internal invariant
Intentional documented divergence
This is required because vinext aims to match Next public behavior unless a divergence is deliberate.
But do not require full NavigationTrace, cache cardinality proof, and runtime coherence proof for every small PR. The process should scale with the semantic risk.
Minimal early requirement:
what old writer is being deleted
what new owner writes the decision
what oracle defines correctness
what hostile sequence or boundary case is covered
what fallback occurs on uncertainty
10. Lock Criteria From Final Review
Lock the direction, not every future type shape. The architecture is only useful if future PRs cannot rebuild the old implicit router under cleaner names.
Non-negotiable lock criteria:
visible state mutation happens only through an approved visible commit transaction
visibleCommitVersion increments in exactly one place
NavigationPlanner v0 stays narrow: requestWork, proposeCommit, noCommit, hardNavigate
raw AppElementsWire keys are fenced by constructors/parsers and import-boundary checks
unknown root-layout identity is traced as legacy fallback or uncertainty, not proof of safe merge
reusable output has positive and negative render-observation proof for the relevant scope
CacheVariant dimensions have hard budgets and a breaker path
every reusable/skippable artifact carries a compatibility envelope before cache/skip depend on it
NavigationTrace uses compact reason codes and structured fields, not narrative logs
semantic PRs delete the old writer for the promoted path in the same PR
v0 does not claim full Cache Components or Activity hidden-route preservation
Principal Hardening Addendum
This section is not more architecture. It is the discipline that keeps the architecture from becoming a new implicit router under cleaner names.
Rejected Alternatives
These paths are deliberately rejected:
AppElementsWire as semantic source of truth
Rejected because wire keys are transport compatibility, not route meaning.
They may preserve legacy payload shape while promoted decisions move to owned route state.
Full compiled transition automaton in v0
Rejected because it would front-load future semantics before one current path proves the spine.
The compiler owns facts; the planner owns event interpretation.
Runtime cache or artifact-store hits as semantic proof
Rejected because storage presence does not prove route topology, privacy, cacheability, or compatibility.
Reuse needs explicit compatibility and render-observation proof.
Strict graph/deployment equality everywhere
Rejected because rolling deploys, multi-region edges, previews, canaries, and rollbacks need algebraic compatibility outcomes.
Unknown compatibility must fall back safely, not blindly reload or reuse.
Streaming, Activity, or broad skip transport before the spine
Rejected because these require lifecycle ownership, compatibility envelopes, and cache proof first.
They remain follow-up specs until the base invariants are enforceable.
Future PRs should not reintroduce these rejected shapes under different names without updating this issue with the new evidence.
Machine-Enforced Boundaries
Every durable "must not" needs a structural guard, not just reviewer memory.
Rule: AppElementsWire is transport only.
Enforcement: branded constructors/parsers, import-boundary checks, and raw-key construction tests.
Rule: visible state mutates only through ApprovedVisibleCommit.
Enforcement: single exported writer, visibleCommitVersion single-owner tests, and no raw reducer export.
Rule: NavigationPlanner does not execute effects.
Enforcement: planner module has no fetch, cache, browser, runtime binding, or React state dependencies.
Rule: runtime profile does not own route meaning.
Enforcement: semantic core imports runtime contracts only through typed ports; runtime modules cannot construct route decisions.
Rule: cache reuse requires proof.
Enforcement: ReuseProof / CacheVariant construction is owned by the cache coordinator, with breaker fallback tests for missing or over-budget proof.
Rule: promoted semantic paths delete the old writer.
Enforcement: each semantic PR names the deleted writer and includes a targeted search/check for the old failure shape.
Rule: generated code stays glue.
Enforcement: generated entries call typed contracts; they do not construct NavigationDecisionV0, BrowserDelta, CacheVariant, or ReuseProof directly.
If a rule cannot yet be enforced mechanically, the PR must say why and what later slice will make it enforceable.
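The "AppElementsWire is transport only" rule maps naturally onto a branded type. This sketch assumes the brand and helper names (`WireKey`, `makeWireKey`, `parseWireKey`) and an illustrative key grammar; only the fencing pattern is the point.

```typescript
// Illustrative branded-key fence: raw strings cannot flow into transport code
// as wire keys; they must pass an approved constructor or parser.

declare const WireKeyBrand: unique symbol;
type WireKey = string & { readonly [WireKeyBrand]: true };

const WIRE_KEY_RE = /^(layout|page|template|slot|route):[A-Za-z0-9/_-]+$/;

// The only approved constructor.
function makeWireKey(
  kind: "layout" | "page" | "template" | "slot" | "route",
  path: string,
): WireKey {
  const key = `${kind}:${path}`;
  if (!WIRE_KEY_RE.test(key)) throw new Error(`invalid wire key: ${key}`);
  return key as WireKey;
}

// The only approved parser for inbound raw strings.
function parseWireKey(raw: string): WireKey | null {
  return WIRE_KEY_RE.test(raw) ? (raw as WireKey) : null;
}

// Transport-facing code accepts only WireKey, so passing a bare string
// is a compile-time error outside this module.
function encodeEntry(key: WireKey, payload: string): string {
  return `${key}=${payload}`;
}
```

Import-boundary checks then only need to verify that the two `as WireKey` casts stay inside this one module.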
Compatibility Outcomes And Rollback
Compatibility is an outcome protocol, not a boolean equality check.
unknown compatibility never implies reuse
freshRender is preferred over hardNavigate when it can preserve correctness
hardNavigate has loop prevention
old-client/new-server and new-client/old-server are tested as a matrix
asset/deployment pinning is defined for in-flight RSC payloads before payload compatibility depends on it
compatibility is transitive only for explicitly declared compatibility sets
Rollback and kill-switch requirements:
planner promotion can fall back per route or decision class while preserving old behavior
cache reuse can be forced to freshRender for a route/cache class
skip transport can be disabled globally and per route without changing app code
unknown or bad RouteManifest compatibility can be quarantined to freshRender or hardNavigate with loop prevention
canaries may compare old and new decision traces without affecting visible users
rollback telemetry names the trace reason and compatibility outcome that triggered it
Threat Model And Operating Budgets
The risky systems are cache reuse, skip transport, compatibility envelopes, trace fields, and runtime profiles.
Threats that must stay visible:
cache poisoning
cross-user data leakage
header/cookie/secret exfiltration through cache dimensions or trace fields
CPU amplification from malicious ClientReuseManifest payloads
storage amplification from high-cardinality CacheVariant dimensions
replay of stale reuse hints across deploys
invalidation storms
multi-tenant isolation failures in runtime profiles
hard-navigation reload loops during deploy mismatch
Required defences:
canonicalization and redaction for public dimensions and traces
hard manifest byte and entry-count limits
bounded hash algorithms and canonical ordering
server-side verification for every skip/reuse hint
compatibility envelopes on every reusable/skippable artifact
replay windows tied to visibleCommitVersion / deployment compatibility where needed
per-route CacheVariant ceilings with breaker fallback
private/auth/draft/session downgrade paths that cannot publish public cache entries
runtime profile boundaries that keep bindings out of semantic decisions
Hard correctness budgets:
stale visible mutation tolerance: zero
visibleCommitVersion writers outside the lifecycle owner: zero
prefetch visible commits: zero
cache reuse without proof: zero
skip acceptance without server-side verification: zero
private/auth-sensitive public cache entries: zero
Performance budgets start as measured gates and become numeric only after baseline data exists. Until then, PRs that enable cache reuse, skip transport, or runtime profile IO must report the hot-path counters they affect and the fallback when the budget is exceeded.
Lifecycle Model And Debugger Boundary
The lifecycle model may be expressed as a small dev/test reference state machine.
It exists to validate transitions and explain traces. It must not duplicate route semantics.
Illegal transitions include:
older operation mutates visible state
prefetch becomes visible UI
same-URL server action clobbers a newer visible commit
hard navigation continues as a normal visible commit
rejected streaming chunk leaves a boundary permanently hung, once streaming exists
The NavigationTrace debugger is dev/test-only. It consumes events, decisions, traces, lifecycle state, and commit outcomes. It identifies whether a failure came from the planner, lifecycle gate, compatibility/cache proof, or commit transaction. It must not become a second planner, cache authority, or production observability system.
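Such a reference machine can be a plain transition table. State names here are assumptions; the point is that legality is data the debugger checks, not logic that duplicates route semantics.

```typescript
// Illustrative dev/test reference state machine for one operation's lifecycle.

type OpState = "planned" | "requested" | "proposed" | "committed" | "rejected" | "superseded";

const LEGAL: Record<OpState, OpState[]> = {
  planned: ["requested", "superseded"],
  requested: ["proposed", "rejected", "superseded"],
  proposed: ["committed", "rejected", "superseded"],
  committed: [],  // terminal
  rejected: [],   // terminal
  superseded: [], // terminal: may still seed cache, never visible UI
};

function isLegal(from: OpState, to: OpState): boolean {
  return LEGAL[from].includes(to);
}
```

"Older operation mutates visible state" is then simply the illegal transition `superseded -> committed`, which a trace validator can flag mechanically.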
Core Ownership Model
1. AppElementsWire
Implementation name: AppElementsWire.
Owns:
serialization
deserialization
transport compatibility
wire-key encoding
RSC/HTML payload boundary
merge buffer compatibility while legacy paths remain
Promote SegmentOp, SlotOp, richer effect sets, and cache-specific decisions only when a real semantic slice needs them. Do not ship a fat reducer before one current path proves the spine.
NavigationTrace must be reason-code based: compact codes plus structured fields that explain the decision without becoming another router object graph.
The planner proposes. The lifecycle gate approves. The executor applies.
The planner must be boring at first. Boring is good here.
operation identity
operation lane
base visible commit version
operation token compatibility
terminal state
commit permission
transition promise settlement
visibleCommitVersion increment
RouterState update after successful commit
newer visible work supersedes older visible work
prefetch never commits visible UI
refresh is a real operation and can be superseded
same-URL commits use visibleCommitVersion, not URL alone
stale candidate commits must not patch visible state
server-action results are ordered by OperationToken and visibleCommitVersion
older action results may return values, invalidate, schedule refresh, or cache-seed only; they may not overwrite newer visible state
RSC redirects stay inside one operation lifecycle when possible
6. CommitDecision, ApprovedVisibleCommit, And BrowserDelta
Visible browser mutation must go through one approved commit transaction.
An approved visible commit is the only path that may mutate:
browser URL/history
visible RouterState
payload store
mounted-slot state
scroll/focus restoration
transition promise settlement
visibleCommitVersion
There must be exactly one place where visibleCommitVersion++ happens. Everything else is either candidate work, rejected work, hard navigation, or non-visible cache seeding.
Render observations are not side-channel decisions. They re-enter the planner/lifecycle/cache model as explicit events or result metadata.
Absence of a recorded dynamic read is not enough for reuse unless the render scope was observed completely enough to prove that absence. A reusable output needs a scoped bill of health: what it did observe, what it did not observe, what boundary outcome it produced, and which artifact envelope it belongs to.
The core cache-identity question: what inputs, resources, tags, actions, deployment facts, and runtime observations can invalidate that equivalence?
Start narrow. Do not attempt full cache algebra before the lifecycle spine exists.
Cache v1 should cover:
graph/deployment compatibility
root/route/render identity
params/search where required
mounted-slot fingerprint where required
interception context where required
renderEpoch pairing for HTML/RSC
privacy/cacheability downgrade from render observations
boundary outcome compatibility
Mandatory cache dimension rules:
no raw cookies as public dimensions
no raw headers as public dimensions
no bearer tokens, session cookies, or secrets as public dimensions
no unbounded raw user input as public dimensions
public dimensions must be allowlisted, canonicalized, bounded, classified, and redacted in traces
Mandatory cache budget rules:
maximum encoded dimension length
maximum dimension count
maximum value count per dimension
canonical ordering
redacted trace representation
per-route variant ceiling
breaker fallback when the ceiling is exceeded
If a route exceeds its variant ceiling, the fallback is private, uncacheable, or fresh render for the affected output. Measurement is not enough without an enforcement path.
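The dimension and budget rules can be sketched as a single constructor with a breaker path. The allowlist contents, budget fields, and names (`buildCacheVariant`, `VariantBudget`) are illustrative assumptions.

```typescript
// Illustrative CacheVariant construction: allowlisted, canonicalized,
// bounded, with an enforced breaker when any budget is exceeded.

interface VariantBudget {
  maxDimensions: number;
  maxEncodedLength: number;
  variantCeiling: number; // per-route ceiling on distinct variants
}

const PUBLIC_DIMENSIONS = new Set(["locale", "theme"]); // assumed allowlist

type VariantResult =
  | { kind: "variant"; key: string }
  | { kind: "breaker"; fallback: "uncacheable" };

function buildCacheVariant(
  dims: Record<string, string>,
  budget: VariantBudget,
  existingVariantCount: number,
): VariantResult {
  const entries = Object.entries(dims)
    .map(([k, v]) => [k.toLowerCase(), v] as const) // canonicalize names
    .sort(([a], [b]) => (a < b ? -1 : 1));          // canonical ordering
  for (const [k] of entries) {
    // Non-allowlisted dimension: never a public variant key.
    if (!PUBLIC_DIMENSIONS.has(k)) return { kind: "breaker", fallback: "uncacheable" };
  }
  const key = entries.map(([k, v]) => `${k}=${v}`).join("&");
  if (
    entries.length > budget.maxDimensions ||
    key.length > budget.maxEncodedLength ||
    existingVariantCount >= budget.variantCeiling
  ) {
    return { kind: "breaker", fallback: "uncacheable" }; // measured AND enforced
  }
  return { kind: "variant", key };
}
```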
Every payload or artifact that may later participate in cache reuse or skip transport must carry compatibility metadata before those systems rely on it:
graph version
asset/deployment/build ID
payload schema version
route schema version
RSC protocol version
renderEpoch
root boundary
compatibility set
Cache coherence and skip transport can remain disabled at first. The envelope should land early so old-client/new-server, new-client/old-server, rolling deploy, canary, and rollback behavior has a protocol instead of a boolean equality check.
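A compatibility envelope check under the fields listed above might look like the following sketch. Field names mirror the list; `minRenderEpoch` as the invalidation floor and the `CurrentWorld` shape are assumptions.

```typescript
// Illustrative artifact compatibility envelope and check. Reuse and skip
// must see a compatible envelope before trusting an artifact.

interface ArtifactEnvelope {
  graphVersion: string;
  deploymentId: string;
  payloadSchema: number;
  rscProtocol: number;
  renderEpoch: number;
  rootBoundary: string;
  compatibilitySet: string[]; // graph versions declared compatible at build time
}

interface CurrentWorld {
  graphVersion: string;
  payloadSchema: number;
  rscProtocol: number;
  minRenderEpoch: number; // invalidation floor: older epochs are dead
  rootBoundary: string;
}

function envelopeCompatible(a: ArtifactEnvelope, w: CurrentWorld): boolean {
  const graphOk =
    a.graphVersion === w.graphVersion || a.compatibilitySet.includes(w.graphVersion);
  return (
    graphOk &&
    a.payloadSchema === w.payloadSchema &&
    a.rscProtocol === w.rscProtocol &&
    a.renderEpoch >= w.minRenderEpoch &&
    a.rootBoundary === w.rootBoundary
  );
}
```

An incompatible envelope yields `false` and the caller takes the safe fallback; absence of an envelope must be treated the same as incompatibility, never as a pass.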
Cloudflare is the first production target, but the semantic core must not import raw runtime bindings.
Generic layers:
RequestMemo: per-request dedupe only
Process/Isolate microcache: opportunistic only
Response/artifact cache: hot local cache with semantic metadata
Artifact store: immutable HTML/RSC/payload artifacts
Coherence coordinator: invalidation floors, renderEpoch publication, pairing rules when needed
Background jobs: revalidation, prewarm, cleanup; idempotent by key
Hint/config store: read-mostly hints only; never correctness authority
Hot-path rule:
A static public hit must not require a distributed coordination round trip on every request.
Validity Rules
A reduction or runtime action is invalid if:
visible state mutates outside an ApprovedVisibleCommit
visibleCommitVersion increments from more than one owner
rootLayoutTransition = crossRoot and CommitDecision is not hardNavigate
BrowserDelta preserves a slot but RouterState cannot prove the slot exists or is retained
slot default is rendered but StaticSegmentGraph has no default for that SlotId
slot is marked unmatched but graph resolution found a matching slot route or default
route is intercepted but event/context is not interception-capable
layout is reused across a crossed root boundary
unknown root-layout identity is treated as proof of safe merge after root-layout transition is promoted
cache read occurs without CacheVariant compatibility
cache entry has compatible CacheVariant but incompatible dependency fingerprint or invalidation ownership
cache entry is reused without scoped proof that dynamic/private request APIs were not observed where that proof is required
CacheVariant exceeds the route budget and does not take the breaker fallback
artifact is cached, reused, or skipped without a compatible ArtifactCompatibilityEnvelope
HTML/RSC pair crosses renderEpoch incompatibly
successful RSC payload is paired with incompatible error/notFound/unauthorized HTML, or the reverse
cached notFound/forbidden/unauthorized boundary is reused as a successful route payload
private/auth/cookie/header/draft-sensitive output is cached as public
async result commits with stale visibleCommitVersion
older server-action result overwrites newer visible state
stream chunk mutates visible UI without lifecycle approval, once streaming exists
server accepts ClientReuseManifest entries without verifying graph, deployment, variant, epoch, payload hash, and invalidation compatibility, once skip exists
skip transport is enabled before verification cost and CacheVariant cardinality are measured and bounded
unbounded raw user input becomes a public CacheVariant dimension
runtime hint store or non-authoritative cache decides invalidation floor, current renderEpoch, HTML/RSC pairing, or private cache safety
generated code constructs NavigationDecisionV0, BrowserDelta, or CacheVariant directly instead of calling typed contracts
NavigationTrace relies on narrative logs instead of reason codes and structured fields for semantic decisions
wire-key absence is used as preserve/delete/default/skip proof after the relevant semantic decision has been promoted
uncertain cache/deployment/graph compatibility results in reuse instead of safe fallback
Navigation Kind Semantics
The planner must branch on explicit navigation kind, not infer it from payload shape.
hydrate:
may use previous state: SSR snapshot only
may preserve slots: only if SSR snapshot proves them
commit: initialise or recovery
soft navigate:
may use previous state: yes
may preserve slots: yes
may apply interception: yes
commit: soft commit
hard navigate:
may use previous state: no
commit: full document navigation
refresh:
may use previous state: yes
may preserve slots: yes
may apply interception: current-context only
commit: refresh commit
traverse / popstate:
may use previous state: yes, from history-derived state when available
may preserve slots: yes, when history/current state proves it
commit: traverse commit
prefetch:
may seed cache only
may not commit visible UI
server-action refresh:
may use previous state: yes
may preserve slots: yes
may apply interception: current-context only
commit: lifecycle-approved action refresh, redirect, noCommit, cache-seed only, or refresh-scheduled
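The kind table above is data, so the planner can branch on compiled capabilities instead of inferring them from payload shape. This sketch flattens the rules into booleans; nuances like "SSR snapshot only" and "current-context only" are noted in comments rather than modeled, and all names are assumptions.

```typescript
// Illustrative navigation-kind capability table.

type NavKind =
  | "hydrate" | "soft" | "hard" | "refresh" | "traverse" | "prefetch" | "actionRefresh";

interface KindCaps {
  usePreviousState: boolean;
  preserveSlots: boolean;
  mayCommitVisible: boolean;
}

const NAV_CAPS: Record<NavKind, KindCaps> = {
  // hydrate: previous state means the SSR snapshot only, not client state.
  hydrate:       { usePreviousState: false, preserveSlots: true,  mayCommitVisible: true },
  soft:          { usePreviousState: true,  preserveSlots: true,  mayCommitVisible: true },
  hard:          { usePreviousState: false, preserveSlots: false, mayCommitVisible: true },
  // refresh / actionRefresh: interception is current-context only.
  refresh:       { usePreviousState: true,  preserveSlots: true,  mayCommitVisible: true },
  traverse:      { usePreviousState: true,  preserveSlots: true,  mayCommitVisible: true },
  prefetch:      { usePreviousState: false, preserveSlots: false, mayCommitVisible: false },
  actionRefresh: { usePreviousState: true,  preserveSlots: true,  mayCommitVisible: true },
};

function canCommitVisible(kind: NavKind): boolean {
  return NAV_CAPS[kind].mayCommitVisible;
}
```

"Prefetch never commits visible UI" then stops being a scattered runtime check and becomes a row in the table that the lifecycle gate consults.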
Implementation Plan
This is not a fixed PR count. Split into small, reviewable PRs.
Governing sequence:
freeze today's flat-wire behavior with compatibility tests
fence AppElementsWire
add minimal compiled facts and compatibility envelope skeleton
add lifecycle transaction contract and reason-code trace shell
build the thin end-to-end spine for one current navigation path
promote root-layout hard navigation and delete the old writer for that path
only then promote slots, interception, actions, cache identity, and skip transport
Layer 0: Keep The Landed Foundation
Do not redo the flat payload milestone.
Keep:
flat keyed AppElements payload
layout/page/template/slot/route entries
browser merge/replace behavior
SSR/browser deserialization symmetry
root-layout hard navigation behavior, until promoted
absent-key soft-navigation preservation, until promoted
mounted-slot cache variants, until promoted
Layer 1: Lifecycle Spine First
Goal: make visible commit authority explicit while preserving behavior.
Deliverables:
OperationRecord
operation lanes
terminal states
visibleCommitVersion
ApprovedVisibleCommit transaction
one visibleCommitVersion increment owner
lifecycle approval barrier
pending promise settlement owned by lifecycle
same-URL/server-action stale commit rejection
prefetch cache-only lane
traverse/back-forward intent adapter where platform evidence exists
Acceptance:
newer navigation beats older RSC response
old RSC response can resolve late without committing visible state
prefetch can resolve late and seed cache only
server action resolving after newer visible commit cannot clobber the route
refresh can be superseded
RSC redirect chains keep one pending lifecycle
hard-navigation recovery only fires for the current operation
no-op back/forward cannot leave pending stuck
Layer 2: Fence AppElementsWire
Goal: stop semantic meaning from spreading through flat wire keys.
Deliverables:
AppElementsWire codec boundary
approved wire-key constructors/parsers
raw wire-key parsing restrictions
import-boundary checks
compatibility tests for current payload behavior
No semantic promotion yet.
Layer 3: Minimal Route Manifest
Goal: compile the facts needed for the first promotions.
Explicitly not part of this layer:
full dependency graph
full cache algebra
transition automaton
skip transport
streaming protocol
Activity preservation
Layer 4: NavigationPlanner v0
Goal: route one existing navigation path through the new ownership boundaries while preserving behavior.
Deliverables:
navigationPlanner.plan() for navigate / refresh / traverse / flightResponseArrived
NavigationDecisionV0 only: requestWork / proposeCommit / noCommit / hardNavigate
CommitProposal
ApprovedVisibleCommit handoff
reason-code NavigationTrace
minimal invariant checker
current AppElementsWire emitted from the new path
old path remains only as compatibility fallback for unpromoted paths
The planner must remain small and pure. Async facts re-enter as events. The invariant checker/debugger validates allowed transitions and trace shape; it must not duplicate route semantics.
Layer 5: Promote First Semantic Decisions
Promote one decision area per PR and delete the old writer in the same PR.
Goal: reduce server work and bytes only where proof is cheap.
Initial eligible class:
static layout entries only
same compatible graph/deployment/root
compatible route params/search/interception/mounted-slot context
no dynamic request API observation
no incompatible boundary outcome
local metadata verification only
Deliverables:
ClientReuseManifest as untrusted hint
manifest abuse limits
ServerRenderPlan
ReuseProof
verification-cost budget
trace reason for every accepted/rejected skip
fallback to render when proof is unavailable or expensive
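The verification-cost budget and eligibility rules above can be sketched as a single decision function. The cost fields and reason codes are stand-ins for real measurements, under the assumption that only static layout entries are eligible at first:

```typescript
// Sketch of the skip-transport budget rule: a skip is accepted only when a
// verified proof exists AND verifying it is cheaper than render-and-send.
type SkipCandidate = {
  entryKind: "staticLayout" | "page";
  verificationCostMs: number;
  renderAndSendCostMs: number;
  proofVerified: boolean;
};

function decideSkip(c: SkipCandidate): { skip: boolean; reason: string } {
  if (c.entryKind !== "staticLayout")
    return { skip: false, reason: "SKIP_CLASS_NOT_ELIGIBLE" };
  if (!c.proofVerified) return { skip: false, reason: "SKIP_NO_PROOF" };
  if (c.verificationCostMs >= c.renderAndSendCostMs)
    return { skip: false, reason: "SKIP_VERIFY_TOO_EXPENSIVE" };
  return { skip: true, reason: "SKIP_OK_STATIC_LAYOUT" };
}
```

Every branch returns a reason code, which is what feeds the "trace reason for every accepted/rejected skip" deliverable.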
Layer 8: Runtime Profile v1
Goal: keep runtime execution behind typed contracts after the semantic spine exists.
Deliverables:
runtime cache interface
artifact store interface
coherence coordinator interface, only where required
background job interface with idempotency keys
Cloudflare profile as first implementation
runtime hot-path metrics
The semantic core stays binding-free.
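The binding-free boundary can be sketched as plain interfaces plus an in-memory test profile. Real runtime methods would be async; they are synchronous here for brevity, and all names are illustrative:

```typescript
// Sketch of the runtime profile boundary: the semantic core sees only these
// interfaces; a Cloudflare profile (KV/R2/etc.) plugs in behind them.
interface RuntimeCache {
  get(key: string): string | undefined;
  put(key: string, value: string): void;
}

interface ArtifactStore {
  read(id: string): string | undefined;
}

interface RuntimeProfile {
  cache: RuntimeCache;
  artifacts: ArtifactStore;
}

// In-memory profile for tests; production would bind real storage.
function memoryProfile(): RuntimeProfile {
  const cache = new Map<string, string>();
  const artifacts = new Map<string, string>([["artifact-1", "payload"]]);
  return {
    cache: {
      get: (k) => cache.get(k),
      put: (k, v) => { cache.set(k, v); },
    },
    artifacts: { read: (id) => artifacts.get(id) },
  };
}
```

Hot-path metrics then hang off the interface, so every profile is measured the same way.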
Later Layers: Streaming And Activity
Separate follow-up specs required for:
operation-tokened stream chunks
reveal boundary ownership
stale chunk recovery and settlement
hidden route model
Activity memory/eviction/auth/effect/focus policies
Do not implement these before lifecycle authority, Route Manifest data, and cache coherence v1 exist.
Test Strategy
The most important tests are sequence tests and hostile timelines, not giant snapshots.
Generate event sequences like:
A -> B -> C, with B resolving last
prefetch A
navigate B
old A prefetch resolves
server action resolves after same-URL refresh
server actions A and B resolve out of order
back/back/forward while RSC responses are pending
cross-root navigation while old result arrives
slot mounted
slot unmatched
refresh
action redirect
HTML cache hit but RSC cache miss
RSC cache hit but HTML stale
deployment A client receives deployment B payload
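One hostile timeline from the list (prefetch A, navigate B, old A prefetch resolves) can be written directly against a gate. The tiny `Gate` here is a stand-in for the real lifecycle controller, with illustrative lane names:

```typescript
// Hostile timeline: prefetch A, navigate B, old A response resolves last.
// The late A result may seed cache but must never become visible.
type Op = { id: number; lane: "prefetch" | "navigate" };

class Gate {
  private liveNavigate: Op | null = null;
  visibleUrl = "/";
  cacheSeeds: string[] = [];

  start(lane: Op["lane"], id: number): Op {
    const op = { id, lane };
    if (lane === "navigate") this.liveNavigate = op;
    return op;
  }

  resolve(op: Op, url: string): void {
    // Prefetch never commits; a navigate commits only while still live.
    if (op.lane === "navigate" && this.liveNavigate?.id === op.id) {
      this.visibleUrl = url;
      this.liveNavigate = null;
    } else {
      this.cacheSeeds.push(url);
    }
  }
}
```

The same harness shape covers the other timelines: each sequence is a list of `start`/`resolve` calls followed by invariant assertions.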
Required invariants:
No stale operation commits visible state.
No visible state mutates outside ApprovedVisibleCommit.
No visibleCommitVersion increment exists outside the lifecycle owner.
No older server-action result overwrites newer visible state.
No cache entry is read without CacheVariant compatibility.
No cache entry is reused without required positive and negative RenderObservation proof.
No cache variant ceiling overflow reuses public cache.
No cached/reused/skipped artifact lacks a compatible ArtifactCompatibilityEnvelope.
No HTML/RSC pair crosses renderEpoch incompatibly.
No layout is reused across root-layout boundary.
No slot is preserved unless RouterState proves it existed, once promoted.
No interception applies without an interception-capable context, once promoted.
No AppElementsWire absence has semantic meaning after the relevant decision is promoted.
No private/cookie/auth-sensitive response is cached under a public variant.
No runtime hint-store value is treated as authoritative invalidation or renderEpoch state.
No ClientReuseManifest entry is trusted without server-side verification, once skip exists.
No uncertain graph/deployment/cache state results in reuse.
Every semantic change names its oracle.
Hot Path Budget
Track performance by route/cache class. Do not claim the architecture is faster merely because it is more formal.
At minimum, measure:
runtime cache reads/writes
coordinator calls
artifact store reads/writes
background job writes and duplicate/idempotent replays
runtime subrequests / external IO
server CPU time
RSC bytes sent
layouts rendered
client remount count
p95/p99 navigation latency
cache variant cardinality
ElementStore memory
stale commit rejection count
skip verification cost versus render cost
hard-navigation count caused by deployment mismatch
cache variant breaker count
compatibility-envelope mismatch count
trace reason-code distribution
Operating limits:
static layout-preserving planner path: zero external IO
hot cache hit: bounded runtime profile calls, no distributed coordination round trip by default
skip transport: verification cost must be cheaper than render-and-send, otherwise render/send
CacheVariant: hard per-route cardinality ceiling with breaker fallback
visible commit correctness: stale visible mutation tolerance is zero
Performance claims should be based on work avoided:
fewer layout renders
fewer RSC bytes
fewer remounts
lower p95/p99 latency
bounded runtime/coordinator calls on hot paths
Success Criteria
This migration succeeds when:
The flat payload wire format is transport, not semantic authority.
Visible commits go through one lifecycle-approved atomic transaction.
Same-URL/server-action stale commits are rejected, cache-seeded only, or translated into explicit refresh behavior.
NavigationPlanner is small, pure, event-driven, and v0 exposes only requestWork/proposeCommit/noCommit/hardNavigate.
Async render/server-action outcomes feed back as explicit events or result metadata.
Every promoted route-semantic decision has one owner and the old writer is deleted.
Every applied commit has explicit transitionMode and lifecycle-approved fallback path.
Every cache read uses CacheVariant compatibility.
Every cache write uses render observations to decide privacy/cacheability.
Every reused output has scoped positive and negative render-observation proof where reuse depends on absence.
Every CacheVariant has enforced budgets and a breaker fallback.
Every cached, reused, or skipped artifact has a compatible artifact envelope.
Every HTML/RSC pair has compatible renderEpoch.
Every skip is explained by cheap graph/cache/dependency proof and is bounded by abuse limits.
Deployment/version mismatch has a compatibility protocol and cannot produce hard-navigation loops.
Cloudflare runtime execution stays behind a profile boundary.
Generated code calls typed contracts and does not recreate semantic decisions.
NavigationTrace uses compact reason codes and structured fields.
Missing payload, missing wire key, missing branch, and cache miss do not imply semantic preservation after the relevant decision is promoted.
Out Of Scope For The First Migration Stage
Do not add yet:
full generated transition automaton
separate reference interpreter
global strong cache consistency for all content
complete Cloudflare storage profile before the semantic spine exists
new public router API surface
broad skip transport
custom stream chunk protocol
Activity / hidden route preservation
full Cache Components parity
stronger-than-Next performance claims before benchmarks prove them
This architecture is not trying to make a prettier flat map.
It is trying to make the router less magical without making it slower, more brittle, or impossible to ship.
The NavigationPlanner should be small. Lifecycle authority should land early. Cache proof should start narrow. Skip transport should prove that it saves work before it is trusted. Deployment compatibility must be designed for rolling edges. Runtime profiles execute approved work; they do not own route meaning.
The discipline remains:
No proof, no reuse.
No proof, no skip.
No proof, no visible commit.
No proof, use the safest event-specific fallback.
The implementation discipline is just as important:
Thin spine first.
One semantic promotion at a time.
Delete the old writer when the new owner lands.
Measure before claiming performance.
Architecture Overview
```mermaid
flowchart TD
  subgraph Legend["Who owns what (one job per box)"]
    direction TB
    LG["<table cellpadding='8' cellspacing='0' style='border-collapse:collapse;width:640px'><tr><td align='left' style='width:240px;white-space:nowrap'><b>Route Manifest builder</b></td><td align='left' style='white-space:nowrap'>what can exist</td></tr><tr><td align='left' style='white-space:nowrap'><b>Visible route state + snapshot</b></td><td align='left' style='white-space:nowrap'>what is currently visible</td></tr><tr><td align='left' style='white-space:nowrap'><b>Event</b></td><td align='left' style='white-space:nowrap'>what happened</td></tr><tr><td align='left' style='white-space:nowrap'><b>NavigationPlanner</b></td><td align='left' style='white-space:nowrap'>what event means</td></tr><tr><td align='left' style='white-space:nowrap'><b>Navigation commit gate</b></td><td align='left' style='white-space:nowrap'>may become visible?</td></tr><tr><td align='left' style='white-space:nowrap'><b>Approved commit</b></td><td align='left' style='white-space:nowrap'>mutates browser / store</td></tr><tr><td align='left' style='white-space:nowrap'><b>Cache/reuse coordinator</b></td><td align='left' style='white-space:nowrap'>proves reuse</td></tr><tr><td align='left' style='white-space:nowrap'><b>Encode flat payload</b></td><td align='left' style='white-space:nowrap'>transport only</td></tr><tr><td align='left' style='white-space:nowrap'><b>Build React tree</b></td><td align='left' style='white-space:nowrap'>final projection only</td></tr></table>"]
  end
  subgraph Build["[1] Build time (once per deploy)"]
    direction TB
    A["app/ filesystem"] --> B["Route Manifest builder"]
    B --> C[("RouteManifest<br/>+ compiled route graph")]
    B --> G[("Resource dependency map")]
  end
  subgraph Decision["[2] Per-event decision (pure)"]
    direction TB
    EV["Event<br/>soft navigation · refresh ·<br/>back/forward · prefetch ·<br/>server action · render result"]
    S[("Visible route state<br/>+ route snapshot<br/>+ commit version")]
    K{{"NavigationPlanner"}}
    R["Candidate route decision<br/>(navigation kind · requested work ·<br/>visible proposal · trace)"]
    EV --> K
    S --> K
    K --> R
  end
  L{{"Navigation<br/>commit gate"}}
  Reject(["reject /<br/>cache-seed only"])
  Hard(["hard navigate"])
  subgraph Exec["[3] Approved execution"]
    direction TB
    AC["Approved commit"]
    BD["Update browser state<br/>URL · history · scroll · focus"]
    ES[("Payload store")]
    P["Build React tree"]
    UI(["Visible UI"])
    IO["Ask server/cache<br/>for missing payloads"]
    AC --> BD --> UI
    AC --> ES --> P --> UI
    AC --> IO
  end
  subgraph Server["[4] Server/cache work"]
    direction TB
    SR["Render or materialize<br/>RSC payloads"]
    WE["Encode flat payload"]
    SR --> WE
  end
  subgraph Runtime["[5] Runtime profile boundary"]
    direction TB
    CC["Cache/reuse coordinator"]
    RC[("Runtime cache<br/>hot local layer")]
    AS[("Artifact store<br/>immutable payloads")]
    CO[("Coherence coordinator<br/>epochs / invalidation floors")]
    BJ["Background jobs<br/>revalidate / cleanup"]
    HS[("Hints/config store<br/>not authority")]
    CC --> RC & AS & CO & BJ
    CC -.->|read-only hints| HS
  end
  C --> K
  G --> K
  G --> CC
  G --> L
  R --> L
  L -- approve --> AC
  L -- reject --> Reject
  L -- cross-root / incompatible graph --> Hard
  IO --> CC
  CC -- cache miss / render needed --> SR
  CC -- compatible cached payload --> WE
  WE ===|"network<br/>(flat payload boundary)"| ES
  classDef pure fill:#7eb3ec,stroke:#1e5fc4,stroke-width:2px,color:#0a2540
  classDef gate fill:#f5b876,stroke:#b8530a,stroke-width:3px,color:#3d1f00
  classDef visible fill:#7ed197,stroke:#1ea344,stroke-width:2px,color:#0a3d1c
  classDef storage fill:#e8d5f0,stroke:#7b3aa3,stroke-width:2px,color:#2d1040
  classDef terminal fill:#f0d4d4,stroke:#a33a3a,stroke-width:2px,color:#400a0a
  classDef legend fill:#fafafa,stroke:#888,stroke-width:1px,color:#222,text-align:left
  class K,R pure
  class L gate
  class AC,BD,P,UI,IO visible
  class C,G,S,ES,RC,AS,CO,HS storage
  class Reject,Hard terminal
  class LG legend
```
Current Status
The first layout-persistence milestone has landed.
Vinext currently has:
The current code also already contains the future architecture in scattered form:
The first implementation step is not to replace this system. It is to make these existing ownership seams explicit, typed, and testable.
Main Architectural Concerns From Adversarial Review
1. Scope Risk
This issue must not be treated as a single implementation epic that lands compiler facts, lifecycle, cache coherence, runtime storage, skip transport, streaming, and Activity preservation together.
The sane path is:
Avoid PRs that only add large unused type systems. Early PRs must either:
2. NavigationPlanner Must Stay Small
`navigationPlanner.plan()` should be a small semantic planner, not the whole router. It can decide:
It must not do:
A pure reducer is only realistic if async facts re-enter as explicit events.
The model is two phase:
The planner must not pretend to know render-time facts before render. Redirects, `notFound`, boundary errors, dynamic request API reads, cacheability downgrades, stream failures, and server-action revalidation effects are observed outcomes that feed back into the planner as events.
3. Lifecycle Authority Comes First
The most concrete current pressure is stale async work committing visible state.
`activeNavigationId` is not strong enough for same-URL refresh and server-action races. Same URL does not mean same visible world. Vinext needs a `visibleCommitVersion` and one lifecycle owner before broad compiler/cache work.
The lifecycle controller should wrap the existing candidate-commit seam first:
The first controller should consolidate existing behavior. It should not rewrite all router semantics.
4. Safe Fallbacks Can Become A Performance Trap
The law says uncertainty must not degrade to reuse. That is correct.
But a naive implementation can collapse cache hit rate by treating routine runtime observations as global uncertainty. For example, a route that reads a generic header or cookie should not automatically poison every cache class unless that input actually affects the output and can be modeled safely.
Rules:
Do not optimize with probabilistic reuse. Do optimize by keeping proof scopes narrow.
5. Skip Transport Can Backfire
`ClientReuseManifest` is an untrusted hint. It must not become a server CPU or storage-IO amplifier.
Skip transport must start with the cheapest proven class, likely static layout entries:
Hard rule:
Abuse limits are protocol requirements:
Skip transport is a later optimization, not a prerequisite for the router spine.
6. Deployment Compatibility Must Be Algebra, Not Strict Equality Everywhere
Strict `graphVersion` and `deploymentVersion` equality is too brittle for rolling deploys, multi-region edges, previews, canaries, and rollback windows.
The architecture needs a deployment compatibility protocol before hard-navigation decisions depend on version mismatches.
Required concepts:
A version mismatch may require a hard navigation in some cases. It must not produce reload loops or break SPA behavior during normal rolling deploys.
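A compatibility evaluator along these lines turns version pairs into outcomes instead of booleans. The outcome names and the "same major means mergeable" rule are illustrative policy, not vinext's real algebra:

```typescript
// Sketch of compatibility as an outcome protocol rather than strict
// version equality.
type CompatOutcome = "compatible" | "degradedReuse" | "hardNavigate";

function evaluateCompat(clientVersion: string, serverVersion: string): CompatOutcome {
  if (clientVersion === serverVersion) return "compatible";
  const [cMajor] = clientVersion.split(".");
  const [sMajor] = serverVersion.split(".");
  // Same major during a rolling deploy: payloads still merge, but cached
  // reuse is treated conservatively instead of forcing a reload loop.
  if (cMajor === sMajor) return "degradedReuse";
  return "hardNavigate";
}
```

The key property is the middle outcome: a routine rolling deploy degrades reuse rather than triggering hard navigation, which is what prevents reload loops.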
7. Runtime Profile Must Not Own Semantics
Cloudflare Workers are the primary production target for vinext.
That matters. The semantic core must be binding-free. Cloudflare-specific storage, Durable Objects, KV, Cache API, Queues, or R2 integration belongs behind a runtime profile boundary.
Runtime profile may execute:
Runtime profile must not decide:
Runtime storage hits are not semantic proof by themselves.
8. Streaming And Activity Are Follow-Up Specs
Streaming chunks and Activity/hidden-route preservation are real future needs, but they should not block the lifecycle spine.
Before streaming reveal is implemented, the architecture must specify:
Before Activity preservation is implemented, it must specify:
Until then, these remain out of scope for the first migration stage.
9. Correctness Oracle Without Process Theater
Every semantic PR must name its oracle:
This is required because vinext aims to match Next public behavior unless a divergence is deliberate.
But do not require full `NavigationTrace`, cache cardinality proof, and runtime coherence proof for every small PR. The process should scale with the semantic risk.
Minimal early requirement:
10. Lock Criteria From Final Review
Lock the direction, not every future type shape. The architecture is only useful if future PRs cannot rebuild the old implicit router under cleaner names.
Non-negotiable lock criteria:
Principal Hardening Addendum
This section is not more architecture. It is the discipline that keeps the architecture from becoming a new implicit router under cleaner names.
Rejected Alternatives
These paths are deliberately rejected:
Future PRs should not reintroduce these rejected shapes under different names without updating this issue with the new evidence.
Machine-Enforced Boundaries
Every durable "must not" needs a structural guard, not just reviewer memory.
If a rule cannot yet be enforced mechanically, the PR must say why and what later slice will make it enforceable.
Compatibility Outcomes And Rollback
Compatibility is an outcome protocol, not a boolean equality check.
The evaluator answers:
Required laws:
Rollback and kill-switch requirements:
Threat Model And Operating Budgets
The risky systems are cache reuse, skip transport, compatibility envelopes, trace fields, and runtime profiles.
Threats that must stay visible:
Required defences:
Hard correctness budgets:
Performance budgets start as measured gates and become numeric only after baseline data exists. Until then, PRs that enable cache reuse, skip transport, or runtime profile IO must report the hot-path counters they affect and the fallback when the budget is exceeded.
Lifecycle Model And Debugger Boundary
The lifecycle model may be expressed as a small dev/test reference state machine:
It exists to validate transitions and explain traces. It must not duplicate route semantics.
Illegal transitions include:
The NavigationTrace debugger is dev/test-only. It consumes events, decisions, traces, lifecycle state, and commit outcomes. It identifies whether a failure came from the planner, lifecycle gate, compatibility/cache proof, or commit transaction. It must not become a second planner, cache authority, or production observability system.
Core Ownership Model
1. AppElementsWire
Implementation name: `AppElementsWire`.
Owns:
Does not own:
The flat payload is how data travels. It is not how the router thinks.
2. RouterState And RouteSnapshot
Implementation names: `RouterState`, `RouteSnapshot`.
Route state owns visible continuity.
It must keep together the facts that change together:
State must not be reconstructed from wire keys.
3. Navigation Events
Implementation name: `NavigationEvent`.
Every router input becomes an event.
Minimum v0 event set:
Async result events carry causal proof:
A result is not allowed to commit because it finished. It may commit only if lifecycle still authorizes it.
4. NavigationPlanner
Implementation name: `navigationPlanner.plan()`.
The planner owns route semantics, but only at the semantic planning and interpretation layer.
Shape:
Minimum v0 output:
Promote `SegmentOp`, `SlotOp`, richer effect sets, and cache-specific decisions only when a real semantic slice needs them. Do not ship a fat reducer before one current path proves the spine.
`NavigationTrace` must be reason-code based: compact codes plus structured fields that explain the decision without becoming another router object graph.
The planner proposes. The lifecycle gate approves. The executor applies.
The planner must be boring at first. Boring is good here.
5. NavigationLifecycleController
Implementation name: `NavigationLifecycleController`.
Owns:
Operation lanes:
Terminal states:
Rules:
6. CommitDecision, ApprovedVisibleCommit, And BrowserDelta
Visible browser mutation must go through one approved commit transaction.
An approved visible commit is the only path that may mutate:
There must be exactly one place where `visibleCommitVersion++` happens. Everything else is either candidate work, rejected work, hard navigation, or non-visible cache seeding.
Hard navigation is terminal. It should not sit beside normal browser deltas.
`transitionMode` is part of commit authority. Shell code may execute the mode, but must not reinterpret the semantic decision.
7. Route Facts Compiler
Implementation names: `RouteManifestBuilder`, `RouteManifest`, `StaticSegmentGraph`.
The compiler owns build-time topology and stable facts.
It compiles facts the current implementation already knows implicitly:
The compiler should not emit a full transition automaton. Runtime transition decisions remain in the planner.
Build-time classifications are hints. Runtime observations can always downgrade cacheability.
8. Render Observation Protocol
Implementation names: `RenderOutcome`, `RenderObservation`.
This is the missing bridge between pure planning and real execution.
A render must report what actually happened before cache write, reuse proof, streaming reveal, or visible commit approval can rely on it:
Render observations are not side-channel decisions. They re-enter the planner/lifecycle/cache model as explicit events or result metadata.
Absence of a recorded dynamic read is not enough for reuse unless the render scope was observed completely enough to prove that absence. A reusable output needs a scoped bill of health: what it did observe, what it did not observe, what boundary outcome it produced, and which artifact envelope it belongs to.
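The "scoped bill of health" rule can be sketched as a small predicate over an observation record. The field names are illustrative, not the real `RenderObservation` shape:

```typescript
// Sketch of the bill-of-health rule: reuse needs positive proof of complete
// observation, not just the absence of a recorded dynamic read.
type RenderObservationSketch = {
  scopeFullyObserved: boolean;      // was the render scope instrumented end to end?
  dynamicReads: string[];           // e.g. ["cookies", "headers"]
  boundaryOutcome: "ok" | "notFound" | "error";
};

function mayReuse(obs: RenderObservationSketch): boolean {
  // Negative proof ("no dynamic reads") only counts when the scope was
  // observed completely; an unobserved scope is uncertainty, not safety.
  return (
    obs.scopeFullyObserved &&
    obs.dynamicReads.length === 0 &&
    obs.boundaryOutcome === "ok"
  );
}
```

Note the asymmetry: an empty `dynamicReads` list with `scopeFullyObserved: false` is rejected, which is exactly the "absence is not proof" law.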
9. CacheVariant And Resource Dependencies
Implementation names: `CacheVariant`, `ResourceDependencyGraph`.
Cache identity answers:
Resource dependencies answer:
Start narrow. Do not attempt full cache algebra before the lifecycle spine exists.
Cache v1 should cover:
Mandatory cache dimension rules:
Mandatory cache budget rules:
If a route exceeds its variant ceiling, the fallback is private, uncacheable, or fresh render for the affected output. Measurement is not enough without an enforcement path.
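The ceiling-plus-breaker enforcement path can be sketched as a per-route budget object. The ceiling value and class name are illustrative:

```typescript
// Sketch of the cache-variant ceiling: once a route exceeds its variant
// budget, the breaker trips and further outputs fall back to uncacheable
// fresh renders instead of minting new public variants.
class VariantBudget {
  private variants = new Set<string>();
  breakerTripped = false;

  constructor(private readonly ceiling: number) {}

  admit(variantKey: string): "cacheable" | "uncacheable" {
    if (this.breakerTripped) return "uncacheable";
    if (!this.variants.has(variantKey) && this.variants.size >= this.ceiling) {
      this.breakerTripped = true; // measured AND enforced, not just counted
      return "uncacheable";
    }
    this.variants.add(variantKey);
    return "cacheable";
  }
}
```

Tripping the breaker permanently (until reset by operations) is one possible policy; the point is that overflow has an enforcement path, not just a metric.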
10. Artifact Compatibility Envelope
Implementation name: `ArtifactCompatibilityEnvelope`.
Every payload or artifact that may later participate in cache reuse or skip transport must carry compatibility metadata before those systems rely on it:
Cache coherence and skip transport can remain disabled at first. The envelope should land early so old-client/new-server, new-client/old-server, rolling deploy, canary, and rollback behavior has a protocol instead of a boolean equality check.
11. Runtime Profile Boundary
Implementation names: `RuntimeProfile`, `CacheCoordinator`, `ArtifactStore`.
Runtime profiles execute approved work.
Cloudflare is the first production target, but the semantic core must not import raw runtime bindings.
Generic layers:
Hot-path rule:
Validity Rules
A reduction or runtime action is invalid if:
Navigation Kind Semantics
The planner must branch on explicit navigation kind, not infer it from payload shape.
Implementation Plan
This is not a fixed PR count. Split into small, reviewable PRs.
Governing sequence:
Layer 0: Keep The Landed Foundation
Do not redo the flat payload milestone.
Keep:
Layer 1: Lifecycle Spine First
Goal: make visible commit authority explicit while preserving behavior.
Deliverables:
Acceptance:
Related Issues / PRs / References
`__VINEXT_CLASS` stub patching in `generateBundle`
#863: generated-code lifecycle seam should move toward owned typed contracts