Oak Chain visual explainer · live model from oak-segment-consensus
Aeron Locality · Cross-Cluster Read Fabric

Each cluster is an Oak fiefdom. The network is a fabric, not one giant quorum.

Inside a cluster, writes are ordered by Aeron and applied deterministically on every validator. Across clusters, content moves through lazy read-only mounts over HTTP. The current runtime is a deliberate split: hard consensus locally, soft visibility globally.

Any node can accept ingress. Followers can receive the HTTP request, but only the leader orders the replicated log entry.
Accepted is not committed. The API returns 202 Accepted once the proposal is queued, before quorum commit.
Remote clusters stay read-only. Foreign shard content is mounted into a composite read view and never enters the local Aeron state machine.
No cross-cluster Aeron. Cluster-to-cluster visibility is lazy and mount-driven. Discovery and SSE hints can help, but they are not authoritative.
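From a client's perspective, these invariants boil down to two response codes. A minimal sketch of how a caller might interpret them, assuming plain HTTP status handling (the class and enum names are illustrative, not from the codebase):

```java
// Sketch of client-side handling for the runtime's write-response semantics.
// 202 means queued, not quorum-committed; 307 means the wallet lives on
// another shard. Names here are illustrative, not from the codebase.
public final class WriteResponseSemantics {

    public enum Outcome { PENDING_LOCALLY, FOLLOW_REDIRECT, REJECTED }

    public static Outcome classify(int httpStatus) {
        switch (httpStatus) {
            case 202: return Outcome.PENDING_LOCALLY; // accepted != committed
            case 307: return Outcome.FOLLOW_REDIRECT; // foreign-wallet write
            default:  return Outcome.REJECTED;
        }
    }
}
```

A client that treats 202 as final durability is wrong twice over: the proposal may still be pre-flush locally and pre-commit cluster-wide.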

Write Flow Inside One Aeron Cluster

The runtime accepts a write on any validator, enforces shard ownership before queueing, persists the pending proposal locally, waits for payment proof, then releases to Aeron ingress. The leader orders the log. All nodes apply the same state transition.
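The pipeline above can be sketched as a strictly ordered stage progression. The stage names mirror the step list on this page; the class itself is a hypothetical illustration, not a type from the codebase:

```java
// Minimal sketch of the local write pipeline as an ordered state progression.
// Leader-side ordering happens between INGRESSED and COMMITTED.
public final class ProposalLifecycle {

    public enum Stage { RECEIVED, QUEUED, FLUSHED, VERIFIED, INGRESSED, COMMITTED, APPLIED }

    // Each stage may only advance to the next one; there are no shortcuts
    // from acceptance to commit.
    public static Stage advance(Stage current) {
        Stage[] all = Stage.values();
        int next = current.ordinal() + 1;
        if (next >= all.length) {
            throw new IllegalStateException("APPLIED is terminal");
        }
        return all[next];
    }
}
```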

Intra-cluster consensus

Consensus sequence

Client
HTTP API
Shard gate
Queue
Disk persistence
Payment verifier
Aeron ingress
Leader
Followers
Replica apply + SSE
Redirect short-circuit. Foreign-wallet writes never enter the local queue or Aeron path.
Client → HTTP
Request arrives on any validator

The request can land on a follower or the leader; the entry node does not imply ordering authority.

Shard authority
Wallet resolves to another shard

The local node detects foreign ownership before queueing.

Response: 307 Temporary Redirect
Payload includes wallet prefix + redirect target
No local queue entry. No Aeron ingress.
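A minimal sketch of this short-circuit, assuming a prefix-to-endpoint route registry. The class name, registry shape, and prefix derivation are all illustrative; the real enforcement lives in ShardWriteAuthorityEnforcer:

```java
import java.util.Map;
import java.util.Optional;

// Hedged sketch of the wrong-shard short-circuit: a foreign wallet produces
// a redirect target (served as 307 + metadata) and never touches the queue.
public final class ShardRedirectSketch {

    // prefix -> authoritative cluster endpoint (assumed registry shape)
    private final Map<String, String> routeRegistry;
    private final String localPrefix;

    public ShardRedirectSketch(String localPrefix, Map<String, String> routeRegistry) {
        this.localPrefix = localPrefix;
        this.routeRegistry = routeRegistry;
    }

    /** Returns a redirect target for foreign wallets, empty for locally owned ones. */
    public Optional<String> redirectTarget(String wallet) {
        String prefix = wallet.startsWith("0x") ? wallet.substring(2, 4) : wallet.substring(0, 2);
        if (prefix.equalsIgnoreCase(localPrefix)) {
            return Optional.empty();                 // local shard: proceed to queueing
        }
        String target = routeRegistry.get(prefix.toLowerCase());
        if (target == null) {
            // mirrors the page's rule: unclaimed prefixes fail closed
            throw new IllegalStateException("unclaimed prefix: fail closed");
        }
        return Optional.of(target + "?prefix=" + prefix); // redirect + wallet-prefix metadata
    }
}
```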
Local-shard commit path. This is the full path for a wallet the cluster actually owns.
1 · HTTP ingress
Validate wallet and request shape

The node parses the write, normalizes the wallet, and confirms local shard ownership.

2 · Queue admission
Queue proposal and return accepted

queueProposal(...) records the proposal and the API returns 202 Accepted with state=PENDING.

3 · Local durability
Persist pending proposal locally

The queue flushes to queued-proposals.bin. By default the flush is asynchronous, which keeps I/O lighter but leaves a small pre-flush window before the proposal reaches disk.
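The flush trigger can be sketched as a simple either-or threshold. The 250 ms / 100-item defaults come from ProposalQueueTuning; the class here is an illustrative stand-in, not the real queue manager:

```java
// Sketch of the async flush policy: flush when either the batch size or the
// interval threshold is crossed, whichever happens first.
public final class FlushPolicy {
    private final long flushIntervalMs;
    private final int flushBatch;

    public FlushPolicy(long flushIntervalMs, int flushBatch) {
        this.flushIntervalMs = flushIntervalMs;
        this.flushBatch = flushBatch;
    }

    // A proposal acknowledged with 202 before this returns true sits in the
    // small pre-flush window described above.
    public boolean shouldFlush(int pendingCount, long msSinceLastFlush) {
        return pendingCount >= flushBatch || msSinceLastFlush >= flushIntervalMs;
    }
}
```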

4 · Verification
Wait for payment proof and release

The verifier confirms payment and moves the proposal into adaptive release / append-ready state.

5 · Aeron ingress
Offer the proposal into the cluster

The internal Aeron cluster client appends the write toward the leader through ingress.

6 · Quorum commit
Leader orders, followers replicate

The leader decides log order. Followers replicate through Raft. This is the real cluster durability boundary.

7 · Replica apply
All nodes apply and emit hints

onSessionMessage(...) runs on every node, merges into the authoritative local store, and can emit advisory SSE events.

Followers can receive the request, but not decide log order.
Accepted ≠ committed. Quorum commit happens later than API acceptance.
All replicas run the same deterministic apply callback.
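The deterministic apply step can be sketched as follows. The type and method names are illustrative stand-ins for the real onSessionMessage(...) handler, and the store shape is assumed:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Consumer;

// Sketch of the deterministic replica apply: every node runs the same merge
// against its own authoritative store, then may emit an advisory hint.
public final class ReplicaApplySketch {
    // authoritative local store: path -> content (assumed shape)
    private final Map<String, String> store = new TreeMap<>();
    private final Consumer<String> sseHints;

    public ReplicaApplySketch(Consumer<String> sseHints) {
        this.sseHints = sseHints;
    }

    // Deterministic: the same ordered input on every replica yields the same store.
    public void onCommitted(String path, String content) {
        store.put(path, content); // merge into the local authoritative replica
        sseHints.accept(path);    // advisory only, never authoritative
    }

    public String read(String path) { return store.get(path); }
}
```

Because the leader fixes the order before apply, no replica needs to coordinate with its peers inside this callback.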
1

Received

The HTTP node parses the request, validates the wallet, and checks local shard authority.

2

Queued

The proposal enters the unverified queue and becomes visible to local queue management.

3

Flushed

The pending proposal is persisted locally. The defaults are asynchronous: flush every 250 ms or after 100 queued items, whichever comes first.

4

Verified

The payment proof resolves and the proposal moves into adaptive release / ready-to-append flow.

5

Ingressed

The node offers the write to Aeron ingress using the internal cluster client.

6

Committed

The leader orders the message and followers replicate it through Raft.

7

Applied

All nodes run the deterministic handler and merge into their own local authoritative replica.

Distributed Content Fabric: Clusters of Clusters

Each cluster owns a prefix range and its own writable repository. Remote clusters are mounted into a composite read view over HTTP. The local validator exposes one authoritative write path and one broader read surface. Cross-cluster visibility is lazy by design.
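The read/write split can be sketched with two stores: writes only ever touch the local authoritative one, while reads may resolve through lazy remote mounts. The interfaces below are illustrative stand-ins for the composite view over LazyHttpNodeStore:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the composite read view: one writable local store, plus
// read-only remote mounts keyed by foreign prefix.
public final class CompositeReadViewSketch {
    private final Map<String, String> localStore = new HashMap<>();
    private final Map<String, Map<String, String>> remoteMounts = new HashMap<>(); // prefix -> read-only view
    private final String localPrefix;

    public CompositeReadViewSketch(String localPrefix) { this.localPrefix = localPrefix; }

    public void mount(String prefix, Map<String, String> readOnlyView) {
        remoteMounts.put(prefix, readOnlyView);
    }

    public void write(String walletPrefix, String key, String value) {
        if (!walletPrefix.equals(localPrefix)) {
            throw new IllegalArgumentException("foreign shard: redirect, never write locally");
        }
        localStore.put(key, value);
    }

    // Fail-soft: a missing remote mount degrades to "not found", not an error.
    public String read(String walletPrefix, String key) {
        if (walletPrefix.equals(localPrefix)) return localStore.get(key);
        Map<String, String> mount = remoteMounts.get(walletPrefix);
        return mount == null ? null : mount.get(key);
    }
}
```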

Cross-cluster read fabric

Local Aeron only

Each fiefdom runs its own leader + followers + local writable Oak repository.

Remote mounts stay outside quorum

Foreign shard content is exposed through LazyHttpNodeStore and never becomes part of local consensus state.

Authority is prefix-scoped

Current sharding uses wallet L1 prefixes under the real path shape /oak-chain/XX/YY/ZZ/0xWALLET.

Discovery can stay advisory

Signed announcements and SSE hints can improve mount awareness later without becoming a second consensus plane.
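The prefix-scoped namespace can be sketched as a wallet-to-path mapping. The exact segment derivation in WalletPathUtil may differ; here the three fan-out segments are assumed to be the first three bytes of the wallet hex:

```java
// Sketch of the /oak-chain/XX/YY/ZZ/0xWALLET namespace. The first segment
// doubles as the L1 prefix that decides shard authority.
public final class WalletPathSketch {

    public static String path(String wallet) {
        String hex = wallet.startsWith("0x") ? wallet.substring(2) : wallet;
        if (hex.length() < 6) throw new IllegalArgumentException("wallet too short");
        String xx = hex.substring(0, 2).toLowerCase(); // L1 prefix: shard authority key
        String yy = hex.substring(2, 4).toLowerCase();
        String zz = hex.substring(4, 6).toLowerCase();
        return "/oak-chain/" + xx + "/" + yy + "/" + zz + "/0x" + hex;
    }
}
```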

Three-plane model

Plane 1 · Local consensus plane
Each cluster owns one writable repository. Aeron stays inside the shard boundary and never spans across clusters.
Cluster A · Prefixes 00-7f
Leader
Follower
Follower
Validator APIs
Composite read view
Authoritative local Oak repo
Follower
Follower
Replica apply
Local writes for A-owned wallets enter this Aeron cluster and commit here.
Cluster B · Prefixes 80-bf
Leader
Follower
Follower
Validator APIs
Composite read view
Authoritative local Oak repo
Follower
Follower
Replica apply
B is sovereign for its own wallets. Foreign writes redirect before local queueing.
Cluster C · Prefixes c0-ff
Leader
Follower
Follower
Validator APIs
Composite read view
Authoritative local Oak repo
Follower
Follower
Replica apply
The shape scales by adding more fiefdoms, not by collapsing them into one global quorum.
Plane 2 · Cross-cluster read fabric
The composite read view mounts foreign shard content lazily over HTTP. Remote data stays visible, but it never becomes writable local state.
Cluster A read view · Lazy read-only mounts
A composite view · Mount Cluster B repo for prefixes 80-bf
A composite view · Mount Cluster C repo for prefixes c0-ff
Cluster B read view · Lazy read-only mounts
B composite view · Mount Cluster A repo for prefixes 00-7f
B composite view · Mount Cluster C repo for prefixes c0-ff
Cluster C read view · Lazy read-only mounts
C composite view · Mount Cluster A repo for prefixes 00-7f
C composite view · Mount Cluster B repo for prefixes 80-bf
Plane 3 · Optional discovery / hint plane
Signed announcements and SSE hints can refresh route registries, prewarm remote reads, and invalidate cache, but they do not own correctness.
Signed cluster announcements · Control plane
Cluster wallet · Owned prefixes · Validator endpoints · TTL / freshness
SSE change hints · Advisory only
Content changed · Prefix activity · Mount invalidate
Effect on the fabric · Never quorum
Refresh route registry · Prewarm hot remote paths · Graceful cache invalidation
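The advisory rule can be sketched as a hint consumer that only touches caches and warm-up state, never committed data. The event kinds follow the labels above; the class itself is hypothetical:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the advisory hint plane: hints may mark mounts stale or flag
// prefixes for prewarming, but they never feed the consensus path.
public final class AdvisoryHintSketch {
    private final Set<String> warmPaths = new HashSet<>();
    private final Set<String> staleMounts = new HashSet<>();

    public void onHint(String kind, String target) {
        switch (kind) {
            case "content-changed": staleMounts.add(target); break; // graceful invalidation
            case "prefix-activity": warmPaths.add(target);   break; // prewarm candidate
            default: break; // unknown hints are ignored: never quorum input
        }
    }

    public boolean isStale(String mount) { return staleMounts.contains(mount); }
    public boolean isWarm(String path)   { return warmPaths.contains(path); }
}
```

Losing every hint costs only cache freshness; correctness still comes from the local quorum and the lazy mounts.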
Invariant 1
Writes never cross the local consensus boundary. A wallet that belongs to another shard gets redirected before entering the local queue or Aeron ingress.
Invariant 2
The composite view is for reads, not writes. The validator can browse remote content through mounted prefixes, but merge operations still target the local authoritative store only.
Invariant 3
Remote failure should degrade gracefully. If a foreign cluster is unavailable, the local shard still accepts local writes and serves local content.

Code Anchors

These are the live seams backing the diagrams above. The page is deliberately tied to the current codebase, not to older abstract ADR language.

Live references

Ingress + acceptance

WriteProposalHandler.java:179-186, 791-839

Shard ownership is enforced before queueing. Local requests queue and return 202 Accepted with state=PENDING.

Wrong-shard redirect

ShardWriteAuthorityEnforcer.java:37-79

Remote wallets get 307 Temporary Redirect plus redirect metadata. Unclaimed prefixes fail closed.

Pending proposal persistence

ProposalQueueManagerOptimized.java:1695-1711, 1438-1488

The queue records the proposal, enqueues it, then persists. Persistence is async by default.

  • ProposalQueueTuning.java:29-31 sets flushIntervalMs=250, flushBatch=100.
  • ProposalPersistenceStore.java:79-92 saves via temp file + atomic move.
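The temp-file plus atomic-move pattern can be sketched directly with java.nio: write the snapshot to a sibling temp file, then atomically replace the target so readers never observe a half-written queued-proposals.bin. The class name is illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of the save pattern used by the persistence store: the temp file may
// be partially written on a crash, but the rename is all-or-nothing.
public final class AtomicSaveSketch {

    public static void save(Path target, byte[] snapshot) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(tmp, snapshot);                     // crash here leaves target untouched
        Files.move(tmp, target,
                StandardCopyOption.ATOMIC_MOVE,
                StandardCopyOption.REPLACE_EXISTING);   // readers see old or new, never mixed
    }
}
```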

Verification and release

ProposalQueueManagerOptimized.java:823-840, 2088-2145, 2288-2360

Verified proposals move into adaptive release and eventually call the Aeron append callback.

Aeron ordering and replica apply

AeronConsensusEngine.java:780-816, 1267-1368

The internal cluster client offers the write through ingress. Aeron replicates it and onSessionMessage runs on every node.

Bridge from queue to ingress

ConsensusServicesInitializer.java:197-252

The Raft append callback forwards verified proposals into Aeron without exposing direct local mutation paths.

Write store vs read view split

ServerContext.java:50-78

The server context keeps a composite/read-view store and a separate authoritative local store.

  • ConsensusApiHandler.java:55-68 binds write/delete services to the authoritative store.
  • ServerStorageRuntime.java:24-66 carries both stores through startup.

Remote mount fabric

ShardingRuntimeConfig.java:32-147

Wallet L1 prefixes resolve local ownership and expand into remote read-only mounts.

  • ValidatorReadViewBuilder.java:35-117 builds the composite read view.
  • LazyHttpNodeStore.java:45-248 provides lazy, read-only, fail-soft HTTP mounts.
  • WalletPathUtil.java:91-155 defines the real /oak-chain/XX/YY/ZZ/0xWALLET namespace.