How Oak Chain Works
Interactive visual diagrams showing the architecture and data flows. Click nodes to see details, or hit Play Animation to watch data flow through the system.
Architecture Overview
The complete system: Authors create content (via AEM Connector or SDK), MetaMask handles payments, validators reach consensus, IPFS stores binaries, and Edge Delivery serves content globally.
Content Write Flow
When an author creates or modifies content in Sling, the network first resolves which cluster owns that wallet namespace. If the request reaches the wrong cluster, it is redirected before queueing. The authoritative cluster then accepts the signed proposal, replicates it across its own Aeron validators, and commits it to the local Oak segment store.
See the full diagram
For the full sequence diagram and the companion clusters-of-clusters model, open Write Flow + Content Fabric.
The Steps
- Author creates content via AEM Connector or Oak Chain SDK
- Wallet signs the content change with secp256k1 key
- Authority check resolves the owning cluster for that wallet namespace
- Foreign cluster redirects if the request hit a non-owning cluster
- Raft leader in the authoritative cluster receives and validates the proposal
- Log replication sends entry to local followers
- Followers acknowledge receipt
- Commit happens when the local majority confirms
- Oak Store persists content to TAR segments
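The commit rule in the steps above can be sketched as a small predicate. This is a minimal illustration, not the validator's actual API: the `Proposal` shape and function names are assumptions, and the signature is stubbed rather than a real secp256k1 signature.

```typescript
// A minimal sketch of the write path's commit rule (names are
// illustrative; the real SDK and validator APIs may differ).
interface Proposal {
  wallet: string;     // author's Ethereum address
  path: string;       // content path being written
  signature: string;  // secp256k1 signature over the change (stubbed)
}

// A proposal commits once a majority of the LOCAL cluster has acked
// the replicated log entry; remote clusters are never consulted.
function isCommitted(acks: number, clusterSize: number): boolean {
  // The leader's own append counts as one ack.
  return acks > clusterSize / 2;
}
```

Note that in a four-node cluster, two acks are a tie, not a majority, so the entry stays uncommitted until a third ack arrives.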
Payment Flow
Users pay for content writes via a wallet such as MetaMask. The payment goes to the ValidatorPaymentV3_2 contract on Ethereum. That contract emits a ProposalPaid event. Validators monitor those events and authorize writes for the paying wallet address.
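The "payment = authorization" handshake can be sketched as follows. The event shape and class names here are assumptions drawn from this page, not the actual ABI of ValidatorPaymentV3_2; a real validator would subscribe to the contract's logs rather than receive events in memory.

```typescript
// Hypothetical shape of the ProposalPaid event validators watch for.
interface ProposalPaidEvent {
  payer: string;                                   // wallet that paid
  paymentClass: "Priority" | "Express" | "Standard";
  blockNumber: number;
}

// Sketch of the validator-side bookkeeping: a write proposal is only
// accepted if its signing wallet has a matching, unconsumed payment.
class PaymentAuthorizer {
  private authorized = new Map<string, ProposalPaidEvent>();

  onProposalPaid(ev: ProposalPaidEvent): void {
    // Addresses are compared case-insensitively.
    this.authorized.set(ev.payer.toLowerCase(), ev);
  }

  // Consume the payment when the write is admitted: no payment, no write.
  tryAuthorize(wallet: string): boolean {
    const key = wallet.toLowerCase();
    if (!this.authorized.has(key)) return false;
    this.authorized.delete(key);
    return true;
  }
}
```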
Payment Classes
The payment contract exposes three classes. Class determines price only; when a write is released into consensus is handled separately by Oak's adaptive packing buffer.
| Class | Current V1 Price | Typical Fit |
|---|---|---|
| Priority | 0.01 ETH or 32.50 USDC | Premium or policy-sensitive flows |
| Express | 0.002 ETH or 6.50 USDC | General publishing |
| Standard | 0.001 ETH or 3.25 USDC | Cost-sensitive bulk and archive flows |
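The V1 price table above can be expressed as a simple lookup, for example in client code that quotes a fee before prompting the wallet. The function name and record shape are illustrative, not part of the SDK.

```typescript
// The V1 price table above as a lookup (values copied from the table).
type PaymentClass = "Priority" | "Express" | "Standard";

const PRICES: Record<PaymentClass, { eth: number; usdc: number }> = {
  Priority: { eth: 0.01,  usdc: 32.5 },
  Express:  { eth: 0.002, usdc: 6.5 },
  Standard: { eth: 0.001, usdc: 3.25 },
};

// Quote the fee for a given class in the payer's chosen currency.
function priceFor(cls: PaymentClass, currency: "eth" | "usdc"): number {
  return PRICES[cls][currency];
}
```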
IPFS Binary Storage
Binary assets (images, PDFs, videos) are stored on IPFS, not in Oak segments. The author's local IPFS node pins the content and generates a CID (content-addressed hash). Oak stores only the CID reference, enabling global retrieval via any IPFS gateway.
Why IPFS for Binaries?
- Content-addressed: Same content = same CID, automatic deduplication
- Author-owned storage: Validators don't store binaries, just CID references
- Global availability: Any IPFS gateway can serve the content
- Immutable: CID is a cryptographic hash of the content
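What Oak actually persists for a binary can be sketched as a small reference record. The field names and the gateway-URL helper below are illustrative, not the real Oak node schema; the CID value in any example would come from the author's IPFS node.

```typescript
// Sketch: Oak nodes hold only a CID reference, never the binary itself
// (property names here are illustrative, not the real Oak schema).
interface BinaryReference {
  cid: string;       // content-addressed hash from the author's IPFS node
  mimeType: string;
  size: number;      // bytes
}

// Any IPFS gateway can serve the content, since the CID alone names it.
function toGatewayUrl(ref: BinaryReference, gateway = "https://ipfs.io"): string {
  return `${gateway}/ipfs/${ref.cid}`;
}

// Deduplication falls out of content addressing: identical bytes produce
// the identical CID, so two references to the same asset collapse to one.
function sameContent(a: BinaryReference, b: BinaryReference): boolean {
  return a.cid === b.cid;
}
```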
Raft Consensus State Machine
Validators use Aeron Raft for consensus. Nodes start as Followers, become Candidates when the election timeout fires, and become Leaders if they win a majority of votes. Leaders send periodic heartbeats to maintain authority.
State Transitions
| From | To | Trigger |
|---|---|---|
| Follower | Candidate | Election timeout (no heartbeat) |
| Candidate | Leader | Receives majority votes |
| Candidate | Follower | Discovers higher term |
| Leader | Follower | Discovers higher term |
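The transition table above can be written as a pure function. This is a sketch of the standard Raft state machine, not Aeron's internal API.

```typescript
// The Raft state-transition table as a pure function (a sketch,
// not Aeron's API).
type RaftState = "Follower" | "Candidate" | "Leader";
type RaftEvent = "electionTimeout" | "majorityVotes" | "higherTerm" | "heartbeat";

function transition(state: RaftState, event: RaftEvent): RaftState {
  switch (state) {
    case "Follower":
      // No heartbeat before the timeout fires -> start an election.
      return event === "electionTimeout" ? "Candidate" : "Follower";
    case "Candidate":
      if (event === "majorityVotes") return "Leader";
      if (event === "higherTerm") return "Follower";
      return "Candidate";
    case "Leader":
      // A leader that discovers a higher term steps down immediately.
      return event === "higherTerm" ? "Follower" : "Leader";
  }
}
```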
Timing
- Heartbeat interval: 50ms
- Election timeout: 150-300ms (randomized)
- Failover time: Sub-second
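The randomized window matters: if every follower used the same timeout, they would all become Candidates at once and split the vote. A sketch of the timer, using the figures quoted above (function name is illustrative):

```typescript
// Randomized election timeout in the 150-300 ms window quoted above.
// Randomization breaks ties: followers rarely time out simultaneously,
// so split votes and repeated elections are uncommon.
function electionTimeoutMs(
  min = 150,
  max = 300,
  rand: () => number = Math.random,
): number {
  return min + Math.floor(rand() * (max - min));
}

// With a 50 ms heartbeat, a healthy leader refreshes every follower's
// timer three times before even the shortest timeout can fire.
const HEARTBEAT_MS = 50;
```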
The Complete Picture
┌─────────────────────────────────────────────────────────────┐
│ AUTHORING LAYER │
│ AEM Connector / Oak Chain SDK + MetaMask Wallet │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ CONSENSUS LAYER │
│ Aeron Raft Cluster (Leader + Followers) │
│ • Signed write proposals │
│ • Payment verification via Ethereum │
│ • Deterministic state machine │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ STORAGE LAYER │
│ Oak Segment Store (TAR files) + IPFS (binaries) │
│ • Immutable segments │
│ • Append-only journal │
│ • Content-addressed binaries │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ DELIVERY LAYER │
│ Edge Delivery Services (CDN) │
│ • 100 Lighthouse score │
│ • Global distribution │
│ • Real-time streaming │
└─────────────────────────────────────────────────────────────┘
Key Principles
- Wallet = Identity: Every participant has an Ethereum wallet
- Signed Proposals: Every write is cryptographically signed
- Payment = Authorization: No payment, no write
- Consensus = Truth: Majority agreement determines state
- Immutable Storage: Once written, content is permanent
Why These Principles?
We didn't choose these arbitrarily. We broke the problem down to fundamentals:
- What is identity? → Cryptographic proof (wallet), not username/password
- What is authorization? → Economic proof (payment), not role-based access
- What is truth? → Consensus (majority), not single-authority
- What is persistence? → Immutable (append-only), not mutable databases
We reasoned up from these atoms. The result: a system that's fundamentally different from traditional CMS, but built on proven primitives (Ethereum, Oak, Raft).
Cluster Topology
The network scales horizontally through multiple clusters. Each cluster is a sovereign Aeron-backed fiefdom over a portion of the wallet namespace: one local writable repository, plus lazy read-only views of remote clusters.
Consensus stays local: a cluster writes only its own shard and mounts remote shards read-only.
| | Cluster A | Cluster B |
|---|---|---|
| Writes | /oak-chain/00-7F/* | /oak-chain/80-FF/* |
| Reads | Everything | Everything |
- Wallet 0x74... → prefix 74/... falls in Cluster A's owned range → Cluster A is authoritative
- Wallet 0xAB... → prefix ab/... falls in Cluster B's owned range → Cluster B is authoritative
Authors write to their authoritative cluster. All clusters can read all content.
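Authority resolution for the two-cluster split above reduces to inspecting the first byte of the wallet address. The function name is illustrative; a real deployment would consult the discovery plane's routing table rather than hard-code the split.

```typescript
// Sketch of authority resolution by wallet prefix for the two-cluster
// split shown above: first byte 00-7F -> Cluster A, 80-FF -> Cluster B.
function authoritativeCluster(wallet: string): "A" | "B" {
  const firstByte = parseInt(wallet.slice(2, 4), 16); // skip the "0x"
  if (Number.isNaN(firstByte)) {
    throw new Error(`bad wallet address: ${wallet}`);
  }
  return firstByte <= 0x7f ? "A" : "B";
}
```

A request that lands on the non-owning cluster would compare this result against the local cluster id and redirect before queueing, as described in the write flow.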
Three Operational Planes
- Intra-cluster consensus: Aeron/Raft governs the local writable repository only.
- Cross-cluster reads: remote content is mounted lazily and read-only over HTTP segment transfer.
- Discovery: cluster announcements and route hints are a separate control plane, not part of consensus.
For the wider three-plane visual, open Write Flow + Content Fabric.
Scaling
The architecture scales horizontally by adding clusters:
| Clusters | Shards/Cluster | Wallets Supported | Read Mounts/Cluster |
|---|---|---|---|
| 2 | 128 shards | ~8M wallets | 1 mount |
| 20 | 12-13 shards | ~80M wallets | 19 mounts |
| 50 | 5 shards | ~200M wallets | 49 mounts |
| 100 | 2-3 shards | ~400M wallets | 99 mounts |
| 1,000 | 16 shards* | ~4B wallets | 999 mounts |
*At 1,000 clusters, shards are subdivided further (L4 sharding).
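The table's arithmetic follows a simple linear model, sketched below. The ~4M wallets-per-cluster density is inferred from the 2-cluster row (8M / 2); it is a back-of-envelope assumption, not a hard capacity limit.

```typescript
// Back-of-envelope model behind the scaling table. The per-cluster
// density is inferred from the 2-cluster row (8M wallets / 2 clusters).
const WALLETS_PER_CLUSTER = 4_000_000;

// Write capacity scales linearly: each cluster owns its shard
// independently, so adding a cluster adds a full shard's worth.
function walletsSupported(clusters: number): number {
  return clusters * WALLETS_PER_CLUSTER;
}

// Every cluster mounts every OTHER cluster read-only, so mounts
// grow linearly with cluster count (the table's last column).
function readMountsPerCluster(clusters: number): number {
  return clusters - 1;
}
```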
Trade-offs at Scale
| Scale | Writes | Reads | Sync Overhead |
|---|---|---|---|
| Small (2-20) | Fast | Fast | Low |
| Medium (50-100) | Fast | Fast | Moderate |
| Large (1,000+) | Fast | Fast | High (optimized via gossip) |
Key insight: Write throughput scales linearly (each cluster handles its shard independently). Read latency stays constant (local mount). Sync overhead grows with cluster count but is optimized via:
- Lazy segment fetching (pull on demand)
- Announcement/gossip-based discovery and mount refresh
- Hierarchical sync topology
Deep Dive: Segment Store GC
The append-only segment store accumulates garbage over time. Learn how Oak's generational garbage collection reclaims disk space while maintaining data integrity.