# Fabric-X Architecture
A Fabric-X network is built from two kinds of node groups — orderer groups and committers — running alongside one or more token-sdk-x applications. This page describes the components, their responsibilities, how data flows between them, and how ChainLaunch exposes them on the host.
## Component map

A single party runs ten containers, managed as two ChainLaunch nodes (one orderer group and one committer):
### Orderer group components
| Component | Role |
|---|---|
| Router | Entrypoint for client broadcasts (namespace creation, token-sdk-x transactions). Fronts the orderer group on a single port. |
| Batcher | Groups incoming transactions into batches for consensus. |
| Consenter | Runs the Arma consensus protocol across parties to establish total order. |
| Assembler | Assembles ordered batches into blocks, serves blocks to committers. |
### Committer components
| Component | Role |
|---|---|
| Sidecar | Pulls blocks from assemblers, feeds the commit pipeline. |
| Coordinator | Orchestrates validation across validator/verifier instances. |
| Validator | Validates transaction structure and policy. |
| Verifier | Verifies cryptographic signatures. |
| Query-service | Exposes block, transaction, and state queries (used by ChainLaunch's explorer). |
| Postgres | Persistent state backend — one database per committer, tables per namespace. |
## Data flow
- A client (typically a token-sdk-x application) broadcasts a transaction to any party's router.
- The router hands the transaction to its batcher.
- Batchers across all parties participate in Arma consensus via the consenters to establish the canonical order of batches.
- Each party's assembler assembles ordered batches into blocks.
- Each party's sidecar pulls blocks from any assembler it has an endpoint for.
- The coordinator drives validation and verification; the validator checks structure and the verifier checks signatures.
- Verified transactions are committed to postgres, partitioned by namespace.
- The query-service serves reads over committed state.
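The steps above can be sketched as a pipeline. This is a deliberately simplified, purely illustrative model — the function names mirror the component tables, but none of them are real Fabric-X APIs, and a single-party sketch trivializes the Arma ordering step:

```python
# Illustrative sketch of the Fabric-X commit pipeline.
# All functions are stand-ins; only the stage order mirrors the docs.

def batch(txs, size=2):
    """Batcher: group incoming transactions into fixed-size batches."""
    return [txs[i:i + size] for i in range(0, len(txs), size)]

def order(batches):
    """Consenter: Arma consensus establishes a total order over batches.
    With one party there is nothing to reconcile, so this is a no-op."""
    return list(batches)

def assemble(ordered):
    """Assembler: turn ordered batches into numbered blocks."""
    return [{"number": n, "txs": b} for n, b in enumerate(ordered)]

def commit(blocks, state):
    """Sidecar/coordinator: validate, verify, then commit per namespace."""
    for block in blocks:
        for tx in block["txs"]:
            if tx["valid"]:  # stands in for validator + verifier checks
                state.setdefault(tx["namespace"], []).append(tx["key"])
    return state

txs = [
    {"namespace": "token", "key": "a", "valid": True},
    {"namespace": "token", "key": "b", "valid": False},
    {"namespace": "iou",   "key": "c", "valid": True},
]
state = commit(assemble(order(batch(txs))), {})
print(state)  # {'token': ['a'], 'iou': ['c']}
```

Note how the invalid transaction is dropped before commit, and how committed state ends up partitioned by namespace — matching the postgres layout described above.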
## Multi-party topology
Fabric-X is designed for multiple parties (organizations). Each party runs its own orderer group and committer; the orderer groups talk to each other for Arma consensus, and each committer pulls blocks from the assemblers it has endpoints for.
A 4-party network follows this pattern, with each party running both node groups. Parties are 1-indexed; the maximum `partyId` is 10 per network.
## Port layout
Each orderer group and committer exposes its components on the host. ChainLaunch auto-allocates free ports by default, or you can pin them explicitly — essential when running multiple networks side-by-side.
### Orderer group — 4 ports
| Component | Default offset | Purpose |
|---|---|---|
| Router | +0 | Client broadcast entrypoint |
| Batcher | +1 | Internal — batch formation |
| Consenter | +2 | Internal — Arma consensus |
| Assembler | +3 | Block service for committers |
### Committer — 6 ports
| Component | Default offset | Purpose |
|---|---|---|
| Sidecar | +0 | Pulls blocks from assemblers |
| Coordinator | +1 | Internal |
| Validator | +2 | Internal |
| Verifier | +3 | Internal |
| Query-service | +4 | gRPC queries (used by explorer) |
| Postgres | +5 | SQL port (state backend) |
### Recommended port-band scheme
Reserve a 100-port band per network to avoid collisions:
- `17000..17099` — network A
- `17100..17199` — network B
- …and so on.
Within a band, allocate 20 ports per party (10 for the orderer group, 10 for the committer, with headroom). See the create-network guide for a worked example.
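Under this scheme, any component's host port is simple arithmetic over the band base, the party, and the offsets in the tables above. A hypothetical helper — ChainLaunch does its own allocation; this only illustrates the arithmetic of the recommended layout:

```python
# Offsets from the port tables above.
ORDERER_OFFSETS = {"router": 0, "batcher": 1, "consenter": 2, "assembler": 3}
COMMITTER_OFFSETS = {"sidecar": 0, "coordinator": 1, "validator": 2,
                     "verifier": 3, "query-service": 4, "postgres": 5}

def host_port(band_base, party_id, component):
    """Map (band, party, component) to a host port.

    Each party gets a 20-port slice of the band: the first 10 ports for
    the orderer group, the next 10 for the committer (with headroom).
    Parties are 1-indexed, max 10 per network.
    """
    if not 1 <= party_id <= 10:
        raise ValueError("partyId must be in 1..10")
    slice_base = band_base + (party_id - 1) * 20
    if component in ORDERER_OFFSETS:
        return slice_base + ORDERER_OFFSETS[component]
    return slice_base + 10 + COMMITTER_OFFSETS[component]

# Network A at band 17000: party 2's query-service.
print(host_port(17000, 2, "query-service"))  # 17034
```

With this layout even party 10's highest port (`postgres` at 17195) stays inside network A's 17000..17199 band, so side-by-side networks never collide.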
## Channels and namespaces

Fabric-X networks use a single channel — always `arma` as of this writing. Logical partitioning inside the channel is done via namespaces. Each namespace is backed by a dedicated postgres table (`ns_<name>`) per committer.
See Namespaces for details.
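As an illustration of the `ns_<name>` convention, the mapping from namespace to table can be sketched as below — note that the identifier check here is an assumption for the sketch, not Fabric-X's actual validation rule:

```python
import re

def namespace_table(name):
    """Return the per-committer postgres table backing a namespace.

    The ns_<name> prefix comes from the docs; restricting names to
    lowercase SQL-safe identifiers is an illustrative assumption.
    """
    if not re.fullmatch(r"[a-z][a-z0-9_]*", name):
        raise ValueError(f"invalid namespace: {name!r}")
    return f"ns_{name}"

print(namespace_table("token"))  # ns_token
```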
## Crypto material
Each orderer group and each committer gets its own:
- Sign key and certificate (signing identity, signed by the org's CA).
- TLS key and certificate (for mutual TLS between components).
- CA certificate references to the org's signing CA and TLS CA.
ChainLaunch generates all of the above automatically during node creation. The genesis block bundles all parties' CA certs so orderer groups can authenticate each other at Arma-consensus boot.
## Relationship to token-sdk-x
token-sdk-x is not managed by ChainLaunch. It's a separate application-layer component that:
- Holds user identities and signs transactions.
- Broadcasts signed transactions to any party's router.
- Reads state from the query-service.
From ChainLaunch's perspective, token-sdk-x is just a client. Everything ChainLaunch surfaces — orderer groups, committers, namespaces, the explorer — is infrastructure that token-sdk-x applications consume.
## Lifecycle model — "create then join"

Fabric-X nodes follow a two-stage lifecycle, unlike Fabric and Besu nodes, which start on create:
- Create — ChainLaunch generates crypto material and writes config. Containers are not started (status `STOPPED`).
- Join — after the network is created and the genesis block generated, `POST /networks/fabricx/{id}/nodes/{nodeId}/join` writes the genesis into the node's bind mount and starts the containers.
This exists because orderer groups and committers need the network-wide genesis block before they can boot.
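A minimal client-side sketch of the join step, using the endpoint path from the docs — the IDs, base URL, and auth shown in the comment are placeholders, and the helper itself is hypothetical:

```python
def join_path(network_id, node_id):
    """Build the join endpoint path for a Fabric-X node.

    The path shape comes from the docs; the IDs are placeholders.
    """
    return f"/networks/fabricx/{network_id}/nodes/{node_id}/join"

# Once the network and its genesis block exist, POST this path for each
# orderer-group and committer node to write the genesis into the node's
# bind mount and start its containers, e.g. (illustrative only):
#   requests.post(base_url + join_path(7, 42), headers=auth_headers)
print(join_path(7, 42))  # /networks/fabricx/7/nodes/42/join
```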
## Local development mode

Fabric-X networks expose a per-network `localDev` flag (passed in the create-network request body). When enabled, ChainLaunch rewrites the external addresses baked into the genesis block and used for host-originated dials so the network works on Docker Desktop (macOS/Windows):
| Context | Default | localDev: true |
|---|---|---|
| External IP in the genesis block | Party's configured external IP | host.docker.internal |
| Host → router (namespace creation) | External IP | 127.0.0.1 |
| Host → query-service (explorer) | External IP | 127.0.0.1 |
| Committer → its own postgres | As configured | host.docker.internal when PostgresHost is loopback |
| TLS cert SANs | Always include localhost, 127.0.0.1, and host.docker.internal | (unchanged) |
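For example, a create-network request body might enable the flag like this — only `localDev` is taken from this page; the other fields are illustrative placeholders, so consult the API reference for the full schema:

```json
{
  "name": "dev-net",
  "localDev": true
}
```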
The flag is per-network, so you can mix `localDev: true` networks with regular networks on the same ChainLaunch instance. For backward compatibility, setting `CHAINLAUNCH_FABRICX_LOCAL_DEV=true` on the server process applies the same behavior globally to every Fabric-X network; the per-network flag takes precedence when both are set.
On Linux with a native Docker daemon, leave `localDev` off — the external IP is directly reachable from containers, so no rewriting is needed.