Create a Fabric-X Network
This guide walks you through creating a Fabric-X network from scratch using the ChainLaunch API. It mirrors what the quickstart wizard does automatically, but gives you full control over organizations, port assignments, and multi-network layouts.
Use this guide when:
- You need to pin ports (multiple Fabric-X networks on one host).
- You want a party count other than 4.
- You're integrating with existing organizations.
- You're provisioning from CI, a script, or Terraform.
Prerequisites
- ChainLaunch server running (default: http://localhost:8100).
- Docker Desktop or Docker Engine running and reachable.
- `curl` and `jq` installed.
- Admin credentials.
macOS / Windows — local development mode
If you're running ChainLaunch on macOS or Windows with Docker Desktop, set
localDev: true when creating the network (see Step 4).
This swaps the party external IP for host.docker.internal in the genesis block
so containers can reach each other, and routes host-originated dials (namespace
creation, explorer) through 127.0.0.1.
Alternatively, set this env var on the ChainLaunch server process to apply the same behavior globally to every Fabric-X network without touching the request body:
export CHAINLAUNCH_FABRICX_LOCAL_DEV=true
Per-network localDev takes precedence; the env var is the global fallback and
stays for backward compatibility.
On Linux, leave both unset — the external IP is directly reachable from containers and nothing needs rewriting.
Shell helpers
export CL="http://localhost:8100"
export AUTH="admin:admin123"
Topology reference
A single N-party Fabric-X network needs:
| Resource | Count | Containers per unit | Total (N=4) |
|---|---|---|---|
| Organization + signing CA + TLS CA | N | — | 4 orgs |
| Orderer group | N | router, batcher, consenter, assembler | 16 |
| Committer | N | sidecar, coordinator, validator, verifier, query-service, postgres | 24 |
| Total containers | — | — | 40 |
Maximum supported partyId per network is 10.
Port allocation strategy
Each orderer group and committer exposes its components on the host. You have two options:
Auto-allocate (single network)
Omit port fields from the request body (or send 0). ChainLaunch picks from its
free-port pool. Fine for a single Fabric-X network per host.
Pin ports (multi-network or production)
Reserve a 100-port band per network, with a 20-port slot per party inside. Example scheme for two networks, 4 parties each:
| Network | Party | Router | Batcher | Consenter | Assembler | Sidecar | Coord. | Validator | Verifier | Query | Postgres |
|---|---|---|---|---|---|---|---|---|---|---|---|
| A | 1 | 17010 | 17011 | 17012 | 17013 | 17020 | 17021 | 17022 | 17023 | 17024 | 17025 |
| A | 2 | 17030 | 17031 | 17032 | 17033 | 17040 | 17041 | 17042 | 17043 | 17044 | 17045 |
| A | 3 | 17050 | 17051 | 17052 | 17053 | 17060 | 17061 | 17062 | 17063 | 17064 | 17065 |
| A | 4 | 17070 | 17071 | 17072 | 17073 | 17080 | 17081 | 17082 | 17083 | 17084 | 17085 |
| B | 1 | 17110 | 17111 | 17112 | 17113 | 17120 | 17121 | 17122 | 17123 | 17124 | 17125 |
| B | 2 | 17130 | 17131 | 17132 | 17133 | 17140 | 17141 | 17142 | 17143 | 17144 | 17145 |
| B | 3 | 17150 | 17151 | 17152 | 17153 | 17160 | 17161 | 17162 | 17163 | 17164 | 17165 |
| B | 4 | 17170 | 17171 | 17172 | 17173 | 17180 | 17181 | 17182 | 17183 | 17184 | 17185 |
Rule of thumb:
- 100-port band per network: `17000 + 100*networkIndex`
- 20-port slot per party within the band, starting at `band + 10` (as in the table): `band + 10 + 20*(partyIndex - 1)`
- First 10 ports of a slot → orderer group; next 10 → committer
The only hard requirement is no two components share a host port.
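The scheme in the table can be captured in a small helper. A sketch in plain shell; `base_port` is not a ChainLaunch tool, and the `band + 10` slot origin follows the example table above:

```shell
# base_port: first host port of a party's orderer or committer slot,
# following the example table (slots start at band + 10).
# Usage: base_port <networkIndex starting at 0> <partyId 1..10> <orderer|committer>
base_port() {
  net=$1; party=$2; unit=$3
  band=$((17000 + 100 * net))
  slot=$((band + 10 + 20 * (party - 1)))
  [ "$unit" = "committer" ] && slot=$((slot + 10))
  echo "$slot"
}

base_port 0 1 orderer     # 17010 = netA party 1 router
base_port 1 4 committer   # 17180 = netB party 4 sidecar
```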
Reusing organizations
Nothing forces a new org per network. If Party1MSP already exists from network A,
you can pass the same organization ID when creating network B. The orderer groups
and committers are per-network; the organization and its CAs are not.
Step 1 — Create organizations
Each org gets a signing CA and a TLS CA automatically.
for p in 1 2 3 4; do
curl -s -u "$AUTH" -X POST "$CL/api/v1/organizations" \
-H "Content-Type: application/json" \
-d "{
\"mspId\": \"Party${p}MSP\",
\"description\": \"Fabric-X Party ${p}\",
\"providerId\": 1
}" | jq '.id, .mspId'
done
Capture each org ID — we'll use $ORG1..$ORG4 below.
export ORG1=$(curl -s -u "$AUTH" "$CL/api/v1/organizations" \
  | jq '.items[] | select(.mspId=="Party1MSP") | .id')
# repeat for ORG2..ORG4
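The per-MSP lookups can also be done in one loop. A sketch with a hypothetical inlined response so the `jq` filter is visible in isolation; against a live server you would substitute the `curl` call from above:

```shell
# Hypothetical sample of the GET /api/v1/organizations response, inlined for illustration.
ORGS_JSON='{"items":[{"id":1,"mspId":"Party1MSP"},{"id":2,"mspId":"Party2MSP"},{"id":3,"mspId":"Party3MSP"},{"id":4,"mspId":"Party4MSP"}]}'

# Extract each party's org ID by MSP ID and export it as ORG1..ORG4.
for p in 1 2 3 4; do
  id=$(echo "$ORGS_JSON" | jq ".items[] | select(.mspId==\"Party${p}MSP\") | .id")
  export "ORG$p=$id"
done
echo "$ORG1 $ORG4"   # → 1 4
```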
Step 2 — Create orderer groups
One per party. This is the 4-container unit. Containers do not start yet.
curl -s -u "$AUTH" -X POST "$CL/api/v1/nodes" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"netA-orderer-p1\",
\"nodeType\": \"FABRICX_ORDERER_GROUP\",
\"fabricxOrdererGroup\": {
\"name\": \"netA-orderer-p1\",
\"organizationId\": $ORG1,
\"mspId\": \"Party1MSP\",
\"partyId\": 1,
\"externalIp\": \"127.0.0.1\",
\"version\": \"latest\",
\"consenterType\": \"pbft\",
\"routerPort\": 17010,
\"batcherPort\": 17011,
\"consenterPort\": 17012,
\"assemblerPort\": 17013
}
}"
Repeat for parties 2, 3, 4 with their ports from the table above. Capture each returned node ID (the `id` field in the JSON response) as `$OG1_ID`..`$OG4_ID`; Step 4 needs them.
Validation rules:
- `partyId` must be between 1 and 10 and unique per network.
- `mspId` must match the organization's MSP ID.
- `consenterType`: `"pbft"` (default) or `"raft"`.
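The per-party repetition is easy to script since the ports follow the table. A sketch; `orderer_body` is a hypothetical helper that prints only the port-bearing fields for inspection, not the full request body shown above:

```shell
# orderer_body: print a trimmed orderer-group request body for party p in netA's band.
# (Illustrative only: the real request also needs organizationId, mspId, externalIp, etc.)
orderer_body() {
  p=$1
  base=$((17010 + 20 * (p - 1)))   # party p's router port in netA's band
  printf '{"name":"netA-orderer-p%d","partyId":%d,"routerPort":%d,"batcherPort":%d,"consenterPort":%d,"assemblerPort":%d}\n' \
    "$p" "$p" "$base" "$((base + 1))" "$((base + 2))" "$((base + 3))"
}

orderer_body 3   # routerPort 17050 .. assemblerPort 17053, matching row A/3 of the table
```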
Step 3 — Create committers
One per party. The 6-container unit. Again, containers do not start yet.
curl -s -u "$AUTH" -X POST "$CL/api/v1/nodes" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"netA-committer-p1\",
\"nodeType\": \"FABRICX_COMMITTER\",
\"fabricxCommitter\": {
\"name\": \"netA-committer-p1\",
\"organizationId\": $ORG1,
\"mspId\": \"Party1MSP\",
\"externalIp\": \"127.0.0.1\",
\"version\": \"latest\",
\"sidecarPort\": 17020,
\"coordinatorPort\": 17021,
\"validatorPort\": 17022,
\"verifierPort\": 17023,
\"queryServicePort\": 17024,
\"postgresPort\": 17025,
\"postgresHost\": \"host.docker.internal\",
\"postgresDb\": \"netA_p1\",
\"postgresUser\": \"fabricx\",
\"postgresPassword\": \"fabricx\",
\"channelId\": \"arma\",
\"ordererEndpoints\": [
\"host.docker.internal:17013\",
\"host.docker.internal:17033\",
\"host.docker.internal:17053\",
\"host.docker.internal:17073\"
]
}
}"
Notes:
- `postgresHost: host.docker.internal` plus a distinct `postgresPort` per committer gives each party its own postgres container.
- `postgresDb` must be unique per committer if they share a postgres instance (by default they don't).
- `ordererEndpoints` lists assembler ports, not router ports. The sidecar pulls blocks from assemblers.
- The `channelId` is always `"arma"` as of this writing.
Repeat for parties 2, 3, 4. Capture the returned committer node IDs as `$CM1_ID`..`$CM4_ID` for Step 4.
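Because the assembler port is the fourth port of each party's orderer slot, the `ordererEndpoints` array can be derived rather than typed. A minimal sketch for the netA band:

```shell
# Build netA's ordererEndpoints array: assembler = slot base + 3 for each party.
endpoints=""
for p in 1 2 3 4; do
  port=$((17010 + 20 * (p - 1) + 3))
  endpoints="$endpoints\"host.docker.internal:$port\","
done
ORDERER_ENDPOINTS="[${endpoints%,}]"
echo "$ORDERER_ENDPOINTS"
```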
Step 4 — Create the network
This generates the Arma genesis block from the party list and stores it on the network row. No containers start yet.
curl -s -u "$AUTH" -X POST "$CL/api/v1/networks/fabricx" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"netA\",
\"description\": \"Fabric-X network A (ports 17010-17099)\",
\"config\": {
\"channelName\": \"arma\",
\"localDev\": false,
\"organizations\": [
{\"id\": $ORG1, \"ordererNodeId\": $OG1_ID, \"committerNodeId\": $CM1_ID},
{\"id\": $ORG2, \"ordererNodeId\": $OG2_ID, \"committerNodeId\": $CM2_ID},
{\"id\": $ORG3, \"ordererNodeId\": $OG3_ID, \"committerNodeId\": $CM3_ID},
{\"id\": $ORG4, \"ordererNodeId\": $OG4_ID, \"committerNodeId\": $CM4_ID}
]
}
}" | jq '.id'
Request body fields:
| Field | Meaning |
|---|---|
| `config.channelName` | Must be `"arma"`, the only channel ID Fabric-X supports today. |
| `config.localDev` | Set to `true` on macOS/Windows with Docker Desktop. See the local development note. Defaults to `false`. |
| `config.organizations[].id` | Organization ID from Step 1. |
| `config.organizations[].ordererNodeId` | Orderer-group node ID from Step 2 (or use `ordererNodeGroupId` for the ADR-0001 path). |
| `config.organizations[].committerNodeId` | Committer node ID from Step 3. Optional. |
Capture the returned network ID as $NETA_ID.
Step 5 — Join every node
This is the step that actually starts the containers. Each join writes the
genesis block into the node's bind mount and calls StartNode.
NODE_IDS=$(curl -s -u "$AUTH" "$CL/api/v1/nodes?platform=FABRICX" \
| jq -r '.items[] | select(.name | startswith("netA-")) | .id')
for nid in $NODE_IDS; do
echo "Joining node $nid..."
curl -s -u "$AUTH" --max-time 240 \
-X POST "$CL/api/v1/networks/fabricx/$NETA_ID/nodes/$nid/join" \
| jq '.status'
done
Why --max-time 240: on macOS Docker Desktop the first container start under a
cold bind-mount cache can take 60–120 seconds. After the first component warms the
cache, subsequent joins succeed in seconds. Retry individually on timeout.
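The retry-on-timeout advice can be wrapped in a helper. A minimal sketch; `retry` is plain shell, not part of ChainLaunch, and the `-f` flag in the usage line is suggested so curl exits nonzero on HTTP errors:

```shell
# retry: run a command up to 3 times, sleeping briefly between failed attempts.
retry() {
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge 3 ]; then
      return 1
    fi
    echo "attempt $n failed, retrying..." >&2
    sleep 1
  done
}

# usage, per node:
# retry curl -sf -u "$AUTH" --max-time 240 \
#   -X POST "$CL/api/v1/networks/fabricx/$NETA_ID/nodes/$nid/join"
```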
Verify all 8 nodes are running:
curl -s -u "$AUTH" "$CL/api/v1/nodes?platform=FABRICX" \
| jq '.items[] | {id, name, status}'
All should show "status": "RUNNING".
Step 6 — Create a namespace
curl -s -u "$AUTH" -X POST "$CL/api/v1/networks/fabricx/$NETA_ID/namespaces" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"token\",
\"submitterOrgId\": $ORG1,
\"waitForFinality\": true
}"
Expected response:
{ "id": 17, "status": "committed", "txId": "fa5670f38f45..." }
See Namespaces for the namespace lifecycle and postgres layout.
Running a second network on the same host
Repeat steps 2–6 with the netB port band and name: "netB". All resources are
independent; only ports must be unique.
Reusing organizations across networks
If Party1MSP's org and CAs already exist from network A, just reuse $ORG1..$ORG4
in network B's orderer group, committer, and network requests. You don't need new
orgs.
Container name collisions
Container names derive from the node's name field (netA-orderer-p1-router,
etc.). As long as network A and network B node names differ (netA-* vs netB-*),
Docker runs both sets side-by-side.
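If you script against Docker directly, the expected container names can be derived from the node name. A sketch; the `router` suffix is confirmed by the example above, and extending the same suffix pattern to the other components is an assumption:

```shell
# Derive expected orderer-group container names for one node.
# Suffix-per-component is assumed from the "netA-orderer-p1-router" example.
node="netA-orderer-p1"
names=$(for c in router batcher consenter assembler; do echo "${node}-${c}"; done)
echo "$names"
```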
Bind-mount directory collisions
Bind mounts are keyed by node name under
chaindeploy/data/fabricx-orderers/<node-name>/ and
chaindeploy/data/fabricx-committers/<node-name>/. Distinct node names → distinct
directories.
Tearing down a network
The built-in delete doesn't purge Docker state. You need to clean up manually:
# 1. Delete via API (drops DB rows).
curl -s -u "$AUTH" -X DELETE "$CL/api/v1/networks/fabricx/$NETA_ID"
for nid in $NODE_IDS; do
curl -s -u "$AUTH" -X DELETE "$CL/api/v1/nodes/$nid"
done
# 2. Remove containers.
docker ps -a --filter name=netA- -q | xargs -r docker rm -f
# 3. Remove bind mounts.
rm -rf chaindeploy/data/fabricx-orderers/netA-*
rm -rf chaindeploy/data/fabricx-committers/netA-*
# 4. Remove volumes if any.
docker volume prune -f
Skipping steps 2 or 3 will cause ABORTED_SIGNATURE_INVALID on the next rebuild,
because committers resume from a stale ledger position against a freshly-regenerated
genesis block.
Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| `dial ... context deadline exceeded` on namespace create | Network was created without `localDev: true` (macOS/Windows) and no global `CHAINLAUNCH_FABRICX_LOCAL_DEV` | Recreate the network with `"localDev": true` in the config, or restart the server with the env var set. |
| `invalid mount config ... bind source path does not exist` | Docker Desktop cold cache | Retry with `--max-time 240`; the first component warms the cache. |
| `ABORTED_SIGNATURE_INVALID` | Stale ledger from prior run | Manual teardown (above). |
| TLS handshake failure despite server up | `deployment_config.tlsCaCert` drifted from the org's current CA | Recreate the affected node so it picks up the current CA. |
| `Port already in use` | Another network or service on the same host | Pick a different port band. |
Glossary
| Term | Meaning |
|---|---|
| Party | A participating organization. partyId is 1-indexed, max 10. |
| Orderer group | Router + batcher + consenter + assembler — one per party. |
| Committer | Sidecar + coordinator + validator + verifier + query-service + postgres — one per party. |
| Assembler | The orderer-group component that committers pull blocks from. |
| Router | The orderer-group entrypoint for client broadcasts. |
| Channel | Always "arma" for Fabric-X as of this writing. |
| Namespace | A logical partition within a channel, maps to postgres table ns_<name>. |