
Create a Fabric-X Network

This guide walks you through creating a Fabric-X network from scratch using the ChainLaunch API. It mirrors what the quickstart wizard does automatically, but gives you full control over organizations, port assignments, and multi-network layouts.

Use this guide when:

  • You need to pin ports (multiple Fabric-X networks on one host).
  • You want a party count other than 4.
  • You're integrating with existing organizations.
  • You're provisioning from CI, a script, or Terraform.

Prerequisites

  • ChainLaunch server running (default: http://localhost:8100).
  • Docker Desktop or Docker Engine running and reachable.
  • curl and jq installed.
  • Admin credentials.

macOS / Windows — local development mode

If you're running ChainLaunch on macOS or Windows with Docker Desktop, set localDev: true when creating the network (see Step 4). This swaps the party external IP for host.docker.internal in the genesis block so containers can reach each other, and routes host-originated dials (namespace creation, explorer) through 127.0.0.1.

Alternatively, set this env var on the ChainLaunch server process to apply the same behavior globally to every Fabric-X network without touching the request body:

export CHAINLAUNCH_FABRICX_LOCAL_DEV=true

Per-network localDev takes precedence; the env var is the global fallback and stays for backward compatibility.

On Linux, leave both unset — the external IP is directly reachable from containers and nothing needs rewriting.

Shell helpers

export CL="http://localhost:8100"
export AUTH="admin:admin123"

Topology reference

A single N-party Fabric-X network needs:

| Resource | Count | Containers per unit | Total (N=4) |
| --- | --- | --- | --- |
| Organization + signing CA + TLS CA | N | | 4 orgs |
| Orderer group | N | router, batcher, consenter, assembler | 16 |
| Committer | N | sidecar, coordinator, validator, verifier, query-service, postgres | 24 |
| Total containers | | | 40 |

Maximum supported partyId per network is 10.
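
Once every node has joined (Step 5), you can sanity-check the container count against this table. A sketch, assuming the netA- container-name prefix used throughout this guide:

docker ps --filter name=netA- --format '{{.Names}}' | wc -l   # expect 40 for N=4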

Port allocation strategy

Each orderer group and committer exposes its components on the host. You have two options:

Auto-allocate (single network)

Omit port fields from the request body (or send 0). ChainLaunch picks from its free-port pool. Fine for a single Fabric-X network per host.
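
The request shape is the same as in Step 2, just without the port fields. A sketch for party 1 ($ORG1 is the organization ID captured in Step 1):

curl -s -u "$AUTH" -X POST "$CL/api/v1/nodes" \
  -H "Content-Type: application/json" \
  -d "{
    \"name\": \"netA-orderer-p1\",
    \"nodeType\": \"FABRICX_ORDERER_GROUP\",
    \"fabricxOrdererGroup\": {
      \"name\": \"netA-orderer-p1\",
      \"organizationId\": $ORG1,
      \"mspId\": \"Party1MSP\",
      \"partyId\": 1,
      \"externalIp\": \"127.0.0.1\",
      \"version\": \"latest\",
      \"consenterType\": \"pbft\"
    }
  }"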

Pin ports (multi-network or production)

Reserve a 100-port band per network, with a 20-port slot per party inside. Example scheme for two networks, 4 parties each:

| Network | Party | Router | Batcher | Consenter | Assembler | Sidecar | Coord. | Validator | Verifier | Query | Postgres |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A | 1 | 17010 | 17011 | 17012 | 17013 | 17020 | 17021 | 17022 | 17023 | 17024 | 17025 |
| A | 2 | 17030 | 17031 | 17032 | 17033 | 17040 | 17041 | 17042 | 17043 | 17044 | 17045 |
| A | 3 | 17050 | 17051 | 17052 | 17053 | 17060 | 17061 | 17062 | 17063 | 17064 | 17065 |
| A | 4 | 17070 | 17071 | 17072 | 17073 | 17080 | 17081 | 17082 | 17083 | 17084 | 17085 |
| B | 1 | 17110 | 17111 | 17112 | 17113 | 17120 | 17121 | 17122 | 17123 | 17124 | 17125 |
| B | 2 | 17130 | 17131 | 17132 | 17133 | 17140 | 17141 | 17142 | 17143 | 17144 | 17145 |
| B | 3 | 17150 | 17151 | 17152 | 17153 | 17160 | 17161 | 17162 | 17163 | 17164 | 17165 |
| B | 4 | 17170 | 17171 | 17172 | 17173 | 17180 | 17181 | 17182 | 17183 | 17184 | 17185 |

Rule of thumb:

  • 100-port band per network: 17000 + 100*networkIndex
  • 20-port slot per party within the band: band + 10 + 20*(partyIndex - 1) (the scheme above leaves the first 10 ports of each band unused)
  • First 10 ports of a slot → orderer group; next 10 → committer

The only hard requirement is no two components share a host port.
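
A small shell sketch of this scheme (hypothetical port helper, not part of the API):

# Compute a host port from the scheme above.
#   net:    0-based network index (A=0, B=1, ...)
#   party:  1-based party ID
#   offset: 0=router, 1=batcher, 2=consenter, 3=assembler,
#           10=sidecar, 11=coordinator, 12=validator,
#           13=verifier, 14=query-service, 15=postgres
port() {
  local net=$1 party=$2 offset=$3
  echo $(( 17000 + 100*net + 10 + 20*(party - 1) + offset ))
}

port 0 1 3    # network A, party 1, assembler -> 17013
port 1 4 15   # network B, party 4, postgres  -> 17185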

Reusing organizations

Nothing forces a new org per network. If Party1MSP already exists from network A, you can pass the same organization ID when creating network B. The orderer groups and committers are per-network; the organization and its CAs are not.

Step 1 — Create organizations

Each org gets a signing CA and a TLS CA automatically.

for p in 1 2 3 4; do
  curl -s -u "$AUTH" -X POST "$CL/api/v1/organizations" \
    -H "Content-Type: application/json" \
    -d "{
      \"mspId\": \"Party${p}MSP\",
      \"description\": \"Fabric-X Party ${p}\",
      \"providerId\": 1
    }" | jq '.id, .mspId'
done

Capture each org ID — we'll use $ORG1..$ORG4 below.

export ORG1=$(curl -s -u "$AUTH" "$CL/api/v1/organizations" \
  | jq '.items[] | select(.mspId=="Party1MSP") | .id')
# repeat for ORG2..ORG4
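
Or capture all four in one loop (same endpoint and jq selector as above):

for p in 1 2 3 4; do
  export ORG${p}=$(curl -s -u "$AUTH" "$CL/api/v1/organizations" \
    | jq ".items[] | select(.mspId==\"Party${p}MSP\") | .id")
done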

Step 2 — Create orderer groups

One per party. This is the 4-container unit. Containers do not start yet.

curl -s -u "$AUTH" -X POST "$CL/api/v1/nodes" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"netA-orderer-p1\",
\"nodeType\": \"FABRICX_ORDERER_GROUP\",
\"fabricxOrdererGroup\": {
\"name\": \"netA-orderer-p1\",
\"organizationId\": $ORG1,
\"mspId\": \"Party1MSP\",
\"partyId\": 1,
\"externalIp\": \"127.0.0.1\",
\"version\": \"latest\",
\"consenterType\": \"pbft\",
\"routerPort\": 17010,
\"batcherPort\": 17011,
\"consenterPort\": 17012,
\"assemblerPort\": 17013
}
}"

Repeat for parties 2, 3, 4 with their ports from the table above.

Validation rules:

  • partyId must be between 1 and 10 and unique per network.
  • mspId must match the organization's MSP ID.
  • consenterType: "pbft" (default) or "raft".
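
Step 4 will need each orderer group's node ID ($OG1_ID..$OG4_ID). One way to capture them, assuming the node names used above:

for p in 1 2 3 4; do
  export OG${p}_ID=$(curl -s -u "$AUTH" "$CL/api/v1/nodes?platform=FABRICX" \
    | jq ".items[] | select(.name==\"netA-orderer-p${p}\") | .id")
done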

Step 3 — Create committers

One per party. The 6-container unit. Again, containers do not start yet.

curl -s -u "$AUTH" -X POST "$CL/api/v1/nodes" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"netA-committer-p1\",
\"nodeType\": \"FABRICX_COMMITTER\",
\"fabricxCommitter\": {
\"name\": \"netA-committer-p1\",
\"organizationId\": $ORG1,
\"mspId\": \"Party1MSP\",
\"externalIp\": \"127.0.0.1\",
\"version\": \"latest\",
\"sidecarPort\": 17020,
\"coordinatorPort\": 17021,
\"validatorPort\": 17022,
\"verifierPort\": 17023,
\"queryServicePort\": 17024,
\"postgresPort\": 17025,
\"postgresHost\": \"host.docker.internal\",
\"postgresDb\": \"netA_p1\",
\"postgresUser\": \"fabricx\",
\"postgresPassword\": \"fabricx\",
\"channelId\": \"arma\",
\"ordererEndpoints\": [
\"host.docker.internal:17013\",
\"host.docker.internal:17033\",
\"host.docker.internal:17053\",
\"host.docker.internal:17073\"
]
}
}"

Notes:

  • postgresHost: host.docker.internal plus a distinct postgresPort per committer gives each party its own postgres container.
  • postgresDb must be unique per committer if they share a postgres instance (by default they don't).
  • ordererEndpoints lists assembler ports, not router ports. The sidecar pulls blocks from assemblers.
  • The channelId is always "arma" as of this writing.

Repeat for parties 2, 3, 4.
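
Capture the committer node IDs ($CM1_ID..$CM4_ID) the same way as the orderer-group IDs:

for p in 1 2 3 4; do
  export CM${p}_ID=$(curl -s -u "$AUTH" "$CL/api/v1/nodes?platform=FABRICX" \
    | jq ".items[] | select(.name==\"netA-committer-p${p}\") | .id")
done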

Step 4 — Create the network

This generates the Arma genesis block from the party list and stores it on the network row. No containers start yet.

curl -s -u "$AUTH" -X POST "$CL/api/v1/networks/fabricx" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"netA\",
\"description\": \"Fabric-X network A (ports 17010-17099)\",
\"config\": {
\"channelName\": \"arma\",
\"localDev\": false,
\"organizations\": [
{\"id\": $ORG1, \"ordererNodeId\": $OG1_ID, \"committerNodeId\": $CM1_ID},
{\"id\": $ORG2, \"ordererNodeId\": $OG2_ID, \"committerNodeId\": $CM2_ID},
{\"id\": $ORG3, \"ordererNodeId\": $OG3_ID, \"committerNodeId\": $CM3_ID},
{\"id\": $ORG4, \"ordererNodeId\": $OG4_ID, \"committerNodeId\": $CM4_ID}
]
}
}" | jq '.id'

Request body fields:

| Field | Meaning |
| --- | --- |
| config.channelName | Must be "arma" — the only channel ID Fabric-X supports today. |
| config.localDev | Set to true on macOS/Windows with Docker Desktop. See the local development note. Defaults to false. |
| config.organizations[].id | Organization ID from Step 1. |
| config.organizations[].ordererNodeId | Orderer-group node ID from Step 2 (or use ordererNodeGroupId for the ADR-0001 path). |
| config.organizations[].committerNodeId | Committer node ID from Step 3. Optional. |

Capture the returned network ID as $NETA_ID.
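
In scripts, wrap the create call in command substitution so $NETA_ID is available to later steps. A sketch with the Step 4 request body saved to a local file (netA.json is a hypothetical name):

export NETA_ID=$(curl -s -u "$AUTH" -X POST "$CL/api/v1/networks/fabricx" \
  -H "Content-Type: application/json" \
  -d @netA.json \
  | jq '.id')
echo "Network ID: $NETA_ID"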

Step 5 — Join every node

This is the step that actually starts the containers. Each join writes the genesis block into the node's bind mount and calls StartNode.

NODE_IDS=$(curl -s -u "$AUTH" "$CL/api/v1/nodes?platform=FABRICX" \
  | jq -r '.items[] | select(.name | startswith("netA-")) | .id')

for nid in $NODE_IDS; do
  echo "Joining node $nid..."
  curl -s -u "$AUTH" --max-time 240 \
    -X POST "$CL/api/v1/networks/fabricx/$NETA_ID/nodes/$nid/join" \
    | jq '.status'
done

Why --max-time 240: on macOS Docker Desktop the first container start under a cold bind-mount cache can take 60–120 seconds. After the first component warms the cache, subsequent joins succeed in seconds. Retry individually on timeout.
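
For unattended runs, a retry sketch (join_node is a hypothetical helper; three attempts per node):

join_node() {
  local nid=$1 attempt
  for attempt in 1 2 3; do
    # -f makes curl exit non-zero on HTTP errors as well as timeouts.
    curl -sf -u "$AUTH" --max-time 240 \
      -X POST "$CL/api/v1/networks/fabricx/$NETA_ID/nodes/$nid/join" \
      && return 0
    echo "join of node $nid failed (attempt $attempt), retrying..." >&2
  done
  return 1
}

for nid in $NODE_IDS; do join_node "$nid" || exit 1; done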

Verify all 8 nodes are running:

curl -s -u "$AUTH" "$CL/api/v1/nodes?platform=FABRICX" \
| jq '.items[] | {id, name, status}'

All should show "status": "RUNNING".
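
A scripted equivalent polls the same endpoint until all eight netA nodes report RUNNING:

until curl -s -u "$AUTH" "$CL/api/v1/nodes?platform=FABRICX" \
    | jq -e '[.items[] | select(.name | startswith("netA-")) | .status]
             | length == 8 and all(. == "RUNNING")' >/dev/null; do
  echo "waiting for nodes..."
  sleep 5
done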

Step 6 — Create a namespace

curl -s -u "$AUTH" -X POST "$CL/api/v1/networks/fabricx/$NETA_ID/namespaces" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"token\",
\"submitterOrgId\": $ORG1,
\"waitForFinality\": true
}"

Expected response:

{ "id": 17, "status": "committed", "txId": "fa5670f38f45..." }

See Namespaces for the namespace lifecycle and postgres layout.

Running a second network on the same host

Repeat steps 2–6 with the netB port band and name: "netB". All resources are independent; only ports must be unique.

Reusing organizations across networks

If Party1MSP's org and CAs already exist from network A, just reuse $ORG1..$ORG4 in network B's orderer group, committer, and network requests. You don't need new orgs.

Container name collisions

Container names derive from the node's name field (netA-orderer-p1-router, etc.). As long as network A and network B node names differ (netA-* vs netB-*), Docker runs both sets side-by-side.

Bind-mount directory collisions

Bind mounts are keyed by node name under chaindeploy/data/fabricx-orderers/<node-name>/ and chaindeploy/data/fabricx-committers/<node-name>/. Distinct node names → distinct directories.

Tearing down a network

The built-in delete doesn't purge Docker state. You need to clean up manually:

# 1. Delete via API (drops DB rows).
curl -s -u "$AUTH" -X DELETE "$CL/api/v1/networks/fabricx/$NETA_ID"
for nid in $NODE_IDS; do
  curl -s -u "$AUTH" -X DELETE "$CL/api/v1/nodes/$nid"
done

# 2. Remove containers.
docker ps -a --filter name=netA- -q | xargs -r docker rm -f

# 3. Remove bind mounts.
rm -rf chaindeploy/data/fabricx-orderers/netA-*
rm -rf chaindeploy/data/fabricx-committers/netA-*

# 4. Remove dangling volumes, if any (note: this prunes all unused volumes on the host).
docker volume prune -f

Skipping teardown steps 2 or 3 will cause ABORTED_SIGNATURE_INVALID on the next rebuild, because committers resume from a stale ledger position against a freshly regenerated genesis block.

Troubleshooting

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| dial ... context deadline exceeded on namespace create | Network was created without localDev: true (macOS/Windows) and no global CHAINLAUNCH_FABRICX_LOCAL_DEV | Recreate the network with "localDev": true in the config, or restart the server with the env var set. |
| invalid mount config ... bind source path does not exist | Docker Desktop cold cache | Retry with --max-time 240; the first component warms the cache. |
| ABORTED_SIGNATURE_INVALID | Stale ledger from a prior run | Manual teardown (above). |
| TLS handshake failure despite the server being up | deployment_config.tlsCaCert drifted from the org's current CA | Recreate the affected node so it picks up the current CA. |
| Port already in use | Another network or service on the same host | Pick a different port band. |

Glossary

| Term | Meaning |
| --- | --- |
| Party | A participating organization. partyId is 1-indexed, max 10. |
| Orderer group | Router + batcher + consenter + assembler — one per party. |
| Committer | Sidecar + coordinator + validator + verifier + query-service + postgres — one per party. |
| Assembler | The orderer-group component that committers pull blocks from. |
| Router | The orderer-group entrypoint for client broadcasts. |
| Channel | Always "arma" for Fabric-X as of this writing. |
| Namespace | A logical partition within a channel; maps to postgres table ns_<name>. |

See also