
Performance Tuning

Sizing recommendations, benchmarking, and optimization tips for Fabric and Besu networks.

Server Sizing

Minimum Requirements by Use Case

| Use case | CPU | RAM | Disk | Nodes |
|---|---|---|---|---|
| Development / PoC | 2 cores | 4 GB | 20 GB | 1-4 |
| Staging | 4 cores | 8 GB | 50 GB | 4-10 |
| Production (small) | 8 cores | 16 GB | 100 GB | 10-20 |
| Production (large) | 16+ cores | 32+ GB | 500+ GB | 20+ |

Per-Node Resource Guidelines

Fabric

| Component | CPU | RAM | Disk | Notes |
|---|---|---|---|---|
| Peer (LevelDB) | 1 core | 512 MB | 10 GB+ | Grows with ledger size |
| Peer (CouchDB) | 2 cores | 1 GB | 20 GB+ | CouchDB needs more resources |
| Orderer (Raft) | 0.5 core | 256 MB | 5 GB | Low resource, I/O bound |
| CA | 0.25 core | 128 MB | 1 GB | Lightweight |
| CouchDB | 1 core | 1 GB | 10 GB+ | Per peer with rich queries |

Besu

| Component | CPU | RAM | Disk | Notes |
|---|---|---|---|---|
| Validator (QBFT) | 2 cores | 4 GB | 20 GB+ | Grows with chain history |
| Bootnode | 0.5 core | 1 GB | 5 GB | Lightweight |
| Fullnode (RPC) | 2 cores | 4 GB | 50 GB+ | Needs fast disk for queries |

Fabric Optimization

Peer Performance

Block size tuning — larger blocks = higher throughput but higher latency:

# In channel config (via Terraform or API)
batch_timeout       = "2s"       # Time to wait before cutting a block
max_message_count   = 500        # Max transactions per block
preferred_max_bytes = 524288     # 512 KB preferred block size
absolute_max_bytes  = 10485760   # 10 MB absolute max
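A block is cut as soon as any one of these limits is hit: the timeout expires, the message count is reached, or the preferred byte size is exceeded. A rough throughput ceiling therefore follows from max_message_count divided by batch_timeout, which this sketch computes for the balanced profile:

```shell
# Rough throughput ceiling: at most one block per batch_timeout,
# carrying at most max_message_count transactions.
batch_timeout_s=2
max_message_count=500

ceiling=$((max_message_count / batch_timeout_s))
echo "theoretical ceiling: ${ceiling} TPS"
```

The 2s/500 profile caps out at 250 TPS in theory, which is consistent with the ~200 TPS observed figure in the table below once endorsement and validation overhead are accounted for.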
| Profile | batch_timeout | max_message_count | Throughput | Latency |
|---|---|---|---|---|
| Low latency | 0.5s | 10 | ~50 TPS | <1s |
| Balanced | 2s | 500 | ~200 TPS | 2-3s |
| High throughput | 5s | 2000 | ~500 TPS | 5-6s |

State database choice:

| Database | Read speed | Rich queries | Resource usage |
|---|---|---|---|
| LevelDB | Fast | No (key-only) | Low |
| CouchDB | Moderate | Yes (JSON queries) | Higher |

Use LevelDB unless you need rich queries on chaincode state.
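For context, a rich query is a CouchDB (Mango-style) JSON selector evaluated against chaincode state, which LevelDB cannot serve. A sketch of such a selector, with hypothetical field and index names:

```json
{
  "selector": {
    "docType": "asset",
    "owner": "org1",
    "value": { "$gt": 1000 }
  },
  "use_index": ["_design/indexOwnerDoc", "indexOwner"]
}
```

If queries like this are not part of your chaincode, LevelDB's lower resource footprint wins.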

Gossip tuning — for large networks (10+ peers):

# Environment variables on peer containers
CORE_PEER_GOSSIP_DIALTIMEOUT=3s
CORE_PEER_GOSSIP_ALIVEEXPIRATIONTIMEOUT=25s
CORE_PEER_GOSSIP_RECONNECTINTERVAL=25s
CORE_PEER_GOSSIP_ELECTION_LEADERALIVEPERIOD=10s
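If peers are run via Docker Compose rather than through the API, the same settings go into the service definition; this fragment is a sketch, and the service name and image tag are assumptions:

```yaml
services:
  peer0-org1:
    image: hyperledger/fabric-peer:2.5
    environment:
      - CORE_PEER_GOSSIP_DIALTIMEOUT=3s
      - CORE_PEER_GOSSIP_ALIVEEXPIRATIONTIMEOUT=25s
      - CORE_PEER_GOSSIP_RECONNECTINTERVAL=25s
      - CORE_PEER_GOSSIP_ELECTION_LEADERALIVEPERIOD=10s
```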

Orderer Performance

Raft settings for production:

tick_interval          = "500ms"    # Base time unit
election_tick          = 10         # 5s election timeout
heartbeat_tick         = 1          # 500ms heartbeat
max_inflight_blocks    = 5          # Pipelining depth
snapshot_interval_size = 20971520   # 20 MB snapshot interval
Tip: For multi-region deployments, increase tick_interval to 1000ms and election_tick to 20 (a 20-second election timeout) to account for cross-region network latency.

Besu Optimization

Block Period

Shorter block periods = faster transaction confirmation but more chain growth:

| Block period | Confirmation time | Chain growth/day | Use case |
|---|---|---|---|
| 1s | ~1s | ~2.5 GB | High-frequency trading |
| 2s | ~2s | ~1.2 GB | Interactive apps |
| 5s | ~5s | ~500 MB | General purpose (default) |
| 15s | ~15s | ~170 MB | Low-frequency, storage-limited |
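The growth figures above follow roughly from blocks-per-day times average block size. A sketch for the 2s row, where the ~30 KB average block size is an assumed workload figure, not a Besu default:

```shell
# Estimate daily chain growth from block period and average block size.
block_period_s=2
avg_block_kb=30   # assumed workload average, tune to your traffic

blocks_per_day=$((86400 / block_period_s))
growth_mb_per_day=$((blocks_per_day * avg_block_kb / 1024))
echo "${blocks_per_day} blocks/day, ~${growth_mb_per_day} MB/day"
```

That works out to ~1.2 GB/day, matching the table; rerun with your own measured block sizes to plan disk capacity.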

JVM Tuning

Besu runs on the JVM. For large networks:

# Set via environment variable on the node
BESU_OPTS="-Xmx4g -Xms4g -XX:+UseG1GC -XX:MaxGCPauseMillis=100"

| Network size | Heap (-Xmx) | Notes |
|---|---|---|
| 1-4 validators | 2g | Default |
| 5-10 validators | 4g | Recommended |
| 10+ validators | 8g | Monitor GC pauses |
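The table can be folded into a small helper that derives BESU_OPTS from the validator count; the thresholds follow the table, and the function name is illustrative:

```shell
# Pick a Besu heap size from the validator count (thresholds per table above).
heap_for_validators() {
  if [ "$1" -le 4 ]; then echo "2g"
  elif [ "$1" -le 10 ]; then echo "4g"
  else echo "8g"
  fi
}

heap=$(heap_for_validators 7)
BESU_OPTS="-Xmx${heap} -Xms${heap} -XX:+UseG1GC -XX:MaxGCPauseMillis=100"
echo "$BESU_OPTS"
```

Setting -Xms equal to -Xmx avoids heap resizing pauses on long-running validator nodes.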

RPC Performance

For nodes serving JSON-RPC queries:

{
  "rpc-http-max-connections": 80,
  "rpc-ws-max-connections": 80,
  "rpc-http-max-batch-size": 100
}

Docker Resource Limits

Set resource limits on node containers to prevent one node from starving others:

# Via ChainLaunch API when creating a node
curl -X POST http://localhost:8100/api/v1/nodes \
  -H "Content-Type: application/json" \
  -d '{
    "name": "peer0-org1",
    "platform": "FABRIC",
    "nodeType": "FABRIC_PEER",
    "resources": {
      "cpuLimit": "2",
      "memoryLimit": "2Gi"
    }
  }'

Disk Performance

Blockchain nodes are I/O intensive. Use SSDs (NVMe preferred) for:

  • Peer ledger storage (Fabric)
  • Chain data directory (Besu)
  • CouchDB data directory

Benchmark your disk:

# Sequential write speed (should be > 200 MB/s)
dd if=/dev/zero of=/tmp/test bs=1M count=1024 oflag=direct 2>&1 | tail -1

# Random I/O (should be > 5000 IOPS)
fio --name=randwrite --ioengine=libaio --rw=randwrite --bs=4k \
  --numjobs=1 --size=256M --runtime=10 --direct=1 2>&1 | grep IOPS

Monitoring for Performance

Key metrics to watch:

| Metric | Healthy | Warning | Action |
|---|---|---|---|
| Block commit time | <2s | >5s | Increase resources or reduce block size |
| Endorsement latency | <500ms | >2s | Check peer CPU/memory |
| Peer count | Expected | <expected | Check network/gossip config |
| Disk usage growth | Predictable | Accelerating | Plan storage expansion |
| Container CPU | <70% | >85% | Scale up or add nodes |
| Container memory | <80% | >90% | Increase limits |
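With Prometheus scraping enabled, the first two rows can be watched with queries along these lines; the metric names assume Fabric's built-in operations metrics, so verify them against your peer version:

```promql
# Average block commit time over 5m (healthy: < 2s)
rate(ledger_block_processing_time_sum[5m])
  / rate(ledger_block_processing_time_count[5m])

# Average endorsement latency over 5m (healthy: < 500ms)
rate(endorser_proposal_duration_sum[5m])
  / rate(endorser_proposal_duration_count[5m])
```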

See Configure Monitoring for Prometheus + Grafana setup.

Next Steps