# Performance Tuning
Sizing recommendations, benchmarking, and optimization tips for Fabric and Besu networks.
## Server Sizing

### Minimum Requirements by Use Case
| Use Case | CPU | RAM | Disk | Nodes |
|---|---|---|---|---|
| Development / PoC | 2 cores | 4 GB | 20 GB | 1-4 |
| Staging | 4 cores | 8 GB | 50 GB | 4-10 |
| Production (small) | 8 cores | 16 GB | 100 GB | 10-20 |
| Production (large) | 16+ cores | 32+ GB | 500+ GB | 20+ |
### Per-Node Resource Guidelines

#### Fabric
| Component | CPU | RAM | Disk | Notes |
|---|---|---|---|---|
| Peer (LevelDB) | 1 core | 512 MB | 10 GB+ | Grows with ledger size |
| Peer (CouchDB) | 2 cores | 1 GB | 20 GB+ | CouchDB needs more resources |
| Orderer (Raft) | 0.5 core | 256 MB | 5 GB | Low resource, I/O bound |
| CA | 0.25 core | 128 MB | 1 GB | Lightweight |
| CouchDB | 1 core | 1 GB | 10 GB+ | Per peer with rich queries |
#### Besu
| Component | CPU | RAM | Disk | Notes |
|---|---|---|---|---|
| Validator (QBFT) | 2 cores | 4 GB | 20 GB+ | Grows with chain history |
| Bootnode | 0.5 core | 1 GB | 5 GB | Lightweight |
| Fullnode (RPC) | 2 cores | 4 GB | 50 GB+ | Needs fast disk for queries |
## Fabric Optimization

### Peer Performance

**Block size tuning** — larger blocks = higher throughput but higher latency:

```hcl
# In channel config (via Terraform or API)
batch_timeout       = "2s"       # Time to wait before cutting a block
max_message_count   = 500        # Max transactions per block
preferred_max_bytes = 524288     # 512 KB preferred block size
absolute_max_bytes  = 10485760   # 10 MB absolute max
```
| Profile | batch_timeout | max_message_count | Throughput | Latency |
|---|---|---|---|---|
| Low latency | 0.5s | 10 | ~50 TPS | <1s |
| Balanced | 2s | 500 | ~200 TPS | 2-3s |
| High throughput | 5s | 2000 | ~500 TPS | 5-6s |
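The interplay between `batch_timeout` and `max_message_count` can be sanity-checked with a small model of the orderer's block-cutting rule: a block is cut when it fills up or when the timeout expires, whichever comes first. This is a rough sketch that ignores the byte-size limits and endorsement/validation cost:

```python
def block_cut_interval(load_tps: float, batch_timeout_s: float,
                       max_message_count: int) -> float:
    """Seconds between blocks: the orderer cuts a block when it holds
    max_message_count transactions OR when batch_timeout expires,
    whichever comes first (byte limits ignored in this sketch)."""
    if load_tps <= 0:
        return batch_timeout_s  # idle channel: timeout governs
    time_to_fill = max_message_count / load_tps
    return min(batch_timeout_s, time_to_fill)

# Balanced profile (2s / 500 tx) at 200 TPS: blocks never fill,
# so the timeout governs and latency sits near 2s.
print(block_cut_interval(200, 2.0, 500))
# Same profile at 1000 TPS: blocks fill in 0.5s, cut early.
print(block_cut_interval(1000, 2.0, 500))
```

At low load the timeout dominates latency, which is why the low-latency profile shrinks `batch_timeout` rather than `max_message_count`.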
**State database choice:**
| Database | Read speed | Rich queries | Resource usage |
|---|---|---|---|
| LevelDB | Fast | No (key-only) | Low |
| CouchDB | Moderate | Yes (JSON queries) | Higher |
Use LevelDB unless you need rich queries on chaincode state.
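For context on what "rich queries" means here: CouchDB-backed state lets chaincode pass a Mango selector to `GetQueryResult()`, while LevelDB supports only key and key-range lookups. A sketch of such a selector (the asset fields and index names below are hypothetical, not part of any ChainLaunch schema):

```python
import json

# Hypothetical asset documents written by chaincode as JSON state values.
rich_query = {
    "selector": {
        "docType": "asset",
        "owner": "org1",
        "value": {"$gt": 100},   # CouchDB Mango comparison operator
    },
    # Optional: pin the query to a CouchDB index (names are illustrative)
    "use_index": ["_design/indexOwnerDoc", "indexOwner"],
}

# Chaincode would pass this string to GetQueryResult(); with LevelDB
# this call fails because key-only storage cannot evaluate selectors.
query_string = json.dumps(rich_query)
```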
**Gossip tuning** — for large networks (10+ peers):

```bash
# Environment variables on peer containers
CORE_PEER_GOSSIP_DIALTIMEOUT=3s
CORE_PEER_GOSSIP_ALIVEEXPIRATIONTIMEOUT=25s
CORE_PEER_GOSSIP_RECONNECTINTERVAL=25s
CORE_PEER_GOSSIP_ELECTION_LEADERALIVEPERIOD=10s
```
### Orderer Performance

Raft settings for production:

```hcl
tick_interval          = "500ms"    # Base time unit
election_tick          = 10         # 5s election timeout
heartbeat_tick         = 1          # 500ms heartbeat
max_inflight_blocks    = 5          # Pipelining depth
snapshot_interval_size = 20971520   # 20 MB snapshot interval
```
For multi-region deployments, increase tick_interval to 1000ms and election_tick to 20 to account for network latency.
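All Raft timeouts are multiples of `tick_interval`, so the effective values are easy to derive. A quick check of the two configurations above (single-region values from the snippet, multi-region values as suggested):

```python
def raft_timings(tick_interval_ms: int, election_tick: int,
                 heartbeat_tick: int) -> dict:
    """Derive effective Raft timeouts from the tick settings.
    The election timeout must comfortably exceed heartbeat interval
    plus worst-case network latency, or followers trigger spurious
    leader elections."""
    return {
        "heartbeat_ms": tick_interval_ms * heartbeat_tick,
        "election_timeout_ms": tick_interval_ms * election_tick,
    }

print(raft_timings(500, 10, 1))    # single-region: 500ms heartbeat, 5s election
print(raft_timings(1000, 20, 1))   # multi-region: 1s heartbeat, 20s election
```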
## Besu Optimization

### Block Period
Shorter block periods = faster transaction confirmation but more chain growth:
| Block period | Confirmation time | Chain growth/day | Use case |
|---|---|---|---|
| 1s | ~1s | ~2.5 GB | High-frequency trading |
| 2s | ~2s | ~1.2 GB | Interactive apps |
| 5s | ~5s | ~500 MB | General purpose (default) |
| 15s | ~15s | ~170 MB | Low-frequency, storage-limited |
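The growth column is simply blocks-per-day times average block size. A sketch assuming the ~30 KB average block (including empty-block overhead) implied by the table's figures:

```python
AVG_BLOCK_BYTES = 30 * 1024  # assumption: ~30 KB/block, per the table above

def chain_growth_per_day_mb(block_period_s: float) -> float:
    """Estimate daily chain growth for a QBFT network that produces
    a block every block_period_s seconds, even when idle."""
    blocks_per_day = 86_400 / block_period_s
    return blocks_per_day * AVG_BLOCK_BYTES / (1024 ** 2)

print(round(chain_growth_per_day_mb(5)))            # 506 (MB/day, ~"500 MB")
print(round(chain_growth_per_day_mb(1) / 1024, 1))  # 2.5 (GB/day)
```

Note that QBFT mints blocks on a fixed schedule regardless of load, so an idle network still grows at this rate.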
### JVM Tuning

Besu runs on the JVM. For large networks:

```bash
# Set via environment variable on the node
BESU_OPTS="-Xmx4g -Xms4g -XX:+UseG1GC -XX:MaxGCPauseMillis=100"
```
| Network size | Heap (-Xmx) | Notes |
|---|---|---|
| 1-4 validators | 2g | Default |
| 5-10 validators | 4g | Recommended |
| 10+ validators | 8g | Monitor GC pauses |
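The table maps directly to a sizing rule. A hypothetical helper (not a ChainLaunch API) that renders the matching `BESU_OPTS` string:

```python
def besu_opts(validators: int) -> str:
    """Pick a heap size from the validator count per the table above
    and emit matching JVM flags. -Xms == -Xmx avoids heap resizing;
    G1GC with a 100ms pause target keeps GC stalls short enough not
    to delay block production."""
    if validators <= 4:
        heap = "2g"
    elif validators <= 10:
        heap = "4g"
    else:
        heap = "8g"
    return f"-Xmx{heap} -Xms{heap} -XX:+UseG1GC -XX:MaxGCPauseMillis=100"

print(besu_opts(7))  # -Xmx4g -Xms4g -XX:+UseG1GC -XX:MaxGCPauseMillis=100
```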
### RPC Performance

For nodes serving JSON-RPC queries:

```json
{
  "rpc-http-max-connections": 80,
  "rpc-ws-max-connections": 80,
  "rpc-http-max-batch-size": 100
}
```
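Clients must respect the batch cap: a single JSON-RPC batch larger than `rpc-http-max-batch-size` is rejected. A sketch that chunks calls into compliant batch payloads (the method and block range are illustrative):

```python
import json

MAX_BATCH = 100  # must not exceed the server's rpc-http-max-batch-size

def batched_requests(block_numbers):
    """Split eth_getBlockByNumber calls into JSON-RPC 2.0 batch
    payloads no larger than the server's configured batch limit."""
    calls = [
        {"jsonrpc": "2.0", "id": i,
         "method": "eth_getBlockByNumber", "params": [hex(n), False]}
        for i, n in enumerate(block_numbers)
    ]
    return [json.dumps(calls[i:i + MAX_BATCH])
            for i in range(0, len(calls), MAX_BATCH)]

payloads = batched_requests(range(250))  # 250 calls -> 3 POST bodies
```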
## Docker Resource Limits
Set resource limits on node containers to prevent one node from starving others:
```bash
# Via ChainLaunch API when creating a node
curl -X POST http://localhost:8100/api/v1/nodes \
  -H "Content-Type: application/json" \
  -d '{
    "name": "peer0-org1",
    "platform": "FABRIC",
    "nodeType": "FABRIC_PEER",
    "resources": {
      "cpuLimit": "2",
      "memoryLimit": "2Gi"
    }
  }'
```
## Disk Performance
Blockchain nodes are I/O intensive. Use SSDs (NVMe preferred) for:
- Peer ledger storage (Fabric)
- Chain data directory (Besu)
- CouchDB data directory
Benchmark your disk:

```bash
# Sequential write speed (should be > 200 MB/s)
dd if=/dev/zero of=/tmp/test bs=1M count=1024 oflag=direct 2>&1 | tail -1

# Random I/O (should be > 5000 IOPS)
fio --name=randwrite --ioengine=libaio --rw=randwrite --bs=4k \
  --numjobs=1 --size=256M --runtime=10 --direct=1 2>&1 | grep IOPS
```
## Monitoring for Performance
Key metrics to watch:
| Metric | Healthy | Warning | Action |
|---|---|---|---|
| Block commit time | <2s | >5s | Increase resources or reduce block size |
| Endorsement latency | <500ms | >2s | Check peer CPU/memory |
| Peer count | Expected | <expected | Check network/gossip config |
| Disk usage growth | Predictable | Accelerating | Plan storage expansion |
| Container CPU | <70% | >85% | Scale up or add nodes |
| Container memory | <80% | >90% | Increase limits |
See Configure Monitoring for Prometheus + Grafana setup.
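The thresholds in the table translate directly into alerting logic. A minimal sketch (threshold values from the table; the helper name and three-state scheme are illustrative, not a ChainLaunch API):

```python
def classify(cpu_pct: float, mem_pct: float, block_commit_s: float) -> str:
    """Return 'warning' if any metric crosses the table's warning
    threshold, 'healthy' if all metrics are inside the healthy band,
    and 'watch' for the grey zone in between."""
    if cpu_pct > 85 or mem_pct > 90 or block_commit_s > 5:
        return "warning"
    if cpu_pct < 70 and mem_pct < 80 and block_commit_s < 2:
        return "healthy"
    return "watch"

print(classify(60, 70, 1.5))   # healthy
print(classify(90, 70, 1.5))   # warning: CPU above 85%
```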
## Next Steps
- Configure Monitoring to track performance metrics
- Architecture for system design overview
- Troubleshooting for diagnosing performance issues