Restore from Backup
How to recover your ChainLaunch instance from a backup.
What Gets Restored
- Database — all configuration, network definitions, operational state
- Node configurations — settings for all Fabric and Besu nodes
- Keys and certificates — cryptographic material
- Blockchain data — ledger and state data
Restoring overwrites existing ChainLaunch data. Stop the server before restoring.
Prerequisites
- ChainLaunch CLI installed
- Access to the backup storage (S3, EBS, or VMware)
- Credentials used when the backup was created
- Sufficient disk space
List Available Backups
curl -s http://localhost:8100/api/v1/backups | jq '.[] | {id, status, created_at, provider}'
Only restore from backups with status COMPLETED.
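For example, you can narrow the listing to restorable snapshots by filtering on that status with jq (same endpoint as above):

```shell
# Show only completed backups, newest first
curl -s http://localhost:8100/api/v1/backups \
  | jq '[.[] | select(.status == "COMPLETED")] | sort_by(.created_at) | reverse | .[] | {id, created_at}'
```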
Restore from S3 (Restic)
Step 1: Stop ChainLaunch
sudo systemctl stop chainlaunch
Step 2: Run the Restore
export S3_ENDPOINT="https://s3.amazonaws.com" # or MinIO endpoint
export BUCKET_NAME="my-chainlaunch-backups"
export BUCKET_PATH="production/daily"
export AWS_ACCESS_KEY="AKIA..."
export AWS_SECRET_KEY="..."
export RESTIC_PASSWORD="your-restic-password"
export OUTPUT_PATH="$HOME/chainlaunch-restore"
chainlaunch backup restore \
--s3-endpoint="${S3_ENDPOINT}" \
--bucket-name="${BUCKET_NAME}" \
--bucket-path="${BUCKET_PATH}" \
--aws-access-key="${AWS_ACCESS_KEY}" \
--aws-secret-key="${AWS_SECRET_KEY}" \
--restic-password="${RESTIC_PASSWORD}" \
--output="${OUTPUT_PATH}"
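Before touching the live data directory, it's worth confirming that the restore actually produced files (paths follow the example variables above):

```shell
# The restored tree should contain the database and key material
ls -la "${OUTPUT_PATH}"
du -sh "${OUTPUT_PATH}"
```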
Step 3: Move Restored Data
# Back up current data first
mv ~/.chainlaunch ~/.chainlaunch.old
# Move restored data into place (recreate the directory first; the previous
# command moved it away)
mkdir -p ~/.chainlaunch
mv "${OUTPUT_PATH}/chainlaunch-restore/"* ~/.chainlaunch/
Step 4: Start ChainLaunch
DB_FILE=$(ls -1 ~/.chainlaunch/dbs | head -1)
chainlaunch serve --port=8100 --db="$HOME/.chainlaunch/dbs/${DB_FILE}"
# Or if using systemd:
sudo systemctl start chainlaunch
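Either way, a short polling loop can confirm the server is accepting requests before you move on to verification:

```shell
# Poll the health endpoint for up to 60 seconds
for i in $(seq 1 12); do
  if curl -sf http://localhost:8100/api/v1/health >/dev/null; then
    echo "ChainLaunch is up"
    break
  fi
  echo "waiting... (${i}/12)"
  sleep 5
done
```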
Restore from EBS Snapshot
EBS snapshot restores are handled through AWS:
# 1. Find the snapshot
aws ec2 describe-snapshots --filters "Name=tag:managed_by,Values=chainlaunch" \
--query 'Snapshots[*].{ID:SnapshotId,Date:StartTime,Size:VolumeSize}' --output table
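If you just want the most recent ChainLaunch snapshot, you can let the query sort by start time instead of reading the table (a sketch; `LATEST_SNAPSHOT` is an illustrative variable name):

```shell
# Pick the newest snapshot matching the tag filter
LATEST_SNAPSHOT=$(aws ec2 describe-snapshots \
  --filters "Name=tag:managed_by,Values=chainlaunch" \
  --query 'sort_by(Snapshots, &StartTime)[-1].SnapshotId' \
  --output text)
echo "Latest snapshot: ${LATEST_SNAPSHOT}"
```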
# 2. Create a volume from the snapshot
aws ec2 create-volume \
--snapshot-id snap-0123456789abcdef0 \
--availability-zone us-east-1a \
--volume-type gp3
# 3. Attach the volume to your instance
aws ec2 attach-volume \
--volume-id vol-0123456789abcdef0 \
--instance-id i-0123456789abcdef0 \
--device /dev/xvdf
# 4. Mount and copy data
sudo mkdir -p /mnt/restore
sudo mount /dev/xvdf /mnt/restore
sudo cp -r /mnt/restore/chainlaunch/* ~/.chainlaunch/
# 5. Clean up
sudo umount /mnt/restore
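Once the copy is done and the volume is unmounted, you can also detach the temporary volume and delete it if you no longer need it (volume ID from step 3):

```shell
# 6. Detach and remove the temporary volume
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 wait volume-available --volume-ids vol-0123456789abcdef0
aws ec2 delete-volume --volume-id vol-0123456789abcdef0
```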
Verify the Restore
After restoring, verify everything is working:
# 1. Check ChainLaunch health
curl http://localhost:8100/api/v1/health
# 2. Check nodes are listed
curl http://localhost:8100/api/v1/nodes | jq '.[].name'
# 3. Check networks exist
curl http://localhost:8100/api/v1/networks | jq '.[].name'
# 4. Start blockchain nodes (they don't auto-start after restore)
curl -X POST http://localhost:8100/api/v1/nodes/{nodeId}/start
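With more than a handful of nodes, a loop over the node list is less error-prone than starting each one by hand (assuming each node object exposes an `id` field, as the listing endpoint above suggests):

```shell
# Start every node returned by the API
for node_id in $(curl -s http://localhost:8100/api/v1/nodes | jq -r '.[].id'); do
  echo "Starting node ${node_id}..."
  curl -s -X POST "http://localhost:8100/api/v1/nodes/${node_id}/start"
done
```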
Verify Node Health
After starting nodes, check they're syncing:
# Fabric peer
curl http://localhost:8100/api/v1/nodes/{peerId} | jq '.status'
# Besu validator
curl -X POST http://localhost:8545 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
Restore to a Different Server
To migrate ChainLaunch to a new server:
- Install ChainLaunch on the new server:
  curl -fsSL https://chainlaunch.dev/deploy.sh | bash
- Stop ChainLaunch:
  sudo systemctl stop chainlaunch
- Run the restore command (same as above)
- Start ChainLaunch with the restored database
- Update any DNS records pointing to the old server
Docker container data for blockchain nodes is stored in Docker volumes. If you're migrating to a new server, you'll need to restore both ChainLaunch data and Docker volumes, or let nodes re-sync from peers.
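One way to carry the volumes over is to archive each one through a throwaway container, copy the archives to the new host, and unpack them there. This is a sketch: the `grep chainlaunch` filter assumes your node volumes share that naming, which depends on your configuration:

```shell
# On the old server: archive each node volume
for vol in $(docker volume ls -q | grep chainlaunch); do
  docker run --rm -v "${vol}:/data" -v "$PWD:/backup" alpine \
    tar czf "/backup/${vol}.tar.gz" -C /data .
done

# On the new server: recreate each volume and unpack the archive
for archive in *.tar.gz; do
  vol="${archive%.tar.gz}"
  docker volume create "${vol}"
  docker run --rm -v "${vol}:/data" -v "$PWD:/backup" alpine \
    tar xzf "/backup/${archive}" -C /data
done
```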
Troubleshooting
"Restic password incorrect"
The Restic password must match exactly what was used when creating the backup. Check your backup target configuration:
curl http://localhost:8100/api/v1/backup-targets | jq '.[].name'
"No snapshots found"
Verify the bucket path matches where backups were stored:
# List snapshots in the repository
restic -r "s3:${S3_ENDPOINT}/${BUCKET_NAME}/${BUCKET_PATH}" snapshots
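Note that restic's S3 backend reads credentials and the repository password from environment variables rather than flags, so export these (same values as in the restore step) before running the command above:

```shell
# restic reads these from the environment
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export RESTIC_PASSWORD="your-restic-password"
```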
Nodes won't start after restore
If Docker containers were removed, ChainLaunch will recreate them when you start the nodes. If ports conflict:
# Check for port conflicts
sudo lsof -i :7051 # Fabric peer
sudo lsof -i :8545 # Besu RPC
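If the port is held by a stale container from before the restore, you can locate and stop it with Docker's port filter:

```shell
# Find any container publishing the Fabric peer port
CONTAINER_ID=$(docker ps --filter "publish=7051" -q | head -1)
if [ -n "${CONTAINER_ID}" ]; then
  # Stop it so ChainLaunch can recreate the container cleanly
  docker stop "${CONTAINER_ID}"
fi
```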
Next Steps
- Backups — set up automated backup schedules
- Upgrade Guide — always back up before upgrading
- Troubleshooting — common issues and fixes