
Consensus Clients

| Client     | Version | Date     | DB Size  | RAM         | Notes |
| ---------- | ------- | -------- | -------- | ----------- | ----- |
| Teku       | 23.12.1 | Jan 2024 | ~130 GiB | ~10 GiB     |       |
| Lighthouse | 4.5.0   | Jan 2024 | ~130 GiB | ~5 GiB      |       |
| Nimbus     | 24.1.1  | Jan 2024 | ~130 GiB | ~2 to 3 GiB |       |
| Prysm      | 4.1.1   | Jan 2024 | ~130 GiB | ~5 GiB      |       |
| Lodestar   | 1.13.0  | Jan 2024 | ~130 GiB | ~8 GiB      |       |

Notes on disk usage

  • When disk usage grows, you can resync the client from a checkpoint in minutes to bring it back down (see the sketch below for Lighthouse). Auto-pruning is in the works for most (all?) clients as of early 2024, but not yet released.
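As a minimal sketch of such a checkpoint resync, assuming Lighthouse as the consensus client: wipe the beacon database and sync again from a checkpoint provider. The URL, execution endpoint and JWT path below are placeholders; other clients have equivalent flags, so check your client's documentation.

```bash
# Sketch only: purge the beacon DB and checkpoint sync from scratch with Lighthouse.
# Replace the checkpoint URL, execution endpoint and JWT path with your own values.
lighthouse bn \
  --network mainnet \
  --purge-db \
  --checkpoint-sync-url https://checkpoint-provider.example.org \
  --execution-endpoint http://localhost:8551 \
  --execution-jwt /path/to/jwt.hex
```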

Execution clients

For reference, here are disk, RAM and CPU requirements, as well as mainnet initial synchronization times, for different Ethereum execution clients.

Disk, RAM, CPU requirements

SSD, RAM and CPU use is after initial sync, when keeping up with head. 100% CPU is one core.

Please pay attention to the Version and Date. These are snapshots in time of client behavior. Initial state size increases over time, and execution clients are always working on improving their storage engines.

| Client     | Version         | Date     | DB Size  | DB Growth         | RAM       | Notes |
| ---------- | --------------- | -------- | -------- | ----------------- | --------- | ----- |
| Geth       | 1.13.8          | Jan 2024 | ~1.1 TiB | ~7-8 GiB / week   | ~8 GiB    | With PBSS |
| Nethermind | 1.25.0          | Jan 2024 | ~1.1 TiB | ~25-30 GiB / week | ~7 GiB    | Can automatically prune online at ~350 GiB free |
| Besu       | v23.10.3-hotfix | Jan 2024 | ~1.1 TiB | ~7-8 GiB / week   | ~10 GiB   | With Bonsai and trie log limit |
| Reth       | alpha.13        | Jan 2024 | ~1.1 TiB | ~3.5 GiB / week   | ~9 GiB    | Discards all logs except the deposit contract's, and so grows more slowly |
| Erigon     | 2.56.1          | Jan 2024 | ~1.7 TiB | ~7-8 GiB / week   | See Notes | Erigon has the OS use all available RAM as a DB cache during post-sync operation, but this RAM is free to be used by other programs as needed. During sync, it may run out of memory on machines with less than 32 GiB |

Notes on disk usage

  • Geth - continuously prunes when synced, with PBSS
  • Besu - can continuously prune its trie log, and continuously prunes state with Bonsai
  • Nethermind - DB size can be reduced by an online prune when it has grown too large. Keep an eye on Paprika
  • Erigon - does not compress its DB, leaving that to the filesystem
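As a rough illustration of the pruning options mentioned above, here are example invocations for Geth's PBSS and Nethermind's automatic full prune. Flag names and defaults change between releases, so treat these as assumptions and confirm against each client's current documentation.

```bash
# Geth: path-based state scheme (PBSS) prunes state continuously.
# A resync is needed if the existing DB was created with the hash-based scheme.
geth --state.scheme path

# Nethermind: trigger a full online prune when free space on the DB volume
# drops to roughly 350 GiB (threshold value here is an assumption; check the docs).
nethermind \
  --Pruning.Mode Hybrid \
  --Pruning.FullPruningTrigger VolumeFreeSpace \
  --Pruning.FullPruningThresholdMb 358400
```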

Test Systems

IOPS is random read-write IOPS measured by fio with "typical" DB parameters and a 150G test file, with no other processes running.

Specifically, run the fio command shown below, then rm test to get rid of the 150G test file. If the test shows it would take hours to complete, feel free to cut it short once the IOPS display for the test looks steady.
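The full invocation, followed by cleanup of the test file:

```bash
# Random 4k read/write mix, 75% reads, against a 150G test file
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75

# Remove the 150G test file afterwards
rm test
```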

150G was chosen to "break through" any caching stratagems the SSD uses for bursty writes. Execution clients write steadily, and the performance of an SSD under sustained writes is more important than its performance with bursty writes.

Read and write latencies are measured with iostat during Geth sync, as shown below; look at r_await and w_await in the second output block.
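Run this while Geth is syncing:

```bash
# Two samples, 240 seconds apart; the second block reflects steady-state latency
sudo iostat -mdx 240 2
```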

Servers have been configured with noatime and no swap to improve available IOPS.
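As a sketch of that configuration, assuming the client DB lives on /dev/nvme0n1p1 mounted at /var/lib/ethereum (both placeholders): mount with noatime in /etc/fstab and disable swap.

```bash
# Example /etc/fstab entry with noatime (device and mount point are placeholders)
# /dev/nvme0n1p1  /var/lib/ethereum  ext4  defaults,noatime  0  2

# Apply the mount option without rebooting
sudo mount -o remount /var/lib/ethereum

# Turn off swap now; comment out any swap entries in /etc/fstab to keep it off
sudo swapoff -a
```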

| Name                      | RAM    | SSD Size | CPU             | r/w IOPS   | r/w latency | Notes |
| ------------------------- | ------ | -------- | --------------- | ---------- | ----------- | ----- |
| Homebrew Xeon ZFS zvol    | 32 GiB | 1.2 TiB  | Intel Quad      | 3.5k/1k    |             | Intel SATA SSD, 16k recordsize, stripe, xfs; fio with --bs=16k |
| Homebrew Xeon ZFS dataset | 32 GiB | 1.2 TiB  | Intel Quad      | 1.2k/500   |             | Intel SATA SSD, 16k recordsize, stripe, xfs; 16G Optane SLOG |
| Dell R420 w/ HBA          | 32 GiB | 1 TB     | Dual Intel Octo | 35.9k/11k  |             | Xeon E5-2450 |
| Contabo Storage VPS L     | 16 GiB | 1600 GiB | AMD EPYC Hexa   | 3k/1k      |             |       |
| Netcup VPS 3000 G9        | 24 GiB | 600 GiB  | AMD Hexa        | 11.2k/3.7k | 2.25/6 ms   |       |
| Netcup RS 8000 G9.5       | 64 GiB | 2 TB     | AMD EPYC 7702   | 15.6k/5k   | 3.4/1.5 ms  |       |
| OVH Baremetal NVMe        | 32 GiB | 1.9 TB   | Intel Hexa      | 177k/59k   | 0.08/3.5 ms |       |
| AWS io1 w/ 10K IOPS       | 8 GiB  | NA       | Intel Dual      | 7.6k/2.5k  |             | t2.large, could not sync Geth. Note t2 throttles CPU |
| AWS gp3 w/ 16K IOPS       | 16 GiB | NA       | Intel Quad      | 12.2k/4.1k |             | m6i.xlarge |

Initial sync times

Please pay attention to the Version and Date. These are snapshots in time of client behavior.

NB: All execution clients need to download state after getting blocks. If state isn't "in" yet, your sync is not done. This is a heavily disk-IOPS-dependent operation, which is why an HDD cannot be used for a node.

For Nethermind, seeing the "branches" percentage reset to "0.00%" after the state root changes, logged as "Setting sync state root to", is normal and expected. With sufficient IOPS, the node will "catch up" and get in sync.

For Geth, you will see "State heal in progress" after initial sync, which will persist for a few hours if IOPS are low-ish.

State heal should complete in under 4 hours. If it does not, or even goes on for a week or more, you do not have sufficient IOPS for Geth to "catch up" with state.
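One way to check whether Geth considers itself synced is the standard eth_syncing JSON-RPC call; this sketch assumes the HTTP RPC endpoint is enabled on localhost:8545. It should return false once initial sync, including state heal, is done.

```bash
# Returns false when sync is complete, or a progress object while still syncing
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
  http://localhost:8545
```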

Cache size was left at its default in all tests.

| Client     | Version       | Date          | Test System        | Time Taken         | Notes |
| ---------- | ------------- | ------------- | ------------------ | ------------------ | ----- |
| Geth       | 1.13.0        | August 2023   | OVH Baremetal NVMe | ~ 6 hours          |       |
| Nethermind | 1.24.0        | Jan 2024      | OVH Baremetal NVMe | ~ 5 hours          | Ready to attest after ~ 1 hour |
| Besu       | v23.10.4-dev  | December 2023 | OVH Baremetal NVMe | ~ 16 hours         | With X_SNAP sync |
| Erigon     | 2.48.1        | August 2023   | OVH Baremetal NVMe | ~ 9 days           |       |
| Reth       | beta.1        | March 2024    | OVH Baremetal NVMe | ~ 2 days 16 hours  |       |

Getting better IOPS

Ethereum execution layer clients need a decent amount of IOPS. HDD will not be sufficient.

For cloud providers, here are some results for syncing Geth.

  • AWS: gp2 and gp3 with provisioned IOPS have both been tested successfully.
  • Linode block storage: make sure to get NVMe-backed storage.
  • Netcup is sufficient as of late 2021.
  • There are reports that Digital Ocean block storage is too slow, as of late 2021.
  • Strato V-Server is too slow as of late 2021.

Dedicated servers with SATA or NVMe SSDs will always have sufficient IOPS. Do avoid hardware RAID, though; see below. The OVH Advance line is a well-liked dedicated option; Linode, Strato or any other provider will work as well.

For your own hardware, we've seen three causes of low IOPS:

  • DRAM-less or QLC SSDs. Choose a "mainstream" SSD with TLC NAND and a DRAM cache. Enterprise / data center SSDs will always work great; consumer SSDs vary.
  • Overheating of the SSD. Check smartctl -x, as shown below. You want the SSD to be at ~ 40-50 degrees Celsius, so it does not throttle.
  • Hardware RAID with no TRIM support. Flash the controller to HBA mode and use software RAID instead.
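To check drive temperature and health, a quick sketch, assuming an NVMe drive at /dev/nvme0 (adjust the device name for your system):

```bash
# Full SMART/health report; temperature, thermal throttle events and wear
# indicators are all included
sudo smartctl -x /dev/nvme0

# Just the temperature lines
sudo smartctl -x /dev/nvme0 | grep -i temperature
```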