Ethereum Node Requirements
Parity
Dec 2018
 Client / Mode             | Block Number | Disk Space | CLI flags
 --------------------------|--------------|------------|-----------------------------
 parity +light +hardcoded  | 6,850,000    | 14 MB      | --light
 parity +light             | 5,600,000    | 89 MB      | --light --no-hardcoded-sync
 parity +warp -ancient     | 6,850,000    | 29 GB      | --no-ancient-blocks
 parity +warp              | 6,850,000    | 133 GB     | (default)
 parity -warp              | 6,850,000    | 133 GB     | --no-warp
 parity -warp +archive     | 6,850,000    | 1.8 TB     | --pruning archive
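As a concrete illustration, the rows above correspond to invocations like the following (a minimal sketch; base path and other options are left at their defaults):

 parity --light
 parity --light --no-hardcoded-sync
 parity --no-ancient-blocks
 parity --pruning archive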
Geth
There are three ways Geth can sync to the network:

--syncmode full: the client downloads block headers and block bodies and performs full validation from the genesis block (this is a full node).

--syncmode fast: the client downloads block headers and block bodies, but only validates transactions near the head of the chain before switching to full processing (see the answer quoted below for details).

--syncmode light: the client downloads only the current state and requests anything else it needs, such as missing blocks, from full nodes (this is a light node).

You choose the mode by passing --syncmode on the command line, as in the example below. fast is a good default, but if you are short on time or disk space, use light.
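For example (a minimal sketch; the data directory flag is optional and shown only to illustrate where other options would go):

 geth --syncmode fast
 geth --syncmode light --datadir /path/to/data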
https://ethereum.stackexchange.com/questions/65509/2019-specs-for-running-geth-full-node
I'll take my shot. Experts, please correct me.

"Full" sync: gets the block headers and the block bodies, and validates every element from the genesis block.

Fast sync: gets the block headers and the block bodies, but processes no transactions until current block - 64 (*). It then gets a snapshot of the state and continues like a full synchronization.

Light sync: gets only the current state. To verify elements, it needs to ask full (archive) nodes for the corresponding tree leaves.

EDIT: (*) in newer versions of geth the offset is 64 blocks: fsMinFullBlocks = 64 // Number of blocks to retrieve fully even in fast sync

A Geth node with fast sync is around 130 GB (source). However, according to this article, once Geth is done with fast sync, it switches to full (archive) sync. With a Parity archive node approaching 2 TB (source), you can expect at least that much disk space (SSD with high I/O). Running a stable node is a challenge, so you may want to look into a service like QuikNode (cloud node-as-a-service), which offers archive nodes with full chain data since the genesis block.
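Once a node is running, one way to check how far along the sync is (a sketch; the IPC path depends on your data directory) is Geth's built-in JavaScript console:

 geth attach ~/.ethereum/geth.ipc
 > eth.syncing   // returns false when fully synced, otherwise an object
                 // with fields such as currentBlock and highestBlock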