
2025 Bitcoin Node Performance Tests

Testing full validation sync performance of 5 Bitcoin node implementations.

As I've noted many times in the past, backing your bitcoin wallet with a fully validating node gives you the strongest security and privacy model available to Bitcoin users. Seven years ago I started running an annual comprehensive comparison of various implementations to see how well they perform full blockchain validation. Now it's time to see what has changed since my last tests!

The computer I use as a baseline was high-end off-the-shelf hardware back in 2018. It cost about $2,000 at the time. You'd be able to build it today for around $600. It's worth noting that my test results are by no means indicative of what the fastest possible sync times are for a given implementation: if you sync on a newer computer with components built in the past few years, you should expect it to be significantly faster than my benchmark machine. I'm only continuing to use this machine in order to ensure consistency over the years so that each year's results are directly comparable to previous tests.

Note that no Bitcoin implementation fully validates the entire chain history by default. As a performance improvement, most of them don't validate signatures before a certain point in time. This is considered safe because those blocks and transactions are buried under so much proof of work that it's economically impractical for them to be faked. Creating an alternative chain containing invalid transactions before that point would require so many mining resources that it would fundamentally break the security assumptions upon which the network operates.

For the purposes of these tests I need to control as many variables as possible; some implementations skip signature checking for a longer stretch of the blockchain than others. As such, the tests I'm running do not use the default settings: I change one setting to force the checking of all transaction signatures, and I often tweak other settings in order to make use of the higher number of CPU cores and amount of RAM on my machine.

Also, in order to ensure that the bandwidth of peers on the public network is not a bottleneck and potential source of inconsistency, I sync between two nodes on my local network for the implementations (most of them) that aren't greedy for peer bandwidth.
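
For reference, Linux makes it easy to track this kind of per-process resource usage: memory and cumulative disk I/O counters are exposed under /proc. The following is a rough sketch of a polling loop (not necessarily the exact tooling behind the numbers in this post); download bandwidth can be tracked separately at the interface level via /proc/net/dev or a tool like vnstat.

# Sketch: poll a syncing node's memory and disk I/O once a minute.
# Swap in whichever node binary you're benchmarking.
PID=$(pidof bitcoind)
while kill -0 "$PID" 2>/dev/null; do
    date
    grep VmRSS /proc/"$PID"/status                    # resident memory
    grep -E 'read_bytes|write_bytes' /proc/"$PID"/io  # cumulative disk I/O
    sleep 60
done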

The amount of data in the Bitcoin blockchain relentlessly increases with every block that is added, so it's a never-ending struggle for node implementations to keep optimizing their code in order to prevent the initial sync time for a new node from becoming obscenely long. After all, if it becomes unreasonably expensive or time consuming to start running a new node, more people who are interested in doing so will choose not to, which centralizes the network and weakens its overall robustness.

My previous performance tests synced to block 819,000 while this year's tests sync to block 928,000. During that 2 year period, the total size of the blockchain increased 34%, from 530 GB to 707 GB. As such, we should expect implementations that have made no performance changes to take about 34% longer to sync than 2 years ago.

What's the absolute best case syncing time we could expect if my machine had limitless bandwidth and disk I/O? You have to perform 3.238 billion ECDSA verification operations in order to reach block 928,000, and it takes my machine about 5,130 nanoseconds per operation via libsecp256k1... so it would take about 4.6 hours just to verify all of the signatures.
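
If you want to double check that back-of-the-envelope arithmetic, it's a one-liner:

# 3.238 billion ECDSA verifications at ~5,130 nanoseconds each, expressed in hours
echo "3238000000 * 5130 / 10^9 / 3600" | bc -l
# => ~4.61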

On to the results!


Bitcoin Core 30.0

Bitcoin Core is by far the most well maintained client implementation and it has the most reliable release cadence: a new major version every 6 months or so. Version 30 came out 2 months ago.

The full sync used:

  • 18.1 GB RAM
  • 179 GB disk reads
  • 1.2 TB disk writes
  • 690 GB downloaded

My bitcoind.conf:

connect=<local node IP address>
assumevalid=0
dbcache=24000
disablewallet=1
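
One easy way to keep an eye on the sync: the timestamps on Bitcoin Core's "UpdateTip" lines in debug.log show exactly when each height was reached, and you can also poll progress over RPC with something like:

# Check current height and estimated verification progress every 10 minutes
while true; do
    bitcoin-cli getblockchaininfo | grep -E '"blocks"|"verificationprogress"'
    sleep 600
done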

btcd v0.25.0

btcd just cut a release last month.

The full sync used:

  • 13.4 GB RAM
  • 6.1 TB disk reads
  • 11.4 TB disk writes
  • 690 GB download bandwidth

I noted that it still doesn't really use more than half of the available CPU cycles, so I think there's still some low-hanging fruit to grab. I believe the "utxocachemaxsize" config parameter is new since my last test; I noted that btcd ended up using far more RAM and significantly less disk I/O as a result.

My btcd.conf:

nocheckpoints=1
sigcachemaxsize=1000000
utxocachemaxsize=24000

connect=<local node IP address>

Gocoin 1.11.0

Gocoin also published a new release last month.

Interestingly, it crashed after 666 minutes. At the pace it had been sustaining, gocoin would have completed syncing in about 13 hours if it hadn't crashed.

I made several changes to get maximum performance according to the documentation:

  • Installed secp256k1 library and built gocoin with sipasec.go
  • Set several new config options, as noted below

My gocoin.conf:

LastTrustedBlock: 00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048
AllBalances.AutoLoad:false
UTXOSave.SecondsToTake:0
Stats.NoCounters:true
Memory.CacheOnDisk:false
Memory.GCPercTrshold: -1
Memory.MemoryLimitMB: 24000
Memory.UseGoHeap: true
MaxSyncCacheMB: 6000
ConnectOnly:<local node IP address>

After the initial crash due to running out of memory, I tried re-syncing with several different configuration parameters, such as reducing the memory limits and enabling disk caching, to try to get past that point. But gocoin appears to be a very memory-hungry implementation and it kept crashing.

Gocoin still has the best dashboard of any node implementation.

In the 11 hours and 6 minutes before crashing, gocoin used:

  • 30.5 GB RAM
  • 589 GB download bandwidth
  • 513 GB disk reads
  • 1.2 TB disk writes

Gocoin remains on par with Bitcoin Core's performance, though I'd note that it's maintained by a single developer without peer review, so you should be wary of running it as a production service.

Libbitcoin Node 4.0.0

We've been waiting several years for the 4.0.0 release; although it's not officially published yet, I built from the master branch at commit 63b20361. The first time I tried to build it I got an error, but I quickly determined that the problem was that I was compiling in CPU extensions that my hardware didn't support. So I ended up building libbitcoin with:

CFLAGS="-msse4.1 -mavx2 -O3" CXXFLAGS="-msse4.1 -mavx2 -O3" ./install.sh --prefix=/path/to/libbitcoin/build/ --build-dir=/path/to/libbitcoin/build/temp --build-secp256k1 --build-boost --disable-shared --enable-ndebug --enable-isystem --enable-avx2 --enable-sse41
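
If you're not sure which extensions your own CPU supports before choosing build flags, you can check what Linux reports (sha_ni is how the SHA instruction set shows up):

# Print which of the relevant instruction set extensions this CPU advertises
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -x -E 'sse4_1|avx2|sha_ni'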

Unlike all the other tests, I did NOT sync libbitcoin from a node on my local network, but rather allowed it to use the default network configuration. This is because libbitcoin has massively reworked its syncing logic and is now incredibly greedy when it comes to finding the best peers with the most available bandwidth; it would actually sync slower if I only connected it to a single local peer.

My libbitcoin config:

[bitcoin]
checkpoint = 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f:0
milestone = 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f:0
[node]
maxheight = 928000

The first few times I tried to sync with full defaults, it got to around block 600,000 after 5 hours and then the OS killed the process for running out of memory. I found this odd because my (Ubuntu) OS was not reporting the libbitcoin process as actually using more than a gigabyte of RAM.

After discussion with Eric Voskuil, he explained that the bottleneck is the operating system's dirty page configuration. Windows dynamically adjusts it based on demand, but Linux and macOS do not. Thankfully, Linux is at least configurable. With default settings, Linux starts forcing the process to flush to disk as if it were out of RAM once only 10 to 20% of RAM is in use, thus grinding a 32 GB RAM machine to a halt. Eric noted that they consider 32 GB RAM to be the bare minimum for running libbitcoin.

These are libbitcoin's recommended OS tweaks:

sudo sysctl vm.dirty_ratio=90
sudo sysctl vm.dirty_background_ratio=90
sudo sysctl vm.dirty_expire_centisecs=12000
sudo sysctl vm.dirty_writeback_centisecs=12000
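
These sysctl changes only last until the next reboot; if you want them to persist you can drop them into a sysctl config file (the filename below is arbitrary) and reload:

# Persist the dirty page tweaks across reboots
cat <<'EOF' | sudo tee /etc/sysctl.d/99-libbitcoin.conf
vm.dirty_ratio = 90
vm.dirty_background_ratio = 90
vm.dirty_expire_centisecs = 12000
vm.dirty_writeback_centisecs = 12000
EOF
sudo sysctl --system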

During the course of syncing, Libbitcoin Node used:

  • 112 TB disk reads
  • 2.9 TB disk writes
  • 1.5 GB RAM
  • 710 GB download bandwidth

During the full validation sync my CPU was pegged the entire time, so it was clearly the bottleneck. Eric pointed out that my CPU is a major contributor to the slow full validation performance: because it only has 6 cores / 12 threads, the memory mapped database ends up getting massively thrashed, which is obvious from the insane level of disk reads. If I had a more modern CPU with more cores and support for the SHA instruction set, it wouldn't be doing so much reading from disk and would likely sync 2 to 3 times faster.

I also ended up doing a sync with the default configuration, which only validates signatures after block 900,000, to compare the performance, given that this release is a massive rewrite of libbitcoin's architecture for which we've been waiting many years. It completed in 1 hour 43 minutes and used:

  • 23.5 GB disk reads
  • 2.5 TB disk writes
  • 1.1 GB RAM
  • 707 GB download bandwidth

During the default validation sync, my bandwidth was maxed out the entire time, so it was clearly the bottleneck. Libbitcoin v4 has made many optimizations, including being particularly greedy about finding the best peers. By default it connects to 100 peers (10 times as many as Bitcoin Core), connecting 5 at a time in order to build up connections quickly (on average only 1 in 5 addresses in the pool tends to be usable). It also drops underperforming peers in order to find faster ones, using a standard deviation based algorithm.

Mako

It's been about 20 months since any code changes were committed to Mako.

Mako is an implementation by Chris Jeffrey that's written in C. In order to create a production-optimized build, I built the project via:

cmake . -DCMAKE_BUILD_TYPE=Release

And ran it with:

makod -dbcache=2048 -checkpoints=0 -connect=<LAN node> -maxconnections=1

Mako has only seen a few minor code changes since my last round of tests. CPU usage was only around 50%, presumably because it only sees physical cores rather than logical cores. I did note that the dbcache option is new, though I found it odd that the maximum dbcache is 2 GB.
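
You can see the physical versus logical core split that Mako appears to be keying off of with a quick check on Linux:

# Logical CPUs (threads) versus physical cores
nproc                                                        # 12 on my machine
lscpu | grep -E 'Core\(s\) per socket|Thread\(s\) per core'  # 6 cores, 2 threads per core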

During the course of syncing, Mako used:

  • 2.4 GB RAM
  • 113 MB disk reads
  • 7.3 TB disk writes
  • 690 GB download bandwidth

Untested Client Implementations

There are other Bitcoin full node clients available, but I did not test them for various reasons.

  • Bcoin - unmaintained for 4+ years.
  • Bitcoin Knots - tested this in previous years but there's no perceptible performance difference from Bitcoin Core, and it's usually 1 or 2 releases behind Bitcoin Core.
  • Blockcore - I tested it for several years but it kept crashing; it seems to be focused on being a Stratis altcoin implementation.
  • Floresta - I tried testing this and it seems to have the configuration options to force a full historical validation, but for some reason it will only connect to peers that support utreexo even if you disable utreexo syncing.
  • Parity Bitcoin - unmaintained for 4+ years.
  • Zebra BTC - unmaintained for 4+ years.

Performance Rankings

  1. Bitcoin Core 30.0: 12 hours, 7 minutes
  2. Gocoin 1.11.0: 13 hours, 0 minutes (projected)
  3. Libbitcoin Node 4.0.0: 20 hours, 51 minutes
  4. Mako 41ef1040: 1 day, 18 hours, 4 minutes
  5. BTCD 0.25.0: 3 days, 11 hours, 28 minutes

Delta vs Last Round of Tests

Remember that the total size of the blockchain has grown by 34% since my last round of tests, thus we would expect that a node with no new performance improvements or bottlenecks should take ~34% longer to sync.

  1. Libbitcoin Node 4.0.0: -5 days, 19 hours, 45 minutes (87% shorter)
  2. BTCD 0.25.0: +3 hours, 32 minutes (4.4% longer)
  3. Bitcoin Core 30.0: +3 hours, 29 minutes (40.3% longer)
  4. Gocoin 1.11.0: +4 hours, 18 minutes (49.4% longer)
  5. Mako 41ef1040: +14 hours, 18 minutes (51.5% longer)

As we can see, most of the clients took more than the expected 34% longer to sync. However, 2 clients implemented massive performance improvements to effectively negate the additional data processing requirements as the blockchain grew over the past 2 years.
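
To put a number on that, take Bitcoin Core as an example: last round it synced in 8 hours 38 minutes (this round's 12:07 minus the 3:29 delta). If it had slowed down exactly in line with the 34% data growth, we'd expect a sync time of roughly:

# Previous sync of 518 minutes (8h38m), scaled by the 34% blockchain growth, in hours
echo "518 * 1.34 / 60" | bc -l
# => ~11.57 hours, versus the 12 hours 7 minutes actually observed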


Exact Comparisons Are Difficult

While I ran each implementation on the same hardware and (with the exception of Libbitcoin Node) synced against a local network peer to keep those variables as consistent as possible, there are other factors that come into play.

  1. Not all implementations have the same type or effective amount of caching; for example, Mako only supports up to 2 GB of cache, while Libbitcoin Node doesn't do explicit caching and instead relies upon the OS to properly manage caching for the memory mapped database.
  2. Not all nodes perform the same indexing functions. For example, Libbitcoin Node always indexes all transactions by hash; it's inherent to the database structure. Thus its full node sync is more properly comparable to Bitcoin Core with the transaction indexing option enabled.
  3. Your mileage may vary due to any number of other variables such as operating system and file system performance. This became much more apparent with Libbitcoin Node this year, as I was able to make massive performance gains by tweaking a few operating system memory management configurations.

Conclusion

Given that the strongest security model a user can obtain in a public permissionless crypto asset network involves fully validating the entire history themselves, I think it's important that we keep track of the resources required to do so.

We know that due to the nature of blockchains, the amount of data that needs to be validated for a new node that is syncing from scratch will relentlessly continue to increase over time. The tests I run are on the same hardware each year, but on the bright side we do know that hardware performance per dollar will also continue to increase each year. In other words, just because some implementations take 50% longer to sync than my previous tests does not mean that running a node is 50% more expensive.

It's important that we ensure the resource requirements for syncing a node do not outpace the hardware performance that is available at a reasonable cost. If they do, then larger and larger swaths of the populace will be priced out of self-sovereignty in these systems.